In this episode of Next in Media, I sit down with Kiri Masters, host of the Retail Media Breakfast Club podcast, to explore the biggest shifts happening in retail media advertising. We dive into the recent announcement about ads coming to ChatGPT and what that means for brands trying to meet consumers where they are. Kiri shares her perspective on whether AI-powered shopping will truly disrupt the retail media landscape - and why she's optimistic that LLM-based ads could actually be more relevant and less annoying than traditional formats. We also unpack the Walmart-Google partnership and discuss what it signals about the future of conversational commerce. Beyond the AI conversation, we tackle some of the industry's most pressing questions. Will we see consolidation in retail media networks this year? Can shoppable TV finally gain traction? And what happens when offsite retail media faces competition from platforms with their own transactional data? Kiri brings both historical context - including a fascinating story about Piggly Wiggly's self-service revolution - and forward-looking insights about how brands and retailers need to collaborate differently. Whether you're a marketer navigating this space or just curious about where AI and commerce intersect, this conversation offers a clear-eyed look at what's real, what's hype, and what's coming next.
Topics covered in this episode: GreyNoise IP Check, tprof: a targeting profiler, TOAD is out, Extras, Joke. Watch on YouTube.
About the show: Sponsored by us! Support our work through our courses at Talk Python Training, The Complete pytest Course, and our Patreon supporters.
Connect with the hosts - Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky); Brian: @brianokken@fosstodon.org / @brianokken.bsky.social; Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky). Join us on YouTube at pythonbytes.fm/live to be part of the audience, usually Monday at 11am PT; older video versions are available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list - we'll never share it.
Michael #1: GreyNoise IP Check. GreyNoise watches the internet's background radiation—the constant storm of scanners, bots, and probes hitting every IP address on Earth. Is your computer sending out bot or other bad-actor traffic? What about the myriad of devices and IoT things behind your local IP? Heads up: if your IP has recently changed, it might not be you (false positive).
Brian #2: tprof: a targeting profiler, by Adam Johnson. Intro blog post: Python: introducing tprof, a targeting profiler.
Michael #3: TOAD is out. Toad is a unified experience for AI in the terminal: a front-end for AI tools such as OpenHands, Claude Code, Gemini CLI, and many more. It offers a better TUI experience (e.g. @ for file context uses fuzzy search and dropdowns), better prompt input (mouse, keyboard, even colored code and markdown blocks), and terminals within terminals (for TUI support).
Brian #4: FastAPI adds contribution guidelines around AI usage. Docs commit: Add contribution instructions about LLM generated code and comments and automated tools for PRs. Docs section: Development - Contributing: Automated Code and AI. A great inspiration and example of how to handle this for popular open source projects: "If the human effort put in a PR, e.g. writing LLM prompts, is less than the effort we would need to put to review it, please don't submit the PR." With sections on Closing Automated and AI PRs, Human Effort, Denial of Service, and Use Tools Wisely.
Extras - Brian: Apparently Digg is back and there's a Python community there. Why light-weight websites may one day save your life - Marijke Luttekes. Michael: Blog posts about Talk Python AI Integrations - Announcing Talk Python AI Integrations on Talk Python's blog, and Blocking AI crawlers might be a bad idea on Michael's blog. Already using the compile flag for faster app startup in the containers: RUN --mount=type=cache,target=/root/.cache uv pip install --compile-bytecode --python /venv/bin/python - I think it's speeding startup by about 1s per container. Biggest prompt yet? 72 pages, 11,000.
Joke: A date, from Pat Decker.
We argue that SEO isn't dying; bad SEO is. Fundamentals still drive wins while LLMs change how answers surface, making complete, original content and smart strategy more valuable than ever.
• fundamentals over fads as LLMs shift discovery
• why pages beyond the top ten fuel LLM citations
• customer journey, CRO, PR, and dev fluency as real leverage
• global markets with lower competition and faster output
• enterprise constraints, silos, and when to say no
• partnerships over one-stop shops to deliver depth
• personalization pitfalls in tracking and sane KPI alignment
• future of agents, multimodal search, and offline attention
Guest Contact Information: LinkedIn: linkedin.com/in/kresimir-corluka | Website: canonical.hr | Summit: croatiaseosummit.com
More from EWR and Matthew: Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcast. Free SEO Consultation: www.ewrdigital.com/discovery-call
With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve.
Find more episodes here: youtube.com/@BestSEOPodcast | bestseopodcast.com | bestseopodcast.buzzsprout.com
Follow us on: Facebook: @bestseopodcast | Instagram: @thebestseopodcast | TikTok: @bestseopodcast | LinkedIn: @bestseopodcast
Connect With Matthew Bertram: Website: www.matthewbertram.com | Instagram: @matt_bertram_live | LinkedIn: @mattbertramlive
Powered by: ewrdigital.com
Support the show
Segment 1: Interview with Thyaga Vasudevan - Hybrid by Design: Zero Trust, AI, and the Future of Data Control
AI is reshaping how work gets done, accelerating decision-making and introducing new ways for data to be created, accessed, and shared. As a result, organizations must evolve Zero Trust beyond an access-only model into an inline data governance approach that continuously protects sensitive information wherever it moves. Securing access alone is no longer enough in an AI-driven world. In this episode, we'll unpack why real-time visibility and control over data usage are now essential for safe AI adoption, accurate outcomes, and regulatory compliance. From preventing data leakage to governing how data is used by AI systems, security teams need controls that operate in the moment - across cloud, browser, SaaS, and on-prem environments - without slowing the business. We'll also explore how growing data sovereignty and regulatory pressures are driving renewed interest in hybrid architectures. By combining cloud agility with local control, organizations can keep sensitive data protected, governed, and compliant, regardless of where it resides or how AI is applied. This segment is sponsored by Skyhigh Security. Visit https://securityweekly.com/skyhighsecurity to learn more about them!
Segment 2: Why detection fails
Caleb Sima put together a nice roundup of the struggles in detection engineering that I thought was worth discussing. Amélie Koran also shared some interesting thoughts and experiences.
Segment 3: Weekly Enterprise News
Finally, in the enterprise security news:
- Fundings and acquisitions are going strong
- Can cyber insurance be profitable?
- Some new free tools shared by the community
- RSAC gets a new CEO
- Large-scale enterprise AI initiatives aren't going well
- LLM impacts on exploit development
- AI vulnerabilities
- Global risk reports
- Floppies are still used daily, but not for long?
All that and more, on this episode of Enterprise Security Weekly. Visit https://www.securityweekly.com/esw for all the latest episodes! Show Notes: https://securityweekly.com/esw-443
Tech has looked unstoppable thanks to AI winners—but a huge part of the market is telling a very different story. In this episode, Simon and Dan break down the brutal valuation reset hitting SaaS (Software-as-a-Service) stocks, with many major names down 30–50%+ despite still-solid underlying businesses. They explain the classic “moats” that made SaaS so powerful—switching costs, ecosystems, and data—and why AI agents and LLM-driven automation are now challenging the seat-based pricing model that many software companies depend on. The discussion moves through a rapid-fire list of well-known SaaS names to unpack what’s driving the drawdowns, where the market may be overreacting, and where the risk of disruption is real. Bottom line: some of these stocks may be turning into genuine value opportunities—but the old playbook may no longer apply, and investors need to underwrite what the business looks like 2–5 years from now, not what it used to be.
Tickers mentioned: CSU, CRM, ADBE, NOW, ADSK, INTU, TEAM, WDAY, TWLO, DOCU, ADP
Check out our portfolio by going to Jointci.com | Our Website | Our New YouTube Channel!
Canadian Investor Podcast Network Twitter: @cdn_investing | Simon’s Twitter: @Fiat_Iceberg | Braden’s Twitter: @BradoCapital | Dan’s Twitter: @stocktrades_ca
Want to learn more about Real Estate Investing? Check out the Canadian Real Estate Investor Podcast! Apple Podcasts - The Canadian Real Estate Investor | Spotify - The Canadian Real Estate Investor | Web player - The Canadian Real Estate Investor
Asset Allocation ETFs | BMO Global Asset Management
Sign up for Fiscal.ai for free to get easy access to global stock coverage and powerful AI investing tools. Register for EQ Bank, the seamless digital banking experience with better rates and no nonsense. See omnystudio.com/listener for privacy information.
The questions piled up over the holidays and now it's time to answer them in this, the first Q&A of 2026. This month we touch on topics like the splendor of Gateway 2000's cow boxes, the mystery of the ENIAC, whether a shed qualifies as off-site backup, what the heck volt-amps are (and how calculus is involved), the glory days of multi-user computing, what tech today's kids will be nostalgic for in 20 years, using LLMs for troubleshooting and command line assistance, and more. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, a monthly bonus episode, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
Send us a text
The ground is shifting under our feet as AI moves from answering questions to taking action. We dig into what that transformation really means for leaders: how operating models evolve, where risk compounds, and what it takes to capture speed without inviting chaos. With Dr. Jamila Amimer, CEO of MindSenses Global and a recognized AI strategist, we unpack practical steps to go from pilots to production and build systems that are fast, reliable, and governed.
We start by separating AI families—predictive, generative, and agentic—and why each demands a different approach to design, safety, and measurement. Dr. Amimer explains why spatial AI and the convergence with robotics will redefine context and capability, and how to prepare now without tossing out today's LLM investments. From domain expertise and humans in the loop to controlled knowledge bases and action approvals, we lay out the essential guardrails to minimize hallucinations, manage model drift, and avoid compounding errors at scale.
Then we turn to the human layer. HR data becomes a strategic asset, revealing task flows and handoffs that inform agent orchestration. We talk through preserving meaning and motivation as agents absorb routine work, and how equitable upskilling—analytical thinking, data literacy, exception handling—keeps teams engaged and effective. Accountability and auditability aren't abstract; they're the difference between a clever demo and a trustworthy system your board will support.
If you're ready to move beyond hype and design AI that plans, decides, and acts with confidence, this conversation gives you the operating principles to start today and scale tomorrow. If this resonated, follow the show, share it with a colleague who cares about AI governance, and leave a review so we can reach more leaders building responsibly.
Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com. And don't miss Grant McGaugh's new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/ See you next time on Follow The Brand!
In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities. He also discusses the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.
Timestamps
00:00 Introduction to Data and AI Challenges
03:08 The Evolution of Data Management
05:54 Understanding Data Quality and Metadata
08:57 The Role of AI in Data Cleaning
11:50 Knowledge Management in Large Organizations
14:55 The Future of AI and LLMs
17:59 Economics of AI Implementation
29:14 The Importance of LLMs for Major Tech Companies
32:00 Open Source: Opportunities and Challenges
35:19 The Future of AI Inference and Hardware
43:24 Optimizing Inference: The Next Frontier
49:23 The Commercial Viability of AI Models
Key Insights
1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations. (A minimal code sketch of this layering follows the list below.)
2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.
3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per-token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.
4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).
5. Inference is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.
6. Open Source vs Open Weights Distinction: True open source in AI means access to architecture for debugging and modification, while "open weights" enables fine-tuning and customization.
This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.
7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization, where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively expensive.
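To make the bronze-silver-gold idea from insight 1 concrete, here is a minimal, hypothetical sketch of that medallion-style layering. It is not code from the episode; the column names, sample data, and cleaning rules are invented purely for illustration.

```python
import pandas as pd

# Bronze: raw events exactly as ingested (hypothetical sample data).
bronze = pd.DataFrame({
    "order_id": [1, 2, 2, 3, 4],
    "region": ["us", "US ", None, "eu", "EU"],
    "amount": [120.0, 80.0, 80.0, None, 45.5],
})

# Silver: cleaned and de-duplicated, but still row-level.
silver = (
    bronze
    .drop_duplicates(subset="order_id")          # remove duplicate ingests
    .dropna(subset=["region", "amount"])         # drop rows missing key fields
    .assign(region=lambda df: df["region"].str.strip().str.upper())
)

# Gold: business-ready aggregate for stakeholders (no SQL needed downstream).
gold = silver.groupby("region", as_index=False)["amount"].sum()

print(gold)
```

The bottleneck the episode describes shows up here: downstream stakeholders usually only ever see the gold layer, which is why metadata about what was dropped or transformed on the way there becomes so important.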
A focus on Cerebras and its chips as big as a full wafer, Windows 11 is definitively sluggish, LLMs still know nearly all of their classics in full, and the AI models of the week. Support me on Patreon. Find me on YouTube. Chat with us on Discord.
Models of the week: Mocha, V-JEPA's revenge. Social Reasoner and OpenVoxel. Ministral 3 and AI safety. Un Erdős tres, un pasito pa'lante matemáticas! Stupefix!
LLMs still know their classics… A recap of those LLMs we love. And then, it's the DRAM(a). Money pit: Sam invests in Altman, and OpenAI in Cerebras. Marie Kondo for RAM chips. More power plants in orbit… that move. It's scientifically confirmed: Windows 11 is sluggish. Dilbert is an orphan.
Participants: an episode prepared by Guillaume Poggiaspalla and presented by Guillaume Vendé.
Learn more about TrainerRoad AI: https://www.trainerroad.com/blog/introducing-trainerroad-ai/
Learn more about the updated AI FTP Detection: https://www.trainerroad.com/blog/why-is-ai-ftp-detecting-an-ftp-change/
// SHARE AND RATE THE PODCAST!
iTunes: https://trainerroad.cc/apple2
Spotify: https://trainerroad.cc/spotify2
Google Podcasts: https://trainerroad.cc/google
// TOPICS COVERED
(00:00:00) Welcome & Why TrainerRoad AI Is a Major Update
(00:02:00) How TrainerRoad AI Has Evolved Over Time
(00:04:00) Why TrainerRoad AI Isn't Just a Chatbot or LLM
(00:06:30) How TrainerRoad AI Simulates and Selects Workouts
(00:10:30) How TrainerRoad AI Replaces Static Training Plans
(00:15:00) How TrainerRoad AI Reduces Failed Workouts and Burnout
(00:20:40) How TrainerRoad AI Adjusts for Fatigue and Big Rides
(00:32:00) Why Most AI Training Tools Don't Validate Workouts
(00:36:10) TrainerRoad AI Training Forecasts and Simulations
(00:49:40) TrainerRoad AI Workout Alternatives Explained
(01:03:20) Why Long Rides Can Undermine Progress
(01:17:20) Conservative vs Aggressive Training in TrainerRoad AI
(01:37:30) How TrainerRoad AI Changes How Athletes Get Faster
In this episode, Nate and Coach Jonathan explain all the details behind TrainerRoad AI, the biggest evolution yet in TrainerRoad's training system, walking through how the new AI-driven approach goes far beyond static plans to dynamically simulate, predict, and personalize every workout on your calendar. They explain how years of performance data, workout feedback, power and heart rate, and progression history now power a system that actively chooses the right workout for the day, reduces burnout, cuts down workouts that are too hard or too easy, and helps athletes recover faster from missed sessions or failures. The conversation dives into how simulations work behind the scenes, why long rides and “hidden fatigue” can quietly sabotage progress, and how features like AI Predicted Difficulty, AI Training Simulation, Dynamic Duration, and Training Approach sliders give athletes confidence that every session is worth their time. The result is training that feels consistently “just right,” adapts to real life, and helps athletes get faster with less wasted effort and fewer mistakes along the way.
// RESOURCES MENTIONED
- Sign up for TrainerRoad! https://trainerroad.cc/GetFaster
- Follow TrainerRoad on Instagram
Jake and Michael return for 2026 and talk about their evolving experiences with AI; what it's good at, what it's not, and how it's changing the way they work.
Show links:
OpenAI / ChatGPT
Anthropic / Claude (Sonnet & Opus)
OpenCode (multi-provider AI coding interface)
Home Assistant
Zigbee temperature sensors
GitHub Copilot
Ollama (local LLM runner)
NVIDIA DGX Spark
Amp Code
MiniMax M2.1 model
Software for an audience of one
Arbor
OpenCode Desktop has workspaces support
Opus 4.5 is going to change everything
In this episode of 'One in Ten,' host Teresa Huizar speaks with Liisa Järvilehto, a psychologist and Ph.D. candidate at the University of Helsinki, about the positive uses of AI in child abuse investigations and forensic interviews. The conversation addresses the common misuse of AI and explores its potential in assisting professionals by proposing hypotheses, generating question sets, and more. The discussion delves into the application of large language models (LLMs) in generating alternative hypotheses and the nuances of using these tools to avoid confirmation bias in interviews. Huizar and Järvilehto also touch on the practical implications for current practitioners and future research directions.
Time Stamps:
00:00 Introduction to the Episode
00:22 Exploring AI in Child Abuse Investigations
01:06 Introducing Liisa Järvilehto and Her Research
01:48 Challenges in Child Abuse Investigations
04:24 The Role of Large Language Models
06:28 Addressing Bias in Investigations
09:13 Hypothesis Testing in Forensic Interviews
12:18 Study Design and Findings
25:54 Implications for Practitioners
33:41 Future Research Directions
36:49 Conclusion and Final Thoughts
Resources: Pre-interview hypothesis generation: large language models (LLMs) show promise for child abuse investigations
Support the show
Did you like this episode? Please leave us a review on Apple Podcasts.
Can't be bothered with email or SpeakPipe? Text us!
Hey! Jason will be in the UK in late April. If you'd like to chat to him about the trip and possibly having him visit you, email us on enquiries@ontheregteam.com
Watch this episode on YouTube
The banter section of this episode is lost to history because Jason forgot to hit record! After a short intro by Inger, the team go straight into the mail bag. Let's face it, this is the good stuff anyway - who needs Tinny and puppy news?
In our work problems segment, we discuss the appearance of ChatGPT as a referrer to the Thesis Whisperer blog and what it might mean for researcher visibility. How does an LLM decide what counts as an 'authoritative source' on the internet? Is this a new form of 'research impact', and what should researchers do about it, if anything?
This episode also ends abruptly when Inger gets a phone call - the next episode will be much more polished, we promise!
Things we mentioned:
Tiago Forte's PARA book
Bloom's Taxonomy
Amazing Marvin app
ClickUp app
Be Visible or Vanish book
The Enshittification of Academic Social Media
Got thoughts and feelpinions? Want to ask a question? You can email us on
- Leave us a message on www.speakpipe.com/thesiswhisperer.
- See our workshop catalogue on www.ontheregteam.com. You can book us via emailing Jason at enquiries@ontheregteam.com
- Subscribe to the free, monthly Two Minute Tips newsletter here (scroll down to enter your email address)
- We're on BlueSky as @drjd and @thesiswhisperer (but don't expect to hear back from Jason, he's still mostly on a Socials break).
- Read Inger's stuff on www.thesiswhisperer.com.
- If you want to support our work, you can sign up to be a 'Riding the Bus' member for just $2 a month, via our On The Reg Ko-Fi site
Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.
The highest profile examples of this phenomenon — what's being called "AI psychosis" — have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale. Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.
In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs. If we're going to do something about this growing problem of AI-related psychological harms, we're gonna need to understand the problem even more deeply. And in order to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIHPRA.org. This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or your local emergency services.
RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI psychosis
The Atlantic article on LLM-ings outsourcing their thinking to AI
Further reading on David Sacks' comparison of AI psychosis to a "moral panic"
RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI
CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State system.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this feed drop from The Six Five Pod, a16z General Partner Martin Casado discusses how AI is changing infrastructure, software, and enterprise purchasing. He explains why current constraints are driven less by technical limits and more by regulation, particularly around power, data centers, and compute expansion.
The episode also covers how AI is affecting software development, lowering the barrier to coding without eliminating the need for experienced engineers, and how agent-driven tools may shift infrastructure decision-making away from humans.
Watch more from Six Five Media: https://www.youtube.com/@SixFiveMedia
Resources:
Follow Martin Casado on X: https://twitter.com/martin_casado
Follow Patrick Moorhead on X: https://twitter.com/PatrickMoorhead
Follow Daniel Newman on X: https://twitter.com/danielnewmanUV
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The marketing teams winning with AI today are not the ones chasing every new model release. They are the ones who found the boring, repetitive tasks their teams hate and automated those first.
Nir Pochter, Co-Founder and CMO at Lightricks, joins Stephanie Postles on Marketing Trends to break down what AI actually means for creative workflows and why most teams are still using it wrong.
You'll learn:
- The "algebra problem" of AI adoption
- How to save your design team 80% of their time
- Why the gap between marketers who use AI well and those who don't is widening fast
- How to use an LLM scoring system to pre-review documents for you
- The dangerous trend of "AI Marketer" job titles
- What's really in store for the future of video+AI
Key Moments:
00:00 — Why AI Hasn't Improved Creative Output Yet
02:06 — The Algebra Problem: Tools vs. Knowing How to Use Them
07:27 — Nir's Background: AI PhD to Lightricks and FaceTune
09:46 — What Used to Take Weeks Now Takes Minutes
13:35 — Why Automating Everything Failed Miserably
16:38 — Start with What People Hate Doing
20:08 — The LLM Scoring System: Nothing Gets Reviewed Without an 85
21:43 — Train Your LLM to Be Mean, Not Nice
23:32 — Building Custom GPTs with Company Guidelines
26:30 — The Pitfall: Using AI to Please Leadership
28:47 — From Toys to Tools: Why Text-to-Video Isn't Enough
31:05 — Coca-Cola's 70,000 Prompts (Was It Worth It?)
34:41 — AI Won't Replace Creatives, But This Will
37:04 — The Two Critical Skills: Prompting and Curation
37:55 — How AI Multiplies the Skills Gap (7 vs 10 Example)
42:47 — What CMOs Should Be Asking Their Teams
46:20 — Why "AI Marketer" Is LinkedIn Fluff
This episode is brought to you by Lightricks. LTX is the all-in-one creative suite for AI-driven video production, built by Lightricks to take you from idea to final 4K render in one streamlined workspace. Powered by LTX-2, our next-generation creative engine, LTX lets you move faster, collaborate seamlessly, and deliver studio-quality results without compromise. Try it today at ltx.studio
Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Most AI pilots fail—at least, that's the headline. But today's guest is proving that doesn't have to be true.
In this Watson Weekly interview, Rick Watson goes behind the scenes with Chris Kellner, CEO of DigitalGenius, to discuss how a decade-old AI pioneer is navigating the modern LLM explosion. We move past the hype to explore how AI is actually rewriting the playbooks for sales, marketing, and customer support.
In this episode, you'll learn:
* The "Locomotive" Metaphor: Why DigitalGenius didn't have to reinvent itself, but rather accelerated on existing tracks.
* Sales & Marketing Rewired: How Chris's team uses AI for call coaching, MEDPIC scorecards, and CRM automation to let humans focus on high-value closing.
* UK vs. US Market Mismatch: Why traditional outbound playbooks failed when crossing the Atlantic and what they did to fix it.
* The Competition for Culture: How an internal "Agent-Building" competition spurred unexpected creativity across the company.
* 3 Hard Lessons in AI Support: The essential checklist for any leader deploying AI in customer service today.
Chapters
00:00 – Intro: Turning AI buzz into customer wins
03:15 – Chris Kellner's journey from Banking to SaaS CEO
07:40 – Acceleration vs. Reinvention: Building the AI train line
14:20 – Rewiring GTM: AI call coaching and MEDPIC scorecards
22:10 – Market Mismatches: Lessons from scaling from the UK to the US
30:45 – Culture & Internal Adoption: The AI Agent competition
38:30 – Why most AI pilots fail (and how to make yours succeed)
45:15 – Build vs. Buy: When to go in-house and when to use a vendor
52:00 – Closing thoughts and key takeaways
#watsonweekly #ai #customersupport
This podcast uses the following third-party services for analysis: Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp
Are you ready to take your podcast from a passion project to a monetized international business and advertising/marketing tool? In this comprehensive episode, host Favour Obasi-ike, MBA, MS delivers an in-depth masterclass on leveraging podcast SEO and monetization strategies for international business growth. This session is the final installment in a series focused on helping podcasters and business owners build sustainable, globally-reaching content strategies.
Favour explores the critical intersection of podcasting, search engine optimization, and international business development. The episode covers essential topics including multilingual content localization, performance benchmarks, download metrics, and how to position podcasts for passive monetization through advertising networks.
Key highlights include real-world success stories from clients who have transformed their podcasts into powerful SEO assets, including a case study of turning 50 podcast episodes into 50 optimized blog posts that now rank on Google's AI-powered search results. Favour also demonstrates how his own podcast appears in Google's featured snippets and AI mode results, providing concrete proof of the strategies discussed.
The episode features interactive discussions with community members Juliana, Celeste, and others who share their own experiences with SEO implementation, AI optimization (AIO), and the tangible business results they've achieved. Juliana shares an exciting success story about landing a major client through Google Gemini recommendations, directly attributable to SEO work completed three years prior with Favour.
This episode is essential listening for podcasters, content creators, coaches, consultants, and international business owners who want to understand how to build long-term digital assets, increase discoverability across global markets, and create multiple revenue streams through strategic content optimization.
Need to book an SEO Discovery Call for advertising or marketing services?
>> Book a Complimentary SEO Discovery Call with Favour Obasi-Ike
>> Visit the Work and PLAY Entertainment website to learn about our digital marketing services
>> Join our exclusive SEO Marketing community
>> Read SEO Articles
>> Subscribe to the We Don't PLAY Podcast
>> Purchase Flaev Beatz Beats Online
What You'll Learn:
International SEO Fundamentals: How to optimize your podcast content for multiple languages, regions, and search engines (Google.com, Google.co.uk, and beyond).
Monetization Metrics That Matter: Understanding downloads vs.
unique listeners, 7-day and 30-day performance benchmarks, and what advertising networks look for.
Multilingual Content Strategy: Leveraging localization and translation features to expand your audience across different cultures and languages.
Podcast-to-Blog Conversion: The proven method of turning podcast episodes into SEO-optimized blog posts that rank on Google and drive traffic back to your audio content.
AI Optimization (AIO): How to position your content to appear in Google's AI mode, featured snippets, and AI-powered recommendation engines like Google Gemini.
Real Results: Case studies including a client whose emotional coaching podcast now ranks on Google, and a CPA who landed a major client through Gemini AI recommendations.
Long-Term Asset Building: Why SEO is a marathon, not a sprint, and how work done today pays dividends for years to come.
Detailed Episode Timestamps
Introduction & Overview (00:00 - 05:55)
00:00 - 00:13: Episode title: "Podcast SEO Monetization for International Businesses".
00:13 - 00:45: Welcome and call to subscribe to the We Don't PLAY Podcast.
00:45 - 01:27: Overview: International business connections through podcasting.
01:27 - 02:31: Performance benchmarks: Downloads vs. unique listeners, measuring success.
02:31 - 03:33: Building sustainable growth and niche dominance.
03:33 - 04:48: Multilingual content and localization strategies.
04:48 - 05:55: International perspective: Moving beyond regional thinking.
International SEO Strategy (05:55 - 10:03)
05:55 - 06:58: Analytics insights: Tracking international audience growth.
06:58 - 08:04: Case study introduction: Client success with emotional coaching podcast.
08:04 - 09:09: Turning 50 podcast episodes into 50 SEO-optimized blogs.
09:09 - 10:03: Podcast-to-blog strategy and long-term asset building.
Content Conversion & Client Success Stories (10:03 - 15:00)
10:03 - 11:00: Amazon book-to-podcast conversion strategy.
11:00 - 12:00: Passive vs. active content consumption patterns.
12:00 - 13:00: Multi-platform distribution: Spotify, Apple Podcasts, YouTube.
13:00 - 14:00: Clubhouse as a content creation and community building platform.
14:00 - 15:00: Real-time engagement and relationship building.
Technical SEO Implementation (15:00 - 20:00)
15:00 - 16:00: Search engine algorithms and content discoverability.
16:00 - 17:00: Metadata optimization for podcasts.
17:00 - 18:00: Location-specific SEO strategies.
18:00 - 19:00: Building booking systems and conversion pathways.
19:00 - 20:00: Creating "red carpet" experiences for potential clients.
Monetization Strategies (20:00 - 25:00)
20:00 - 21:00: Advertising network requirements and download thresholds.
21:00 - 22:00: Passive income through podcast monetization.
22:00 - 23:00: Building credibility through consistent content.
23:00 - 24:00: Long-term revenue stream development.
24:00 - 25:00: International market opportunities.
Community Engagement & Live Discussion (25:00 - 30:00)
25:00 - 26:22: Community building on Clubhouse since 2020.
26:22 - 27:40: Prayer and intentionality in content creation.
27:40 - 28:40: Daily room commitment and audience engagement.
28:40 - 29:19: Juliana's Success Story: Landing a major CPA client through Google Gemini.
29:19 - 30:00: AI Optimization (AIO) and its importance.
AI-Powered Search Results (30:00 - 35:00)
30:00 - 31:11: SEO as a long-term investment: Results from work done 3 years ago.
31:11 - 32:30: Live Demonstration: Host's podcast appearing in Google AI mode with timestamp references.
32:30 - 33:50: Dual focus: Local search dominance + global revenue streams.
33:50 - 34:30: International markets and currency considerations (Shopify example).
34:30 - 35:00: Technical factors: IP address, API, LLM, search history.
Actionable Strategies & Takeaways (35:00 - 39:07)
35:00 - 35:50: Being intentional about topics of interest.
35:50 - 36:20: Importance of independent research and validation.
36:20 - 37:18: Celeste's Reflection: Community value and 2026 goals.
37:18 - 38:00: Top 3 priorities: Booking system, financial management, business structure.
38:00 - 38:46: Encouragement and resources for implementation.
38:46 - 39:07: Closing remarks and invitation to daily rooms.
This episode is perfect for:
Podcasters looking to monetize their content.
International business owners seeking global visibility.
Coaches and consultants building authority online.
Content creators wanting to maximize their reach.
Marketers interested in AI optimization strategies.
Episode Tags/Keywords: Podcast SEO, International Business, Podcast Monetization, Multilingual Content, Content Localization, AI Optimization, AIO, Google Gemini, Featured Snippets, Download Metrics, Passive Income, Content Repurposing, Blog Strategy, Digital Marketing, Search Engine Optimization, Global Revenue Streams, Podcast Analytics, Advertising Networks, Authority Building, Long-term Strategy, Clubhouse Marketing, Community Building, Business Growth, Online Visibility, International Markets.
Target Audience:
Podcasters seeking monetization strategies.
International business owners.
Digital marketers and SEO professionals.
Coaches and consultants.
Content creators and influencers.
Entrepreneurs building online presence.
Small business owners expanding globally.
Marketing professionals learning AI optimization.
Anyone interested in passive income through content.
This episode is part of the We Don't PLAY!™️ Podcast series, hosted by Favour Obasi-Ike, focusing on practical digital marketing strategies for business growth.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
Enterprise SEO teams struggle to prove ROI while chasing AI visibility metrics. Eli Schwartz, growth advisor who has driven millions in organic revenue for Tinder, Coinbase, and LinkedIn, challenges the industry's obsession with LLM mention tracking and prompt visibility tools. He reveals why first-click attribution models show SEO driving 67% of conversions at major brands, demonstrates how to position SEO as a revenue driver rather than traffic generator, and explains why sustainable organic growth requires focusing on user intent over algorithmic manipulation.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode of the Level Up Claims Podcast, host Galen Hair sits down with Austin Holmes, CEO of Signal Raptor and President of Publicity for Good, to break down how storytelling, earned media, and authentic communication help businesses stand out in an increasingly noisy, AI-driven world. Austin shares how his background in the military shaped his leadership and communication style, why most business owners struggle to build trust at scale, and how PR can be transformed from a vague expense into a predictable growth tool. From scaling a seven-figure PR agency while raising a family on the road to building systems that replace "hope-based PR," this conversation delivers a practical, no-fluff framework business owners can apply every day.
Highlights
· Austin Holmes' journey from military leadership to scaling PR companies
· Why authenticity and storytelling are essential in modern marketing
· How AI has increased noise—and why story cuts through it
· Turning your customer's story into a trust-building asset
· Why "hope-based PR" doesn't work for growing businesses
· Earned media vs influencers vs advertising: what actually matters
· How Austin grew his company more than 50x with mentorship and systems
· Building a business while traveling full-time with a growing family
· Why systems—not hustle—create sustainable growth
· Breaking down the COMPARE framework for business communication
· The importance of omnipresence across platforms
· Using earned media as third-party validation
· How relationships and engagement drive long-term visibility
· Preparing your brand for AI and LLM-based search
· Austin's definition of leveling up: never settling and always growing
Episode Resources
· Connect with Austin Holmes
· https://signalraptor.com
· https://publicityforgood.com
· Connect with Galen M. Hair
· https://insuranceclaimhq.com
· hair@hairshunnarah.com
· https://levelupclaim.com/
We're excited to continue our AI Tools series with Yaron Lavie, a veteran product leader with over 25 years of experience in FinTech, InsurTech, and now retail tech at Nexite, where he helps fashion retailers unlock unique in-store data. In this episode, Yaron joins Matt and Moshe to share how he used Base44, an AI-powered, full-stack vibe coding platform, to take a completely new product idea from concept to a deployed prototype without touching his R&D team.
Yaron walks through why traditional approaches like Figma mockups and static visuals weren't enough for the kind of validation he needed, and how he experimented with tools like Gemini, Claude, and ChatGPT before landing on Base44 for an end-to-end, fully hosted solution. He explains how Base44's conversational, chat-based builder let him model user personas, flows, and entities, then iteratively refine an interactive analytics dashboard with real (anonymized) data, all inside a time-boxed, low-risk experiment that still respected security constraints.
Join Matt, Moshe, and Yaron as they explore:
- Why Yaron needed to validate a new product idea without pulling scarce R&D resources off other priorities
- How he moved from static mockups to interactive prototypes with real data, and where Gemini helped and fell short
- What made Base44 stand out versus other vibe coding tools like Lovable: full-stack, hosted, and truly end-to-end
- The importance of "context engineering" over simple prompt engineering when building with LLM-based builders
- Using Base44's discussion mode, live preview, and QA test generation to shape the product before committing to code
- Real-world limits: hitting a ceiling on UX depth, inflated code, and friction with design systems and engineering standards
- How he transitioned from a Base44 prototype to a ground-up rebuild with the core dev team, using the prototype to generate user stories
- Practical pros and cons: integrations, multi-currency support, database control, and when full-stack vibe coding is "good enough"
- Where Yaron sees vibe coding going next, and how PMs can use it responsibly for experimentation and usability testing
And much more!
Want to connect with Yaron or learn more?
LinkedIn: https://il.linkedin.com/in/yaronlavie
You can also connect with us and find more episodes:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct/
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky
Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.
Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
"How many states are there in the United States?" Attackers are actively scanning for LLMs, fingerprinting them using the query How many states are there in the United States? . https://isc.sans.edu/diary/%22How%20many%20states%20are%20there%20in%20the%20United%20States%3F%22/32618 Closing the Door on Net-NTLMv1: Releasing Rainbow Tables to Accelerate Protocol Deprecation Mandiant is publicly releasing a comprehensive dataset of Net-NTLMv1 rainbow tables to underscore the urgency of migrating away from this outdated protocol. https://cloud.google.com/blog/topics/threat-intelligence/net-ntlmv1-deprecation-rainbow-tables Out-of-band update to address issues observed with the January 2026 Windows security update Microsoft has identified issues upon installing the January 2026 Windows security update. To address these issues, an out-of-band (OOB) update was released today, January 17, 2026 https://learn.microsoft.com/en-us/windows/release-health/windows-message-center
As religion bleeds followers, it's looking for a digital transfusion from the high-tech elite. We're dissecting the rise of techno-cults, the Vatican's desperate attempt to regulate AI with its Antiqua et Nova policy, and why tech bros like Peter Thiel are now becoming guest speakers at megachurch revivals. It turns out that when you can't get answers from a burning bush, a large language model that tells you exactly what you want to hear is the next best thing. Silicon Valley is building the new gods of a digital age where faith and profit finally merge.
News Source: "Tech revival after Peter Thiel's Antichrist talks: There's hope and wariness," Religion News Service, January 2, 2026
Shopify Masters | The ecommerce business and marketing podcast for ambitious entrepreneurs
MANSCAPED, the men's grooming brand that pioneered below-the-belt care, sold out its first product in two weeks and scaled to $300 million in just three years. Founder Paul Tran shares how rapid iteration, customer feedback, and a razor-sharp focus turned a taboo idea into a global brand.
For more on MANSCAPED and show notes, click here.
Subscribe and watch Shopify Masters on YouTube!
Sign up for your FREE Shopify Trial here.
In this episode of Tacos & Tech, Neal Bloom sits down with longtime friend, founder, and self-described “Maine Melon,” Jared Ruth, founder of Ripcurrent. What starts as a walk down memory lane through San Diego's early startup ecosystem turns into a wide-ranging conversation about entrepreneurship, marketing, AI, and the human moments technology should protect - not replace.
Jared shares his journey from decades in telecom and corporate innovation to building Ripcurrent, a marketing and automation agency focused on Main Street businesses. Together, Neal and Jared unpack how generative AI and no-code tools have radically lowered the barrier to building, why small businesses are both overwhelmed and empowered by tech, and how the next era of marketing isn't about shouting louder - it's about removing friction so humans can show up where it matters most.
Key Topics Covered
* Jared's path from telecom and corporate innovation to founding Ripcurrent
* Early days of San Diego's startup ecosystem, Founder Dinners, and CTO roundtables
* Building “startups inside big companies” and why that experience matters
* The moment GenAI unlocked solo building and rapid experimentation
* Vibe coding, no-code tools, and the rise of AI-native workflows
* Why small and Main Street businesses struggle with modern marketing tech
* Google Business Profiles, search, and what visibility means in an LLM-driven world
* Automation as a way to remove transactional work - not human connection
* Where AI agents help brands and where they can quietly destroy trust
* Why trust and brand moments matter more than the underlying technology
* Parallels between AI adoption and autonomous driving trust curves
* Using technology to give business owners their time - and humanity - back
* The optimism (and responsibility) that comes with building in the AI era
Links & Resources
* Ripcurrent
Connect with Jared & Neal
* Jared Ruth
* Neal Bloom
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit risingtidepartners.substack.com/subscribe
In 2013, Pavel Gurvich and his partners founded Guardicore, which was sold to Akamai for roughly $650 million in 2021. After the acquisition he led the cyber group at the veteran American corporation, until last year, when he decided to set out on a new path. That new path turned into an especially promising company called Tenzai, which uses the new, powerful LLM models to revolutionize the world of vulnerability testing. That word gets thrown around a lot when it comes to cyber companies, and most of the time the bombastic claims have no substance at all. Here, there's a good chance we'll find otherwise.
Episode sponsors: 2sit, where you get a 25% discount on the first chair you buy if you say you came via Geekonomy; and the company you'll find by searching Google for "R&D Mortgages and Geekonomy". Ram's email
In this month's episode of The Cisco AI Insights Podcast, hosts Rafael Herrera and Sónia Marques welcome Giota Antonakaki, a Cisco leader in machine learning engineering, for an illuminating deep dive into recent academic research shaping AI agent technology. Together, they unpack “The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs”, a paper challenging the common belief that scaling large language models (LLMs) results in only marginal gains. The conversation explores fresh concepts like “long horizon” and “self-conditioning,” revealing how even small improvements in LLM step accuracy can dramatically boost performance in complex, multi-step tasks (a small numerical sketch of that compounding effect follows below). Giota breaks down the difference between planning, reasoning, and execution in LLMs, and why reasoning-focused models outperform even the largest standard LLMs for long-running AI agents. We also extend a special thank you to Akshit Sinha and his team of researchers for writing this month's paper. If you are interested in reading the paper yourself, please visit this link: https://arxiv.org/abs/2509.09677.
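To give a feel for why small per-step accuracy gains matter so much over long horizons, here is a minimal numerical sketch. It is not taken from the paper and makes a simplifying assumption the paper does not (independent, identically reliable steps); it only illustrates how per-step accuracy compounds across a multi-step task.

```python
# Sketch: probability of finishing an n-step task without a single error,
# assuming each step succeeds independently with the same accuracy.
def task_success_rate(step_accuracy: float, num_steps: int) -> float:
    return step_accuracy ** num_steps

for acc in (0.99, 0.995, 0.999):
    rate = task_success_rate(acc, 100)
    print(f"step accuracy {acc:.3f} -> 100-step task success ~ {rate:.1%}")
```

Under this toy model, nudging step accuracy from 99% to 99.9% lifts the chance of completing a 100-step task from roughly 37% to about 90%, which is the intuition behind the "illusion of diminishing returns" the episode discusses.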
Wilder Lopes is the CEO and Founder of Ogre.run, working on AI-driven dependency resolution and reproducible code execution across environments.
How Universal Resource Management Transforms AI Infrastructure Economics // MLOps Podcast #357 with Wilder Lopes, CEO / Founder of Ogre.run
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
// Abstract
Enterprise organizations face a critical paradox in AI deployment: while 52% struggle to access needed GPU resources with 6-12 month waitlists, 83% of existing CPU capacity sits idle. This talk introduces an approach to AI infrastructure optimization through universal resource management that reshapes applications to run efficiently on any available hardware—CPUs, GPUs, or accelerators.
We explore how code reshaping technology can unlock the untapped potential of enterprise computing infrastructure, enabling organizations to serve 2-3x more workloads while dramatically reducing dependency on scarce GPU resources. The presentation demonstrates why CPUs often outperform GPUs for memory-intensive AI workloads, offering superior cost-effectiveness and immediate availability without architectural complexity.
// Bio
Wilder Lopes is a second-time founder, developer, and research engineer focused on building practical infrastructure for developers. He is currently building Ogre.run, an AI agent designed to solve code reproducibility.
Ogre enables developers to package source code into fully reproducible environments in seconds. Unlike traditional tools that require extensive manual setup, Ogre uses AI to analyze codebases and automatically generate the artifacts needed to make code run reliably on any machine. The result is faster development workflows and applications that work out of the box, anywhere.
// Related Links
Website: https://ogre.run
https://lopes.ai
https://substack.com/@wilderlopes
https://youtu.be/YCWkUub5x8c?si=7RPKqRhu0Uf9LTql
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Wilder on LinkedIn: /wilderlopes/
Timestamps:
[00:00] Secondhand Data Centers Challenges
[00:27] AI Hardware Optimization Debate
[03:40] LLMs on Older Hardware
[07:15] CXL Tradeoffs
[12:04] LLM on CPU Constraints
[17:07] Leveraging Existing Hardware
[22:31] Inference Chips Overview
[27:57] Fundamental Innovation in AI
[30:22] GPU CPU Combinations
[40:19] AI Hardware Challenges
[43:21] AI Perception Divide
[47:25] Wrap up
My guest on this week's episode of the podcast is Alfred Succer, the founder of Nova Nuggets, which provides private, end-to-end secure operating environments for AI applications.
The topic of our conversation is sovereign AI, or the importance of controlling the environment on which AI-based applications, but especially chatbots, operate. Among other things, we discuss:
The definition of a sovereign AI system
The privacy vulnerabilities inherent in web-scale LLMs
The purpose of hosting an LLM on-premises or within a company's own data environment
The use cases that most necessitate private model training or hosting
The cost profile of private model hosting and training
The liabilities that companies face in sending sensitive data to publicly-accessible LLMs in their workflows
Thanks to the sponsors of this week's episode of the Mobile Dev Memo podcast:
INCRMNTAL. True attribution measures incrementality, always on.
Xsolla. With the Xsolla Web Shop, you can create a direct storefront, cut fees down to as low as 5%, and keep players engaged with bundles, rewards, and analytics.
Branch. Branch is an AI-powered MMP, connecting every paid, owned, and organic touchpoint so growth teams can see exactly where to put their dollars to bring users in the door and keep them coming back.
Guest: Elie Burstein, Distinguished Scientist, Google DeepMind
Topics: What is Sec-Gemini, why are we building it? How does DeepMind decide when to create something like Sec-Gemini? What motivates a decision to focus on something like this vs anything else we might build as a dedicated set of regular Gemini capabilities? What is Sec-Gemini good at? How do we know it's good at those things? Where and how is it better than a general LLM? Are we using Sec-Gemini internally?
Resources: Video version EP238 Google Lessons for Using AI Agents for Securing Our Enterprise EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side Big Sleep, CodeMender blogs
The way app users discover, evaluate, and even build apps is starting to shift — and large language models are at the center of that change. As AI platforms evolve beyond chat and into full ecosystems, app growth teams are facing a new set of questions around discovery, distribution, and long-term strategy. In this episode, we're sharing an App Talks interview with Dan Shabtay, Co-Founder of Z2A Digital. Dan explores how the emerging app ecosystem inside ChatGPT is opening the door to a new wave of app creation — from no-code tools and solo entrepreneurs to entirely new user acquisition channels embedded directly within AI platforms. Today's topics include: Dan explains how ChatGPT is evolving into a platform with its own internal app marketplace, giving developers access to hundreds of millions of users without relying on traditional app stores. Entrepreneurs and creators can now build and publish functional apps without engineering teams, fundamentally changing who can participate in app development and how quickly new ideas can reach the market. Unlike iOS or Google Play, apps built inside LLM ecosystems immediately tap into an existing, engaged user base, reducing friction around discovery and distribution. Traditional user acquisition isn't disappearing, but it is shifting. Classic channels will coexist with new LLM-based acquisition models, including potential in-chat placements and AI-native ad formats. Beyond IQ and EQ, adaptability becomes the defining skill for app growth teams navigating rapid, AI-driven change. Links and Resources: Dan Shabtay on LinkedIn Z2A Digital website Business Of Apps - connecting the app industry Quotes from Dan Shabtay: “It's the most efficient infrastructure ever that has been created for entrepreneurs.” “For me as a business owner and as an entrepreneur, it's a gold rush.” “AQ is adaptability… you have to be adaptable to the high-paced technology that's going on right now.” Host Business Of Apps - connecting the app industry since 2012
We want answers fast. Not just any answers. We want relevant, peer-contributed, and verified answers. But what happens when those answers are wrong, especially when they describe your business and the offerings you provide?
In this episode of Politely Pushy, Eric Chemi peels back the curtain on GEO (generative engine optimization). The episode features critical considerations from:
Bospar Principal Curtis Sparrer, who underscores the need for a GEO and AI search solution to uphold brand safety and keep pace with LLM updates.
Bospar VP of Social Media Connor Grant, who highlights how social media platforms have become archives that LLMs scrape for relevant data. One of the key platforms contributing to AI search results is Reddit. Yet, while Reddit is one of many sources for assessing brand visibility, Jennifer Devine, founder of Freshwater Creative, shares that diversification across signals and platforms is paramount.
It's time for a new playbook. Tune into this episode for expert insight into GEO and its impact on your brand.
In this episode, Corey LeBleu, a veteran penetration tester, shares a raw and intense story from his early days in offensive security. Corey walks through a social engineering engagement that took a sharp turn, from being closely watched by a security guard to receiving the call that changed everything. What followed was a confrontation with authority, handcuffs, and a moment that forced him to confront the legal and emotional consequences of impersonation. Through honest storytelling, Corey reflects on the pressure of physical security testing, the thin line between authorization and trouble, and the lessons he carried forward in his career. This episode serves as a cautionary tale about understanding boundaries, respecting authority, and the unseen risks behind revealing what's hidden.
00:00 Introduction to Corey LeBleu and His Journey
03:34 Corey's Early Career and Learning Path
06:34 The Role of Mentorship in Pen Testing
09:19 Experiences in Social Engineering and Physical Pen Testing
12:22 The Handcuff Incident: A Lesson in Risk
15:12 Transitioning to Web Application Pen Testing
18:01 The Evolution of Pen Testing Practices
20:48 The Impact of AI on Pen Testing
23:42 The Future of Pen Testing and Learning for Beginners
26:28 Navigating Active Directory and Pen Testing Tools
27:35 Essential Training for Web App Pen Testing
30:34 Advice for Aspiring Pen Testers
32:30 Exploring AI and Learning Resources
37:05 Personal Interests and Hobbies
39:17 Living in Austin and Local Music Scene
SYMLINKS
[LinkedIn] – https://www.linkedin.com/in/coreylebleu/ – Primary platform Corey recommends for connecting with him professionally.
[Relic Security] – https://www.relixsecurity.com/ – Cybersecurity consulting firm founded and run by Corey LeBleu, focused primarily on web application penetration testing and offensive security work.
[PortSwigger Academy] – https://portswigger.net/web-security – A free and advanced online training platform for web application security, created by the makers of Burp Suite. Recommended by Corey as one of the best learning resources for modern web app pentesting.
[Burp Suite] – https://portswigger.net/burp – A widely used web application security testing tool. Corey emphasizes learning Burp Suite as a core skill for anyone entering web app penetration testing.
[OWASP Juice Shop] – https://owasp.org/www-project-juice-shop/ – An intentionally vulnerable web application created by OWASP for learning and practicing web security testing.
[OWASP – Open Web Application Security Project] – https://owasp.org – A global nonprofit organization focused on improving software security. Corey previously ran an OWASP project and references OWASP tools and resources throughout his career.
[SANS Institute] – https://www.sans.org – A major cybersecurity training and certification organization, referenced in relation to early penetration testing education and the high cost of formal training.
[Hack The Box] – https://www.hackthebox.com – An online platform for practicing penetration testing skills in simulated environments.
[PromptFoo] – https://promptfoo.dev – A tool for testing, evaluating, and securing LLM prompts. Mentioned in the context of prompt injection and AI security experimentation.
[PyTorch] – https://pytorch.org – An open-source machine learning framework widely used for deep learning and AI research. Corey mentions it as part of his learning path for understanding how LLMs work.
[Hugging Face] – https://huggingface.co – An AI platform providing open-source models, datasets, and tools for machine learning and LLM experimentation.
From building internal AI labs to becoming CTO of Brex, James Reggio has helped lead one of the most disciplined AI transformations inside a real financial institution where compliance, auditability, and customer trust actually matter. We sat down with Reggio to unpack Brex's three-pillar AI strategy (corporate, operational, and product AI) [https://www.brex.com/journal/brex-ai-native-operations], how SOP-driven agents beat overengineered RL in ops, why Brex lets employees "build their own AI stack" instead of picking winners [https://www.conductorone.com/customers/brex/], and how a small, founder-heavy AI team is shipping production agents to 40,000+ companies. Reggio also goes deep on Brex's multi-agent "network" architecture, evals for multi-turn systems, agentic coding's second-order effects on codebase understanding, and why the future of finance software looks less like dashboards and more like executive assistants coordinating specialist agents behind the scenes.
We discuss:
Brex's three-pillar AI strategy: corporate AI for 10x employee workflows, operational AI for cost and compliance leverage, and product AI that lets customers justify Brex as part of their AI strategy to the board
Why SOP-driven agents beat overengineered RL in finance ops, and how breaking work into auditable, repeatable steps unlocked faster automation in KYC, underwriting, fraud, and disputes
Building an internal AI platform early: LLM gateways, prompt/version management, evals, cost observability, and why platform work quietly became the force multiplier behind everything else
Multi-agent "networks" vs single-agent tools: why Brex's EA-style assistant coordinates specialist agents (policy, travel, reimbursements) through multi-turn conversations instead of one-shot tool calls
The audit agent pattern: separating detection, judgment, and follow-up into different agents to reduce false negatives without overwhelming finance teams
Centralized AI teams without resentment: how Brex avoided "AI envy" by tying work to business impact and letting anyone transfer in if they cared deeply enough
Letting employees build their own AI stack: ChatGPT vs Claude vs Gemini, Cursor vs Windsurf, and why Brex refuses to pick winners in fast-moving tool races
Measuring adoption without vanity metrics: why "% of code written by AI" is the wrong KPI and what second-order effects (slop, drift, code ownership) actually matter
Evals in the real world: regression tests from ops QA, LLM-as-judge for multi-turn agents, and why integration-style evals break faster than you expect (a toy sketch of the LLM-as-judge pattern appears after the chapter list below)
Teaching AI fluency at scale: the user → advocate → builder → native framework, ops-led training, spot bonuses, and avoiding fear-based adoption
Re-interviewing the entire engineering org: using agentic coding interviews internally to force hands-on skill upgrades without formal performance scoring
Headcount in the age of agents: why Brex grew the business without growing engineering, and why AI amplifies bad architecture as fast as good decisions
The future of finance software: why dashboards fade, assistants take over, and agent-to-agent collaboration becomes the real UI
—
James Reggio
X: https://x.com/jamesreggio
LinkedIn: https://www.linkedin.com/in/jamesreggio/
Where to find Latent Space
X: https://x.com/latentspacepod
Substack: https://www.latent.space/
Chapters
00:00:00 Introduction
00:01:24 From Mobile Engineer to CTO: The Founder's Path
00:03:00 Quitters Welcome: Building a Founder-Friendly Culture
00:05:13 The AI Team Structure: 10-Person Startup Within Brex
00:11:55 Building the Brex Agent Platform: Multi-Agent Networks
00:13:45 Tech Stack Decisions: TypeScript, Mastra, and MCP
00:16:40 The Brex Assistant: Executive Assistant for Every Employee
00:24:32 Operational AI: Automating Underwriting, KYC, and Fraud
00:37:11 Agentic Coding Adoption: Cursor, Windsurf, and the Engineering Interview
00:40:26 Evaluation Strategy: From Simple SOPs to Multi-Turn Evals
00:58:51 AI Fluency Levels: From User to Native
01:03:33 The Future of Engineering Headcount and AI Leverage
01:09:14 The Audit Agent Network: Finance Team Agents in Action
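The "LLM-as-judge for multi-turn agents" idea Reggio describes is easier to picture with a toy harness. The sketch below is not Brex's implementation; it assumes the OpenAI Python client, a made-up SOP-style rubric, and an invented judge model, and simply asks one model to grade a recorded multi-turn transcript.

```python
# Toy LLM-as-judge eval for a multi-turn agent transcript (illustrative only).
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the assistant transcript from 1-5 on: (a) followed the expense-policy SOP, "
    "(b) asked for missing information instead of guessing, (c) gave a correct final answer. "
    'Reply as JSON: {"score": int, "reasons": str}.'
)

def judge(transcript: list[dict]) -> dict:
    """Grade one multi-turn conversation with a judge model."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # judge model is an assumption, not one named on the show
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": json.dumps(transcript)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

example = [
    {"role": "user", "content": "Can I expense a $90 team dinner?"},
    {"role": "assistant", "content": "Yes, policy allows up to $50 per person; how many attendees?"},
    {"role": "user", "content": "Three of us."},
    {"role": "assistant", "content": "Then $90 total is within policy. Category: Meals."},
]
print(judge(example))
```

Running a scored judge like this over a regression set of transcripts (for example, cases pulled from ops QA) is the basic shape of the multi-turn eval loop discussed in the episode.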
Join Scott as he discusses his #CircuitPython2026 thoughts. He'll also try to answer any questions that folks have.
0:00 getting setup
2:27 hello everyone - welcome to deep dive
3:50 wow - 10 years with Adafruit
4:55 hacking the yoto player ( esp32 based )
6:51 mini yoto web cam with PCbite
8:24 refer to adafruit playground for some yoto resources
11:20 yoto testpoints - default software vs. circuitpython
14:14 yoto teardown link https://adafruit-playground.com/u/BlitzCityDIY/pages
17:13 CircuitPython 2026 - reflections on 2025 and 2026 thoughts
18:35 link to https://blog.adafruit.com/2026/01/02/circuitpython2026-kickoff/
18:50 claude code support for tedious things / LLMs
23:40 Python to C, circuitpython shared-bindings modules - sources on github
25:30 Python style documentation
26:19 shared-module implementation
27:17 io expander module generated with LLM assist
27:50 MicroPython native code, suggest merging it into CircuitPython
28:50 link to embedded code in /lib
29:50 see the eve shared-module
32:12 CP 2026 and LLMs - but you need to test it!
36:05 links to LLM articles
36:55 discuss getting rid of "never reset" using LLM / persistent display
39:40 VM resets / code.py
41:40 out of memory exception
44:00 thread support
44:40 other ways to add wifi to rp2040 system
46:36 FruitJam is the highlight of 2025 for games
52:16 we need to get "matter" support wrapped up
54:33 macropad with screen on every key - lilygo ?
56:40 Foamyguy's 2026 post
58:58 LLMs on common microcontrollers ?
1:00:25 Looking for LLM adafruit-playground article
1:02:30 Wrapping up
-----------------------------------------
LIVE CHAT IS HERE! http://adafru.it/discord
Subscribe to Adafruit on YouTube: http://adafru.it/subscribe
New tutorials on the Adafruit Learning System: http://learn.adafruit.com/
-----------------------------------------
The Stanford PhD who built DSPy thought he was just creating better prompts—until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural language prompts as the interface when we actually need something between imperative code and pure English, and the implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on.
Follow Omar Khattab on X: https://x.com/lateinteraction
Follow Martin Casado on X: https://x.com/martin_casado
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
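For readers who haven't seen what "programming with intent" looks like in practice, here is a minimal sketch based on DSPy's publicly documented API; the signature, fields, and model string are illustrative assumptions, not examples taken from the episode.

```python
# Minimal DSPy sketch: declare intent as a typed signature instead of a hand-tuned prompt.
# Assumes: pip install dspy, and an API key available to the configured language model.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # model choice is illustrative

class AnswerWithCitation(dspy.Signature):
    """Answer the question and cite the passage that supports it."""
    question: str = dspy.InputField()
    context: str = dspy.InputField()
    answer: str = dspy.OutputField()
    citation: str = dspy.OutputField()

qa = dspy.ChainOfThought(AnswerWithCitation)  # DSPy turns this declaration into prompts
result = qa(
    question="Who created DSPy?",
    context="DSPy was created by Omar Khattab at Stanford.",
)
print(result.answer, "|", result.citation)
```

The point of the paradigm is that the signature above, not a brittle prompt string, is the unit you version, test, and optimize.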
On this week's show we have compiled a list of home theater and home automation/smart home products that received notable awards or honors at CES 2026.
News:
Amazon has started automatically upgrading Prime members to Alexa Plus
Apple Reveals 'Record-Breaking' Year For Apple TV And Other Services
Bears vs. Packers on Prime Video sets streaming record
2026 CES Award Winners
Samsung S95H (OLED TV with enhanced brightness, anti-burn-in art display, wireless features) Awarded Best TV or Home Theater (CNET) and Winner in Home Theater category (ZDNET/CNET Group awards).
Samsung Music Studio 5 (Compact smart speaker with artistic design, Wi-Fi/Bluetooth connectivity for whole-home audio) Winner in Best Audio category (CNET) and Winner in Audio category (ZDNET/CNET Group awards).
Samsung 140" Micro LED - The Samsung 140" Micro LED TV creates a seamless, immersive 3D-like experience by using AI to extend on-screen content onto innovative Micro LED Mirror bezels that blend with the image. When not in use, it folds in half via a hidden hinge to function as an elegant art frame, eliminating the traditional "black box" appearance and blending beautifully into home décor. CES 2026 Best of Innovation in Video Displays
KLIPSCH THE 9S II - The Klipsch The 9s II powered speakers feature Onkyo audio processing and an updated Tractrix horn for wide dispersion and precise clarity, while supporting both two-channel music and Dolby Atmos content with versatile connectivity including AirPlay 2, USB-C, HDMI, and XLR inputs. They include Dirac Live auto-room calibration and deliver exceptional sound quality, though the pair carries a premium price of $2,399. Tom's Guide Best Audio
LG H7 FlexConnect soundbar (Dolby Atmos soundbar with modular FlexConnect surround extension to any TV, part of LG Sound Suite) Best Audio category (CNET).
LG W6 (Ultra-thin "wallpaper" OLED TV, flush wall mount, bright display, supports Dolby Atmos FlexConnect) Best TV or Home Theater category (CNET).
LG CLOiD - The LG CLOiD is a wheeled household robot that connects to LG ThinQ smart appliances and uses its two arms, cameras, sensors, and voice recognition to autonomously handle tasks like loading laundry, folding clothes, organizing the fridge, tidying up, running errands, and assisting with cooking. By learning the user's routines, understanding context and emotions, and proactively acting with gestures, voice, and expressions, it reduces household labor and enhances quality of life and emotional well-being. 2026 Honoree in Smart Home
Hisense 116UXS (Massive 116-inch mini-LED TV with advanced RGB + cyan backlight for wide color gamut) Highlighted in Best TV or Home Theater category (CNET) and CES 2026 Best of Innovation in Video Displays
Hisense 163MX - The Hisense 163MX is the world's first 163-inch MicroLED TV to use a four-primary RGBY (QuadColor) pixel design, which adds a yellow subpixel to achieve 95% BT.2020 color coverage—a 5% improvement over traditional RGB MicroLED systems. This self-emissive technology delivers perfect blacks, infinite contrast, precise brightness control, and stunning visual quality in any lighting without a backlight. 2026 Best of Innovation in Video Displays.
Samsung EdgeAware AI Home - processing sounds, videos, and data from Samsung and third-party devices locally on your Samsung tech to generate detailed event summaries, contextual recommendations, and health insights displayed on your TV—all without sending private data to the cloud. It detects 12 distinct sounds (like running water or breaking glass), provides actionable suggestions such as launching telemedicine for persistent coughing or triggering emergency services for intrusions/fires, and enables fast AI-driven searches for moments like "doorbell rang." 2026 Honoree in Smart Home
Doma Intelligent Door - Doma is pioneering secure home intelligence by integrating advanced technology directly into the front door, starting with keyless entry, intruder protection, and real-time awareness of activity inside and around the home. Founded by the team behind August and Yale smart locks, it delivers a holistic system that elevates the home experience, monitors health and safety, senses surroundings, and takes personalized actions to provide true peace of mind. 2026 Honoree in Smart Home
Roborock Saros Rover (Advanced two-legged/smart robot vacuum with stair-climbing, AI navigation for multi-level homes) Winner in Smart Home category (ZDNET/CNET Group awards).
AQARA U400 - Thanks to its use of Ultra Wide Band (UWB) technology, it can sense when you (and your iPhone or Apple Watch) are approaching your door, and will unlock it automatically. No fuss. And, the technology is good enough that it can recognize if you're merely walking past your door rather than to it, or if you're inside, rather than outside your house. Tom's Guide Best Smart Lock
JBL Tour One M3 Smart Tx - JBL Tour One M3 Smart Tx delivers all the powerful features of the Tour One M3 headphones such as world class noise cancellation, crystal clear calls and legendary Hi-res certified JBL Pro Sound. The Smart Tx audio transmitter connects you to almost any audio source and elevates your wireless experience. Connect wirelessly to digital devices using the USB-C connection, or analog devices with a 3.5mm audio jack, such as in-flight entertainment systems. No need to pull out your phone and search for the app. Full access to all controls is right there on the touch screen of the transmitter. 2026 Honoree in Headphones & Personal Audio. $450
Timekettle W4 AI Interpreter Earbuds - The Timekettle W4 AI Interpreter Earbuds are the world's first in-ear translation device to use Bone-Voiceprint Sensor technology combined with LLM-powered, context-aware AI, achieving 98% accurate, noise-immune speech recognition with just 0.2-second latency across 42 languages and 95 accents. Designed for all-day comfort with up to 18 hours of battery life, sleek styling, one-flip sharing, automatic mode switching, and audio/video translation capabilities, the W4 delivers natural, real-time multilingual conversations and is now available for purchase. 2026 Honoree in Artificial Intelligence, Headphones & Personal Audio, Mobile Devices, Accessories & Apps. $350
Send us a text
Aaron Eden brings more than three decades of building, testing, and shipping practical innovation. At Intuit, he focuses on AI-driven process automation; partnering with product, operations, and analyst communities to eliminate manual toil and design customer-centric solutions at scale. His posts highlight ongoing hiring and growth around intelligent automation and a practitioner's mindset toward measurable impact.
Before Intuit, Aaron co-founded Moves the Needle, where he helped Fortune-scale organizations adopt lean startup and design thinking behaviors. Through executive mentoring and enterprise programs, he guided leaders to shorten time-to-market and increase employee engagement while staying grounded in customer outcomes. He's also held multiple roles inside Intuit's broader innovation ecosystem, including Design for Delight leadership and talent initiatives aimed at spreading experimentation across the company.
Outside of the enterprise, Aaron's entrepreneurial streak shows up in community and advisory work. He co-leads the Artificial Intelligence Trailblazers meetup—an open community designed to make modern AI approachable—and frequently speaks on translating buzz into business results. He also mentors founders through Startup Tucson and participates in local panels like the University of Arizona's "Technology for Good," where he advocates for responsible, accessible AI.
If you're an engineer or technical leader, you'll appreciate Aaron's bias toward running small, smart experiments, measuring what matters, and shipping value fast—principles he's applied from customer care analytics to RPA/AI platforms. Expect a conversation rich with playbooks for automating high-variance processes, empowering analysts, and building an innovation culture that sticks.
LINKS:
Guest LinkedIn: https://www.linkedin.com/in/aaroneden/
Guest website: https://www.brainbridge.app/
Guest NPO: https://www.aitrailblazers.io/
Aaron Moncur, host
Download the Essential Guide to Designing Test Fixtures: https://pipelinemedialab.beehiiv.com/test-fixture
About Being An Engineer
The Being An Engineer podcast is a repository for industry knowledge and a tool through which engineers learn about and connect with relevant companies, technologies, people resources, and opportunities. We feature successful mechanical engineers and interview engineers who are passionate about their work and who made a great impact on the engineering community.
The Being An Engineer podcast is brought to you by Pipeline Design & Engineering. Pipeline partners with medical & other device engineering teams who need turnkey equipment such as cycle test machines, custom test fixtures, automation equipment, assembly jigs, inspection stations and more. You can find us on the web at www.teampipeline.us
Why do AI's fabricated memories "feel" so true?
Hotel Bar Sessions is currently between seasons and while our co-hosts are hard at work researching and recording next season's episodes, we don't want to leave our listeners without content! So, as we have in the past, we've given each co-host the opportunity to record a "Minibar" episode-- think of it as a shorter version of our regular conversations, only this time the co-host is stuck inside their hotel room with whatever is left in the minibar... and you are their only conversant!
AI engineers and designers are currently, and rightly, focused on minimizing the deleterious effects of AI's three primary "memory problems"-- hallucinations, catastrophic forgetting, and bias-- but in this Minibar episode, HBS co-host Leigh M. Johnson argues that none of these problems can be design-engineered away. They are, according to Johnson, baked-in and unavoidable structural elements of any language-based system reliant on an archive.
Borrowing from Jacques Derrida's work on archives, language, and memory, Johnson argues that we should think more seriously about the manner in which LLMs' outputs come to us cloaked in the garb of memory. We take AI hallucinations, for example, to be true because they inspire in us a feeling of nostalgia... something that we could have remembered, perhaps even should have remembered, but didn't.
Or didn't we?
Tune in for the first episode of Season 15 on January 23, 2026!
Full episode notes available at this link: https://hotelbarpodcast.com/minibar-algorithmic-noslagia
---------------------
SUBSCRIBE to the podcast now to automatically download new episodes!
SUPPORT Hotel Bar Sessions podcast on Patreon here! (Or by contributing one-time donations here!)
BOOKMARK the Hotel Bar Sessions website here for detailed show notes and reading lists, and contact any of our co-hosts here.
Hotel Bar Sessions is also on Facebook, YouTube, BlueSky, and TikTok. Like, follow, share, duet, whatever... just make sure your friends know about us!
★ Support this podcast on Patreon ★
Episode Description
AI is fundamentally changing how SaaS companies should think about pricing. When your software makes teams 70% more efficient, charging per seat means you're literally shrinking your own market. In this conversation, product management veteran Lee Bridges explains why seat-based pricing is a burning platform and what comes next.
Lee, returning to the podcast after five years, recently led a pricing transformation project that forced him to confront an uncomfortable truth: AI-driven efficiency gains directly reduce the number of seats customers need. His solution? Outcome-based pricing that aligns incentives between vendors and customers while future-proofing against AI disruption.
Guest
Lee Bridges - Chief Product Officer at Inn-Flow, father, audio engineer, and vibe coder who recently completed a major pricing transformation project for a B2B SaaS company in the field service space.
Key Topics Covered
The Seat-Based Pricing Problem
How AI efficiency reduces Total Addressable Market (TAM)
The misalignment of incentives between vendors and customers
Internal team conflicts created by per-seat models
Why "reducing a team from 10 to 3" destroys 70% of your revenue potential (a toy calculation of this effect appears after these notes)
Outcome-Based Pricing Explained
The difference between usage-based and outcome-based pricing
How to identify and price meaningful outcomes
The psychology of "you make money when your customer makes money"
Avoiding the "nickel and diming" feeling of usage-based models
Real-World Implementation
Case study: Field service sales teams (20 minutes to 90 seconds per quote)
Tiered prepayment models with outcome "credits"
Combining platform fees with outcome pricing
When outcome-based pricing works (and when it doesn't)
The Future of SaaS and AI
Why B2B SaaS isn't going anywhere despite AI hype
The problem with expecting everyone to be a product manager
Consistency, training, and the limits of LLM-generated experiences
Vibe coding and no-code tools in practice
Notable Quotes
"If you create efficiencies that make a process so efficient that some number of people will no longer be necessary... you reduce the number of potential seats. You reduce the TAM."
"If I give you a dollar and you're going to give me $10 back, I'd be insane to not do that as many times as I can."
"You're really expecting everyone on Earth to be a product manager. That's just not going to happen."
"The most people don't have a high level of agency. They don't know what they want, when they want it, and they don't know how to describe it."
Practical Takeaways
Evaluate your pricing model now - If you're charging per seat and building AI features, you're creating a strategic vulnerability
Start with new products - Test outcome-based pricing with new offerings rather than risking existing revenue
Identify measurable outcomes tied to customer revenue - What metrics does your sales team already use when discussing ROI?
Consider hybrid models - Platform fees plus outcome pricing can balance predictability with value alignment
The complexity trade-off - Outcome-based pricing must remain simple enough to avoid litigation-inducing confusion
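To make the "shrinking your own TAM" point concrete, here is a toy Python calculation using the episode's illustrative team shrinkage (10 seats down to 3); the per-seat price, quote volume, and per-quote fee are my assumptions, not figures from the episode.

```python
# Toy comparison of seat-based vs. outcome-based revenue when AI shrinks the team.
seat_price = 100                     # $/seat/month (assumed)
seats_before, seats_after = 10, 3    # the 10 -> 3 example from the episode

seat_rev_before = seat_price * seats_before  # $1,000/month
seat_rev_after = seat_price * seats_after    # $300/month: a 70% revenue drop

# Outcome-based alternative: charge per quote generated (numbers assumed).
quotes_per_month = 400
fee_per_quote = 3.00
outcome_rev = quotes_per_month * fee_per_quote  # $1,200/month, independent of headcount

print(f"Seat-based: ${seat_rev_before} -> ${seat_rev_after} "
      f"({(1 - seat_rev_after / seat_rev_before):.0%} lost)")
print(f"Outcome-based: ${outcome_rev:.0f}, unaffected by the smaller team")
```

The vendor's revenue in the second model tracks the work the software does rather than the number of humans logged in, which is the alignment Lee argues for.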
Faiza has gone from Student to AI Engineer, developing valuable solutions for MicroYES and Finely Fettled clients. Her skills include AWS, Linux, and DevOps. She hails from Southern India and will complete her MSc in International Management at York St John University in early 2026. She is currently developing lead generation AI solutions for Finely Fettled and MicroYES clients.
Summary of Podcast
Key Takeaways
Faiza Khan's career progressed from student to AI Engineer via a structured path: internship → placement → full-time hire.
Her role involves building AI agents (e.g., "Phone to Agent") and Answer Engine Optimisation (AEO) to help clients get found in LLM answers, a critical shift from traditional SEO.
The hiring process used Handshake, a university student-focused job platform, and video interviews, where key advice for students is to speak up, slow down, smile, and make eye contact.
AI is shifting the workforce from manual research to higher-value roles like AI architecture, with low-code/no-code tools enabling non-technical entry.
Faiza's Career Progression
Background: From Kadapa, Southern India, with a Bachelor of Commerce.
Early Skill-Building: Completed a 6-month course in AWS, Linux, and DevOps in Bangalore while working in inside sales.
UK Education: Chose York St John University for its placement year option, which Manchester Metropolitan lacks.
Hiring Process:
Platform: Found via Handshake, a university job platform.
Video Interview: A key step where students answer AI-generated questions on camera.
Career Path:
Internship: Initial role at Finely Fettled and its brand MicroYES.
Placement: Extended 9-month contract.
Full-Time: Hired as an AI Engineer/Architect and Marketing Manager.
AI in Business & Marketing
MeclabsAI Platform: Faiza's work on this AI solutions platform includes:
AI Agent Delivery Systems: Personalised agents, not generic chatbots.
AI Workflows: Self-service tools, like a database query workflow on the https://finelyfettled.co.uk website.
"Phone to Agent": A new service for small businesses. An AI agent answers calls using the client's specific policies and pricing. Designed for natural conversation (e.g., "mm-hmm" confirmations, background noise). Rationale: Provides cost-effective, consistent phone support for busy professionals and small businesses.
Answer Engine Optimisation (AEO):
Rationale: Anticipates ChatGPT providing more answers than Google by early 2028, making AEO a critical marketing strategy.
Goal: Structure website content to be found and cited in LLM answers.
Execution: An AI agent guides clients through the process.
The Value of Diversity: Kevin noted Faiza's value comes from her diverse perspective (age, gender, culture), which provides fresh insights.
Advice for Students
Set a Clear Goal: Define a career path and stay focused.
Use University Resources: Actively leverage career services and platforms like...
Corey Zumar is a Product Manager at Databricks, working on MLflow and LLM evaluation, tracing, and lifecycle tooling for generative AI.
Jules Damji is a Lead Developer Advocate at Databricks, working on Spark, lakehouse technologies, and developer education across the data and AI community.
Danny Chiao is an Engineering Leader at Databricks, working on data and AI observability, quality, and production-grade governance for ML and agent systems.
MLflow Leading Open Source // MLOps Podcast #356 with Databricks' Corey Zumar, Jules Damji, and Danny Chiao
Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter
Shoutout to Databricks for powering this MLOps Podcast episode.
// Abstract
MLflow isn't just for data scientists anymore—and pretending it is is holding teams back. Corey Zumar, Jules Damji, and Danny Chiao break down how MLflow is being rebuilt for GenAI, agents, and real production systems where evals are messy, memory is risky, and governance actually matters. The takeaway: if your AI stack treats agents like fancy chatbots or splits ML and software tooling, you're already behind.
// Bio
Corey Zumar
Corey has been working as a Software Engineer at Databricks for the last 4 years and has been an active contributor to and maintainer of MLflow since its first release.
Jules Damji
Jules is a developer advocate at Databricks Inc., an MLflow and Apache Spark™ contributor, and Learning Spark, 2nd Edition coauthor. He is a hands-on developer with over 25 years of experience. He has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, Anyscale, and Databricks, building large-scale distributed systems. He holds a B.Sc. and M.Sc. in computer science (from Oregon State University and Cal State, Chico, respectively) and an MA in political advocacy and communication (from Johns Hopkins University).
Danny Chiao
Danny is an engineering lead at Databricks, leading efforts around data observability (quality, data classification). Previously, Danny led efforts at Tecton (+ Feast, an open source feature store) and Google to build ML infrastructure and large-scale ML-powered features. Danny holds a Bachelor's Degree in Computer Science from MIT.
// Related Links
Website: https://mlflow.org/
https://www.databricks.com/
~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community [https://go.mlops.community/slack]
Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)]
Sign up for the next meetup: [https://go.mlops.community/register]
MLOps Swag/Merch: [https://shop.mlops.community/]
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Corey on LinkedIn: /corey-zumar/
Connect with Jules on LinkedIn: /dmatrix/
Connect with Danny on LinkedIn: /danny-chiao/
Timestamps:
[00:00] MLflow Open Source Focus
[00:49] MLflow Agents in Production
[00:00] AI UX Design Patterns
[12:19] Context Management in Chat
[19:24] Human Feedback in MLflow
[24:37] Prompt Entropy and Optimization
[30:55] Evolving MLflow Personas
[36:27] Persona Expansion vs Separation
[47:27] Product Ecosystem Design
[54:03] PII vs Business Sensitivity
[57:51] Wrap up
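The episode centers on MLflow's newer GenAI capabilities such as tracing and evals. Here is a minimal sketch based on MLflow's documented tracing API; the experiment name, model, and traced function are illustrative assumptions, and it presumes a recent MLflow release plus the OpenAI client.

```python
# Minimal MLflow tracing sketch for a GenAI call (illustrative; not from the episode).
# Assumes: pip install mlflow openai, a local ./mlruns or tracking server,
# and OPENAI_API_KEY set in the environment.
import mlflow
from openai import OpenAI

mlflow.set_experiment("agent-tracing-demo")  # experiment name is an assumption
mlflow.openai.autolog()                      # auto-capture OpenAI calls as trace spans

client = OpenAI()

@mlflow.trace  # records inputs, outputs, and latency for this step
def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(answer("What does MLflow tracing record?"))
# The resulting traces can then be inspected in the MLflow UI alongside evals and prompts.
```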
OpenAI releases ChatGPT Translate, its version of Google Translate. How does it compare and can it successfully compete against the search giant? Social news aggregation and discussion platform Digg has officially relaunched. What's it look like? Is an LLM's output protected by the US Constitution's first amendment? And are we witnessing a growth in independent media? Starring Sarah Lane, Tom Merritt, Robb Dunewood, Andrea Jones-Rooy, Len Peralta, Roger Chang, Joe. To read the show notes click here! Support the show on Patreon by becoming a supporter!
In the security news: KVMs are a hacker's dream Hacking an e-scooter Flipper Zero alternatives The best authentication bypass Pwning Claude Code FortiSIEM, vulnerabilities, and exploits Microsoft patches and Secure Boot fun Making Windows great, again? Breaching the Breach Forum Congressional Emails unsolicited Instagram password reset requests - Is Meta doing enough to secure the platform? LLMs are HIPAA compliant? Threat actors target LLM honeypots Visit https://www.securityweekly.com/psw for all the latest episodes! Show Notes: https://securityweekly.com/psw-909
It's YOUR time to #EdUp with Dr. Robert Wilson, Provost & Vice President for Academic Affairs, Cedar Crest CollegeIn this episode, part of our Academic Integrity Series, sponsored by Integrity4EducationYOUR cohost is Thomas Fetsch, CEO, Integrity4EducationYOUR host is Elvin FreytesHow does Cedar Crest College treat AI as a design issue rather than a policing problem by building academic integrity into assignments & helping students understand ethical boundaries instead of just catching cheaters?How is Cedar Crest's math department using a custom LLM called Alchemy to teach students at their learning level instead of jumping straight to answers like popular AI models do & what does this mean for equity across student populations?How does a provost who's been at 1 institution for 23 years keep his fire burning by building programs from writing centers to MFA degrees to AI ready initiatives & what does he see as the future relevance question for higher education?Listen in to #EdUpThank YOU so much for tuning in. Join us on the next episode for YOUR time to EdUp!Connect with YOUR EdUp Team - Elvin Freytes & Dr. Joe Sallustio● Join YOUR EdUp community at The EdUp ExperienceWe make education YOUR business!P.S. Want to get early, ad-free access & exclusive leadership content to help support the show? Become an #EdUp Premium Member today!
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
Digitization was just the first step. True digital transformation in healthcare is only beginning. Dr. Michael Pfeffer, Chief Information and Digital Officer at Stanford Health Care, shares how he and his team are moving beyond electronic health records to deliver real-time, AI-powered care. From building ChatEHR, a secure, embedded LLM interface, to developing Stanford's FIRM framework for responsible AI, Pfeffer provides a behind-the-scenes look at one of the nation's most advanced digital health systems. In this episode, you'll learn: Why “digitized” isn't the same as “digital” How Stanford built the first integrated LLM in clinical workflows What makes healthcare AI safe, useful, and equitable Where AI adds real clinical value and where it doesn't The vision behind precision health at scale
Send us a text
Recommendation-driven search is here, and it's changing who gets discovered. We sit down with Chris Donnelly—serial entrepreneur and founder of Searchable—to unpack how large language models like ChatGPT, Gemini, Claude, and Perplexity are becoming the first stop for product advice and purchase decisions. When a single prompt returns five personalized options, the brands that earn those spots win the moment of truth. The question is: how do you become one of them?
We walk through the practical playbook for showing up in conversational search. Chris explains why the shift isn't the death of SEO but a re-centering on structure, context, and freshness. From llm.txt to robust product and article schema, from mapping micro-intents to deploying query fan-outs, you'll learn how to make your site legible to AI crawlers and present across the exact scenarios customers ask about. We also explore how social signals—especially from LinkedIn and Reddit—now inform LLM answers, and why a thoughtful personal brand can supercharge testing, feedback, and distribution.
Chris shares the rapid build story behind Searchable, including community-led iteration and a data engine that aggregates thousands of annotated site changes to prioritize the actions most likely to boost visibility, clicks, and revenue. We cover platform nuances without getting lost in complexity, highlighting a time-efficient strategy that reaches the majority of demand while adapting to new features like ChatGPT's shopping research. The takeaway is clear: AI won't replace great marketers; it amplifies those who use it to research faster, write smarter, and ship with purpose.
Ready to earn a place in the answers that matter? Follow the show, share this with a teammate who needs a strategy refresh, and leave a quick review to help others find it. Your next customer might be one prompt away.
This episode was recorded through a Descript call on December 11, 2025. Read the blog article and show notes here: https://webdrie.net/ai-search-is-reshaping-brand-discovery/
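The structured-data tactics Chris describes (product and article schema that AI crawlers can parse) boil down to embedding machine-readable JSON-LD in your pages. Here is a small Python sketch that generates a schema.org Product block; the product details are invented placeholders, and which fields any given LLM crawler actually rewards is not something the episode specifies.

```python
# Sketch: generate schema.org Product JSON-LD for an e-commerce page,
# the kind of structured markup AEO audits typically check for.
import json

def product_jsonld(name: str, description: str, price: str, currency: str, url: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "url": url,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }
    # Wrap in the script tag a CMS template would emit into the page <head>.
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(product_jsonld(
    name="Example Trail Shoe",  # placeholder product
    description="Lightweight trail running shoe with recycled upper.",
    price="129.00",
    currency="USD",
    url="https://example.com/products/trail-shoe",
))
```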
For episode 665 of the BlockHash Podcast, host Brandon Zemp is joined by Chris Zhu, CEO & Founder of Donut (Donut Browser). Donut Browser is the world's first agentic crypto browser built for trading. It integrates signal discovery, risk analysis, strategy generation and on-chain execution directly at the browser layer, allowing users to go from idea to live trade without switching tools.
Donut Labs raised $22M to build the first agentic AI crypto browser for traders. Investors include BITKRAFT, Makers Fund, HSG, Sky9 Capital, MPCi, Altos Ventures, Hack VC, and others, with support from leaders across Solana, Sui, Monad, Jupiter, Drift, and DeFi App. With more than 160K users on their waitlist, Donut will offer a full product suite including a Chrome extension, web app, mobile app, and a Chromium-based browser.
Brand vs Demand: Why B2B Marketing Is Stuck in a Measurement Trap
In this episode of The Metrics Brothers, Dave "CAC" Kellogg and Ray "Growth" Rike tackle one of the most persistent and controversial questions in B2B marketing: Brand vs. Demand.
The discussion is grounded in new data from the 2026 B2B Brand vs Demand Benchmark Report. While most marketing teams say they believe brand and demand are complementary, the numbers tell a more complicated story.
Today's reality? Marketing budgets are still heavily skewed toward short-term demand generation, with roughly 70% of spend allocated to demand and only ~25% to brand. Yet when asked how they want to invest, marketing leaders overwhelmingly say they'd prefer a much more balanced future, closer to 50% demand and 40% brand.
So why the disconnect? Ray and Dave dig into the root cause: measurement.
Demand generation is tied to metrics CFOs understand, like pipeline dollars, opportunities, and ARR. Brand, on the other hand, is still largely measured using proxy metrics like website traffic and awareness, leaving many executives unable to confidently link brand investments to revenue outcomes. Only 28% of companies say they can directly tie brand activity to pipeline, and when budgets are cut, brand is sacrificed five times more often than demand.
The episode also explores:
Why performance marketing struggles are pushing CMOs back toward brand
The growing inefficiency of demand spend aimed at "future buyers"
How much of the "demand" budget is effectively unmeasured brand spend
The dangerous gap between belief in brand and proof of impact
Why AEO, AI search, and LLM visibility will make brand ROI even harder and more urgent to measure
Ray and Dave don't just highlight the findings; they discuss the reality of Chief Marketing Officers making the Brand vs Demand budget allocation trade-offs.
One key takeaway? Until brand investments can be credibly connected to pipeline efficiency, win rates, and ARR, brand will remain more a faith-based investment than a financial one that CFOs understand.
If you're a CMO trying to defend brand spend, or a CFO trying to understand where marketing dollars truly drive growth, this episode is required listening.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.