We explore how digital PR, entity SEO, and shifting social algorithms shape reputation, discovery, and trust. Paige Donald explains why coherent signals across platforms now drive both reporter interest and AI overviews, and how to play the long game without chasing vanity metrics.

• personal speech risk and employer brand alignment
• entity identity across profiles and the knowledge graph
• echo chambers, LinkedIn's niche pivot, and Reddit research
• newsjacking with intent vs vanity metrics
• LLM visibility, AI overviews, and third-party authority
• long-game PR, reporter relationships, and useful measurement
• analytics gaps and mapping content to real demand
• trust recession and multi-channel credibility
• media training for executives and scalable video content
• agile startups outpacing legacy brands online

Guest Contact Information:
Website: paigepr.com
LinkedIn: linkedin.com/in/paigepr

More from EWR and Matthew:
Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcast
Free SEO Consultation: www.ewrdigital.com/discovery-call

With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve.

Find more episodes here:
youtube.com/@BestSEOPodcast
bestseopodcast.com
bestseopodcast.buzzsprout.com

Follow us on:
Facebook: @bestseopodcast
Instagram: @thebestseopodcast
Tiktok: @bestseopodcast
LinkedIn: @bestseopodcast

Connect With Matthew Bertram:
Website: www.matthewbertram.com
Instagram: @matt_bertram_live
LinkedIn: @mattbertramlive

Powered by: ewrdigital.com
Support the show
In this episode of Content, Briefly, Jimmy chats with Nate Turner, co-founder and CEO of Ten Speed, about the evolving world of organic marketing and how AI is reshaping content strategy.

They unpack what organic marketing really means today — beyond SEO — and explore how brands can unify SEO, content, LLM visibility, and digital PR into a powerful growth engine. Nate shares insights on the changing skills marketers need, the importance of leadership buy-in, and how to balance long-term strategy with agile execution.

Tune in for a practical conversation on thriving in the new era of organic marketing — with plenty of real-world examples and actionable tips to help you plan for 2026 and beyond.

This episode is sponsored by Ahrefs. Sign up for Ahrefs Webmaster Tools for free to improve your site's SEO performance and grow traffic from search.

************************
Useful Links:
Follow Jimmy on LinkedIn: https://www.linkedin.com/in/jimmydaly/
Follow Nate on LinkedIn: https://www.linkedin.com/in/nateturner1/
Ten Speed: https://www.tenspeed.io/

************************
Stay Tuned:
► Website: https://www.superpath.co/
► YouTube: https://www.youtube.com/@superpath
► LinkedIn: https://www.linkedin.com/company/superpath/
► Twitter: https://twitter.com/superpathco

************************
Don't forget to leave us a five-star review and subscribe to our YouTube channel.
EP 287 - How to Scale when Revenue Goes Flat

Nathan welcomes back Don Elliott from Gravitate to drop some absolute diamonds on the reality of running a marketing agency in today's economy. Don gets incredibly transparent about Gravitate hitting a plateau, the struggle of selling high-ticket websites to small businesses, and the major pivot they are making to fix it. We dive deep into "Home Care Web Solutions," a new subscription-based model designed specifically for the trades (Roofing, HVAC, Plumbing). Plus, we geek out on the future of search. If you haven't heard of LLM.txt or GEO (Generative Engine Optimization), you are falling behind. Don explains why "pretty" websites don't rank anymore and what you need to do to future-proof your digital presence. Watch the full episode.

Guest Name: Don Elliott
Title: CEO | Co-Founder | Writer
Company: Gravitate | Pour Choices
Expertise: Website Development and Digital Marketing
Website: https://www.gravitatedesign.com/
Website: https://pourchoices.pub/
Website: https://donelliott.us/

Watch the LTM Podcast Shorts playlist.
Watch The Entrepreneur Grind playlist.
Lords:
* Hallie
* Peter
* https://www.paschaefer.com/

Topics:
* Do you know where all your things are?
* How education doesn't melt
* Bebop Bytes Back
* Augustus Gloop by Roald Dahl
* https://allpoetry.com/-Augustus-Gloop...-

Microtopics:
* The Directrix of Cybernetic Security.
* Unity licensing from Unity as Unity.
* Fantasy Book of the Month. (FBOM)
* Part zombie, part ghost.
* Accidentally GenMoing your WriMo.
* A house with a bunch of your things in it, and they're everywhere.
* Knowing someone who knows how to find things and knowing someone who knows where things are.
* Knowing where to put something because it's where you first thought to look for it.
* A person who itches when they see somebody not using a switch statement.
* Having been gradually removing yourself from social media since back when Twitter was Twitter.
* Back when you could get out of a chair without grunting.
* Getting the whooping cough and coughing your disc out. (And you're in your twenties.)
* Whether your dad named you after the murderous robot in 2001.
* Seeing your students cheating poorly and teaching them how to do it well.
* Scaffolding it pedagogically.
* Big boat: hard turn.
* How do we get education to exhibit swarm behavior?
* A brand new exciting way to be bummed.
* Education by Panopticon.
* LLMs exposing how much of people's jobs and education are bullshit busywork.
* When does the salt jump?
* Putting together the 50s and then putting together the tens and then putting together the fours.
* The simplest shallowest version of active listening that exists.
* Doritos hacking the learning loop.
* Continually finding new opiates of the masses.
* Typing hex opcodes into the Beboputer.
* An effective educational tool that has never been less appealing to the youth it's targeted at.
* Steve Jobs coming out of his grave and slitting your throat if you install a programming tool on your iPhone.
* Making the sun wink and realizing that this is the rest of your life.
* Deescalating your LLM partner when it has an anxiety attack.
* Your Socratic Oxford Don persona.
* The Life Cycle of Software Objects.
* There is a mistake, and it is being overcome.
* Steps you can take to avoid Godzilla coming back and nature reclaiming the earth.
* A poem written by a beloved children's author who absolutely loathes fat people.
* Whether the terrible children in Charlie and the Chocolate Factory are all based on people that Roald Dahl knew.
* SwitchBitch, Roald Dahl's famous Typescript library.
* Making sure your weirdness is a kindness.
* Roald Dahl: boy did he do the stuff.
* Penning The Twits in an effort to "do something against beards."
* Why Stephen King?
* The Dollar Babies.
* Whether Stephen King is still on MySpace.
* Walking down the road and hearing Stephen King yelling at cloud.
* The Dave Barry game jam.
* Going into the sewer and solving puzzle platformer problems.
* Group hug vs. forming a blob.
* Tube Hippo is back!
* The game engine sorting hat.
* Coming out of character to talk about Inform 7.
* The year that you fucked around with interactive fiction but never shipped anything.
* Presuming that interactive fiction has continued to be great even after you stopped playing.
* Choosing Twine over Inform 7 because of your absolutely enormous forelimbs.
* LLMs as an extremely fancy Tarot deck.
How do you teach an LLM to write code you can actually trust? Carol's federal government team has been tasked with exploring unattended AI code generation, so she came to Adam and Tim for advice. Their first piece of guidance: whatever tools you pick today will be obsolete by the time you're done evaluating them. The real goal isn't adopting a specific workflow—it's building the skills to ride the wave.

Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @workingcode.dev on Bluesky. New episodes drop weekly on Wednesday.

And, if you're feeling the love, support us on Patreon.

With audio editing and engineering by ZCross Media.

Full show notes and transcript here.
How can you scale AI at the enterprise, yet still hit your climate goals? And can heavy AI usage and an enterprise's ESG mission co-exist? Ashutosh Ahuja lays it out for us.

Aligning AI With Climate And Business Goals -- An Everyday AI Chat with Jordan Wilson and Ashutosh Ahuja

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
AI's Environmental Impact and Climate Concerns
Companies Aligning AI with ESG Goals
AI Adoption Versus Carbon Footprint Tradeoffs
Metrics for Measuring AI's Environmental Impact
Business Efficiency Gains from AI Adoption
Real-World Examples: AI Offsetting Carbon Footprint
Industry Opportunities for Sustainable AI Integration
Future Trends: Efficient AI Models and Edge Computing

Timestamps:
00:00 Everyday AI Podcast & Newsletter
05:52 Balancing Progress and Legacy
07:03 "Should Companies Limit AI Usage?"
12:02 "Sentiment Analysis for Business Growth"
17:07 "Energy Efficiency Impacts ESG Metrics"
19:40 Robots, Energy, and AI Opportunity
21:41 AI Efficiency and Climate Balance
25:04 "Trust Instincts in Investments"

Keywords:
AI and climate, climate goals, aligning AI with ESG, environmental impact of AI, carbon footprint, energy use in AI data centers, water cooling for GPUs, sustainable business practices, enterprise AI strategy, ESG compliance, climate pledges, AI adoption in business, carbon footprint metrics, machine learning for sustainability, predictive analytics, ethical AI, green AI solutions, renewable energy sector, AI in waste management, camera vision for waste sorting, delivery robots, edge AI, small business AI implementation, AI efficiency, sentiment analysis, customer patterns, predictive maintenance, IoT data, auto scaling, cloud computing, resource optimization, SEC filings, brand sentiment tracking, LLM energy consumption, environmental considerations for AI, future of AI in climate action, business efficiency, human in the loop, philanthropic business practices, sustainable architecture, large language models and climate, tech industry climate initiatives, AI-powered resource savings, operational sustainability.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
Fei-Fei Li is a Stanford professor, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, and co-founder of World Labs. She created ImageNet, the dataset that sparked the deep learning revolution. Justin Johnson is her former PhD student, ex-professor at Michigan, ex-Meta researcher, and now co-founder of World Labs.

Together, they just launched Marble—the first model that generates explorable 3D worlds from text or images.

In this episode Fei-Fei and Justin explore why spatial intelligence is fundamentally different from language, what's missing from current world models (hint: physics), and the architectural insight that transformers are actually set models, not sequence models.

Resources:
Follow Fei-Fei on X: https://x.com/drfeifei
Follow Justin on X: https://x.com/jcjohnss
Follow Shawn on X: https://x.com/swyx
Follow Alessio on X: https://x.com/fanahova

Stay Updated:
If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.
Follow a16z on X: https://x.com/a16z
Follow a16z on LinkedIn: https://www.linkedin.com/company/a16z
Follow the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Follow the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details, please see http://a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Today on the show, we have Suzanne Keplinger, CEO of Force of Nature, a boutique performance marketing shop behind the growth of brands like Ring, Oofos, and other category leaders. With experience building data-driven growth engines across e-commerce and DTC, Suzanne has helped companies shift from chaotic marketing to orchestrated, scalable systems.

In this episode, we uncover why the future of performance marketing isn't about hacking audiences or chasing the "one winning creative"—but building portfolio-driven creative systems, trusting your data, and showing up clearly for both humans and algorithms.

We explore why Google and Meta still dominate the ecosystem, why clean data is the new competitive edge, and how brands should think about programmatic everything—from ads to email to retargeting.

We also discuss how Suzanne built the original performance infrastructure at Ring, how brands like Oofos leverage their "growth signal system," and why the best-performing companies today treat robots as part of their audience.

Finally, we dig deep into retention: why 10 customers leaving is the same as your 11th leaving, why the "leaky bucket" kills great brands, and how to know when it's time to stop relying on founder intuition — and start trusting the data.

As always, I'd love to hear from you. You can email me directly at andrew@churn.fm, and don't forget to follow us on X.

Churn FM is sponsored by Vitally, the all-in-one Customer Success Platform.
Mental health expert Dr Fred Moss joins The Next 100 Days podcast to talk about being an un-doctor.

Summary of Podcast

Introductions and Banter
Kevin and Graham introduce their guest, Dr. Fred Moss, a psychiatrist who refers to himself as the "un-doctor". They engage in some lighthearted banter about pets and technical difficulties before welcoming Dr. Moss to the podcast.

Dr. Moss' Unconventional Approach
Dr. Moss explains his untraditional approach to mental health, emphasising that he does not diagnose or medicate patients. Instead, he aims to help people see that their experiences are a normal part of the human condition, rather than pathological issues that need to be "fixed".

The Problem with Psychiatric Diagnoses
The group discusses how psychiatric diagnoses can become a way for people to avoid responsibility for their actions and experiences. Dr. Moss argues that these labels can become a crutch, preventing people from taking ownership of their lives and growth.

Embracing the Human Experience
Dr. Moss emphasises the importance of embracing the full range of human emotions and experiences, rather than trying to eliminate or medicate them. He suggests that being "well-adjusted to a profoundly sick society" is not a sign of mental health.

Finding Your Authentic Voice
Dr. Moss explains how many people learn to suppress their true selves in order to fit in and protect themselves. He encourages finding one's authentic voice and frequency as a path to healing and connection.

The Moss Method
Dr. Moss outlines his "Moss Method", a list of 20 practices and habits that can support mental and emotional well-being, including gratitude, nature, creativity, and authentic communication.

Closing Thoughts and Reflections
The group reflects on the key insights from the conversation and the value of Dr. Moss' unconventional perspective on mental health. They discuss how his approach could be applied to support business owners and entrepreneurs.

The Next 100 Days Podcast Co-Hosts

Graham Arrowsmith
Graham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads and conversions. Now, Graham is offering Answer Engine Optimisation that gets you ready to be found by LLM search.

Kevin Appleby
Kevin specialises in finance transformation and implementing business change. He's the COO of GrowCFO, which provides both community and CPD-accredited training designed to grow the next generation of finance leaders. You can find Kevin on LinkedIn and at kevinappleby.com
Connect with Will at:
https://bsky.app/profile/wiverson.com
https://www.linkedin.com/in/wiverson

Mentioned during the series:
Will's YouTube channel: https://changenode.com/ and https://www.youtube.com/@ChangeNode

Mentioned in this episode:
Killed by Google: https://killedbygoogle.com
Build your own LLM: https://www.freecodecamp.org/news/code-an-llm-from-scratch-theory-to-rlhf/

The post 314 Deflation Economy and Unemployment—Why although unemployment is low you still don't have a job first appeared on Agile Thoughts.
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

While building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. Santiago also warns against the "RAG drug": leaning too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability (see the sketch after these show notes), the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
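To make the "evals over vibe checks" point above concrete, here is a minimal sketch of the pattern: a fixed set of test cases with exact expected answers, scored automatically instead of eyeballed. This is not code from Maze or from the episode; the names, prompts, CVE IDs, and version numbers are all invented for illustration, and ask_model() is a canned stand-in for whatever real LLM call a pipeline would make.

```python
# Hypothetical sketch only: a tiny eval harness in the spirit of "evals over vibe checks."
# Nothing here comes from Maze or the episode; prompts, CVE IDs, and versions are made up,
# and ask_model() is a canned stand-in for a real LLM call.

from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str    # the question your pipeline would send to the model
    expected: str  # ground truth, e.g. an exact fix version

CASES = [
    EvalCase("Which version of libexample fixes CVE-2024-0001?", "2.4.7"),
    EvalCase("Is CVE-2024-0002 exploitable when the service is not internet-facing?", "no"),
]

def ask_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned text so the sketch runs end to end."""
    canned = {
        CASES[0].prompt: "The fix landed in version 2.4.7.",
        CASES[1].prompt: "No, exploitation requires the service to be reachable over the network.",
    }
    return canned.get(prompt, "I'm not sure.")

def run_evals(cases: list[EvalCase]) -> float:
    """Check every case with a simple substring match and return the pass rate."""
    passed = 0
    for case in cases:
        answer = ask_model(case.prompt).lower()
        if case.expected.lower() in answer:
            passed += 1
        else:
            print(f"FAIL: {case.prompt!r} -> {answer!r} (expected {case.expected!r})")
    return passed / len(cases)

if __name__ == "__main__":
    # Track this number across prompt and model changes instead of eyeballing outputs.
    print(f"pass rate: {run_evals(CASES):.0%}")
```

The same shape scales up in the obvious way: swap the canned stub for a real model call, grow the case set from past incidents, and watch the pass rate whenever the prompt or model changes.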
In this episode, Paul sits down with estate planning attorney Derek Wayne Jensen to demystify what estate planning really involves and why it's far more than drafting a simple will. Derek explains how planning evolves over a lifetime, beginning with young adults choosing beneficiaries, through major life milestones, to complex multi-generational legacy planning. They explore the biggest misconceptions, including the belief that estate planning is "one and done," and discuss why outdated beneficiary designations routinely undermine even the best plans. Derek also highlights triggers that should prompt a review: health scares, major purchases, business exits, new children or grandchildren, and significant increases in wealth. The conversation ends with practical guidance for high earners, business owners, and anyone whose documents are more than five years old.

--

About Derek:
Derek W. Jensen, JD, LLM, is a tax attorney who founded the North Seattle law firm Jensen Estate Law to assist individuals and their families with complex issues surrounding federal income tax, state and federal estate tax, estate law, business law, elder law and asset protection. Derek is a trusted advisor, with over 25 years of experience helping clients achieve their estate and tax planning goals. He makes it his priority to listen and mold documents for each unique case that happens upon him, while addressing all concerns his clients may have.

Podcast: Who Gets What on Apple, Spotify, Amazon, Youtube, and RSS
Instagram: @jensenestatelaw
Facebook: Jensen Estate Law
X (Twitter): @JensenEstateLaw

--

Timestamps:
01:11 – What estate planning really is (beyond wills and trusts)
02:15 – Why planning starts with young adults and evolves over life
03:37 – Biggest misconception: estate planning is "one and done"
05:05 – When to DIY vs. when to hire an attorney
06:29 – Why beneficiary designations often override your will
09:47 – Estate tax thresholds and the "life-stage" planning framework
13:00 – Essential documents every adult should have (POA, healthcare directives)

--

This Material is Intended for General Public Use. By providing this material, we are not undertaking to provide investment advice for any specific individual or situation or to otherwise act in a fiduciary capacity. Please contact one of our financial professionals for guidance and information specific to your individual situation.

Sound Financial LLC dba Sound Financial Group is a registered investment adviser. Information presented is for educational purposes only and does not intend to make an offer or solicitation for the sale or purchase of any specific securities, investments, or investment strategies. Investments involve risk and, unless otherwise stated, are not guaranteed. Be sure to first consult with a qualified financial adviser and/or tax professional before implementing any strategy discussed herein. Past performance is not indicative of future performance. Insurance products and services are offered and sold through Sound Financial LLC dba Sound Financial Group and individually licensed and appointed agents in all appropriate jurisdictions.

This podcast is meant for general informational purposes and is not to be construed as tax, legal, or investment advice. You should consult a financial professional regarding your individual situation. Guest speakers are not affiliated with Sound Financial LLC dba Sound Financial Group unless otherwise stated, and their opinions are their own.
Opinions, estimates, forecasts, and statements of financial market trends are based on current market conditions and are subject to change without notice. Past performance is not a guarantee of future results.
#EP322 Today on the Clean Power Hour, we dive into one of the biggest bottlenecks strangling the clean energy transition: the slow, messy world of zoning, permitting, and early-stage development. Our guests are Julia Wu and Anuj Saigal, co-founders of Spark AI, a Y Combinator-backed platform that has ingested and summarized nearly every zoning code, ordinance, and permitting pathway in the United States.

Julia Wu is CEO and co-founder of Spark AI. She's a software engineer who cut her teeth at Microsoft, Apple (working on Siri), and Brex before founding Spark. Her background spans AI, natural language processing, and financial technology.

Anuj Saigal is the co-founder of Spark AI and a veteran of the energy sector since 2010. He worked at the Department of Energy Loan Programs Office during the Obama administration, then moved through SunEdison, EVgo, and Natto (a video AI telematics company). Anuj has been a developer of utility-scale and distributed generation projects and understands the pain points of permitting and interconnection firsthand.

Key Discussion Points:
Spark AI uses large language models to aggregate zoning regulations, permitting requirements, local sentiment, and news articles for every jurisdiction in America
Three core use cases: early-stage greenfield site selection, monitoring projects in development for regulatory changes, and rapid acquisition due diligence
Standard Solar (Brookfield-owned) reduced acquisition diligence from months to one week, cutting time by 75%
Dynamic Energy accelerated site screening while improving community sentiment risk assessment
UGE used Spark to assess a massive portfolio acquisition, determining which parcels had viable zoning paths
The platform is LLM-neutral, easily swapping between OpenAI, Anthropic, DeepSeek, and other models to optimize for accuracy and speed (a generic sketch of this pattern follows these notes)
Unlike off-the-shelf AI tools, Spark delivers consistency, accuracy, and citations developers need for trust-but-verify workflows

Connect with Anuj and Julia
Julia Wu: https://www.linkedin.com/in/wu-julia/
Anuj Saigal: https://www.linkedin.com/in/anujsaigal/
Spark AI Website: https://www.sparkhq.ai/

Support the show

Connect with Tim
Clean Power Hour
Clean Power Hour on YouTube
Tim on Twitter
Tim on LinkedIn
Email tim@cleanpowerhour.com
Review Clean Power Hour on Apple Podcasts

The Clean Power Hour is produced by the Clean Power Consulting Group and created by Tim Montague. Contact us by email: CleanPowerHour@gmail.com

Corporate sponsors who share our mission to speed the energy transition are invited to check out https://www.cleanpowerhour.com/support/

The Clean Power Hour is brought to you by CPS America, maker of North America's number one 3-phase string inverter, with over 6GW shipped in the US. With a focus on commercial and utility-scale solar and energy storage, the company partners with customers to provide unparalleled performance and service. The CPS America product lineup includes 3-phase string inverters from 25kW to 275kW, exceptional data communication and controls, and energy storage solutions designed for seamless integration with CPS America systems. Learn more at www.chintpowersystems.com
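On the "LLM-neutral" point above, here is a generic sketch of how a pipeline can keep model choice behind a single interface so providers can be swapped per task. This is not Spark AI's implementation; the class, task, and model names are invented, and the adapters are stubs where real vendor SDK calls would go.

```python
# Generic illustration of an "LLM-neutral" layer; not Spark AI's implementation.
# Adapters are stubs where real vendor SDK calls (OpenAI, Anthropic, DeepSeek, etc.) would go.

from abc import ABC, abstractmethod

class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text completion for a prompt."""

class EchoStub(LLMClient):
    """Placeholder adapter so the sketch runs; a real adapter would wrap a vendor SDK."""
    def __init__(self, model_name: str):
        self.model_name = model_name

    def complete(self, prompt: str) -> str:
        return f"[{self.model_name}] summary of: {prompt[:60]}"

# One registry maps each task to whichever model currently handles it best;
# swapping vendors means editing this mapping, not the pipeline code.
REGISTRY: dict[str, LLMClient] = {
    "summarize_ordinance": EchoStub("fast-cheap-model"),
    "extract_setback_rules": EchoStub("high-accuracy-model"),
}

def run_task(task: str, prompt: str) -> str:
    """Pipeline code refers only to task names, never to a specific vendor."""
    return REGISTRY[task].complete(prompt)

if __name__ == "__main__":
    print(run_task("summarize_ordinance", "Section 4.2: ground-mounted solar permitted in AG districts..."))
```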
How to Build a Winning Strategy for Your B2B Brand

In a fast-paced business environment, marketers, agencies, and consultants must proactively help clients differentiate their brands in the marketplace. One way of doing this is by analyzing the strategy, messaging, and brand positioning, both for their own brands and key competitors. So how can teams conduct this kind of brand research and competitive analysis in a way that's insightful, efficient, and actionable for planning the next steps?

Tune in as the B2B Marketers on a Mission Podcast presents the Marketing DEMO Lab Series, where we sit down with Clay Ostrom (Founder, Map & Fire) and his SmokeLadder platform designed for brand research, messaging and positioning analysis, and competitive benchmarking. In this episode, Clay explained the platform's origins and features, emphasizing its role in analyzing brand positioning, core messaging, and competitive landscapes. He also stressed the importance of clear, consistent brand positioning and messaging, and how standardized criteria make it easier to compare brands across multiple business values. Clay also highlighted the value of objective, data-driven analysis to identify brand strengths, weaknesses, and gaps, and how tools like SmokeLadder can save significant time in gathering insights to build trust with clients. He provided practical steps for generating, refining, and exporting brand messaging and analysis for internal or client-facing use. Finally, Clay discussed how action items and recommendations generated from analysis can immediately support smart brand strategy decisions and expedite trust-building with clients.

https://www.youtube.com/watch?v=h4_o1PzF1Kk

Topics discussed in episode:
[1:31] The purpose behind building SmokeLadder and why it matters for B2B teams
[12:00] A walkthrough of the SmokeLadder platform and how it works
[14:51] SmokeLadder's core features
[17:48] How positioning scores and category rankings are calculated
[35:36] How differentiation and competitors are analyzed inside SmokeLadder
[44:07] How SmokeLadder builds messaging and generates targeted personas
[50:24] The key benefits and unique capabilities that set SmokeLadder apart

Companies and links:
Clay Ostrom
Map & Fire
SmokeLadder

Transcript

Christian Klepp 00:00 In an increasingly competitive B2B landscape, marketers, agencies and consultants need to proactively find ways to help their clients stand out amidst the digital noise. One way of doing this is by analyzing the strategy, messaging and positioning of their own brands and those of their competitors. So how can they do this in a way that's insightful, efficient and effective? Welcome to this first episode of the B2B Marketers on a Mission podcast Demo Lab Series, and I'm your host, Christian Klepp. Today, I'll be talking to Clay Ostrom about this topic. He's the owner and founder of the branding agency Map and Fire, and the creator of the platform Smoke Ladder that we'll be talking about today. So let's dive in.

Christian Klepp 00:42 All right, and I'm gonna say Clay Ostrom. Welcome to this first episode of the Demo Lab Series.

Clay Ostrom 00:50 I am super excited and very honored to be the first guest on this new series. It's awesome.

Christian Klepp 00:56 We are honored to have you here. And you know, let's sit tight, or batten down the hatches and buckle up, and whatever other analogy you want to throw in there, because we are going to unpack a lot of interesting features and discuss interesting topics around the platform that you've built.
And I think a good place to start, perhaps, Clay, before we start doing a walkthrough of the platform is, but let's start at the very beginning. What motivated you to create this platform called Smoke Ladder?

Clay Ostrom 01:31 So we should go all the way back to my childhood. I always dreamed of, you know, working on brand and positioning. You know, that was something I've always thought of since the early days, but no, but I do. I own an agency called Map and Fire, so I've been doing this kind of work for over 10 years now, and have worked with lots and lots of different kinds of clients, and over that time, developed different frameworks and a point of view about how to do this kind of work, and when the AI revolution kind of hit us all, it just really struck me that this was an opportunity to take a lot of that thinking and a lot of that, you know, again, my perspective on how to do this work and productize that and turn it into something that could be used by people when we're not engaged with them in some kind of service offering. So, so that was kind of the kernel of it. I actually have a background in computer science and product. So it was sort of this natural Venn diagram intersection of I can do some product stuff, I can do brand strategy stuff. So let's put it together and build something.

Christian Klepp 02:46 And the rest, as they say, is history.

Clay Ostrom 02:49 The rest, as they say, is a lot of nights and weekends and endless hours slaving away at trying to build something useful.

Christian Klepp 02:58 Sure, sure, that certainly is part of it, too.

Clay Ostrom 03:01 Yeah.

Christian Klepp 03:02 Let's not keep the audience in suspense for too long here, right? Like, let's start with the walkthrough. And before you share your screen, maybe I'll set this up a little bit, right? Because you, as you said, like, you know, you've built this platform. It's called Smoke Ladder, which I thought was a really clever name. You like to describe it as, like, your favorite SEO (Search Engine Optimization) tool, but for brand research and analysis. So I would say, like, walk us through how somebody would use this platform, like, whether they be a marketer that's already been, like, in the industry for years, or is starting out, or somebody working at a brand or marketing agency, and how does the platform address these challenges or questions that people have regarding brand strategy, analysis and research?

Clay Ostrom 03:49 Yeah, yeah. I use that analogy of the SEO thing, just because, especially early on, I was trying to figure out the best way to describe it to someone who hasn't seen it before. I feel like it's a, I'm not going to fall into the trap of saying this is the only product like this, but it has its own unique twists with what it can do. And I felt like SEO tools are something everybody has touched at one point or another. So I was using this analogy of, it's like the, you know, Semrush of positioning and messaging, or Ahrefs, depending on if you're a Coke or Pepsi person. But I always felt like that was just a quick way to give a little idea of the fact that it's both about analyzing your own brand, but it's also about competitive analysis and being able to see what's going on in the market or in your landscape, and looking specifically at what your competitors are doing and what their strengths and weaknesses are. So does that resonate with you in terms of, like, a shorthand way? I will say, I don't, I don't say that
It’s super explicitly on the website, but it’s been in conversation. Christian Klepp 05:02 No, absolutely, absolutely, that resonated with me. The only part that didn’t resonate with me is that I’m neither a coke or a Pepsi person. I’m more of a ginger ale type of guy. I digress. But yeah, let’s what don’t you share your screen, and let’s walk through this, right? Like, okay, if a marketing person were like, use the platform to do some research on, perhaps that marketers, like own company and the competitors as well, right? Like, what would they do? Clay Ostrom 05:32 Yeah, so that’s, that is, like you were saying, there’s, sort of, I guess, a few different personas of people who would potentially use this. And initially I was thinking a little more about both in house, people who, you know, someone who’s working on a specific brand, digging really deep on their own brand, whether they’re, you know, the marketing lead or whatever, maybe they’re the founder, and then this other role of agency owners, or people who work at an agency where they are constantly having to look at new brands, new categories, and quickly get up to speed on what those brands are doing and what’s the competitive space look like, you know, for that brand. And that’s something that, if you work at an agency, which obviously we both have our own agencies, we do this stuff weekly. I mean, every time a new lead comes in, we have to quickly get up to speed and understand something about what they do. And one of the big gaps that I found, and I’d be curious to kind of hear your thoughts on this, but I’ve had a lot of conversations with other agency owners, and I think one of the biggest gaps is often that brands are just not always that great at explaining their own brand or positioning or differentiation to you, and sometimes they have some documentation around it, but a lot of times they don’t. A lot of it’s word of mouth, and that makes it really hard to do work for them. If whatever you’re doing for them, whether that’s maybe you are working on SEO or maybe you’re working on paid ads or social or content, you have to know what the brand is doing and kind of what they’re again, what their strengths and weaknesses are, so that you can talk about that. I mean, do you come across that a lot in your work? Christian Klepp 07:33 How do I say this without offending anybody? I find, I mean jokes aside, I find, more often than not, in the especially in the B2B space, which is an area that I operate in, I find 888 point five times out of 10. We are dealing with companies that have a they, have a very rude, rudimentary, like, framework of something that remotely resembles some form of branding. And I know that was a very long winded answer, but it’s kind of sort of there, but not really, if you know what I mean. Clay Ostrom 08:17 Yeah. Christian Klepp 08:17 And there have been other extreme cases where they’ve got the logo and the website, and that’s as far as their branding goals. And I would say that had they had all these, this discipline, like branding system and structure in place, then people like maybe people like you and I will be out on a job, right and it’s something, and I’m sure you’ve come across this, and we’ll probably dig into this later, but like you, it’s something I’ve come across several times, especially in the B2B space, where branding is not taken seriously until it becomes serious. 
I know that sounds super ironic, right, but, and it's to the point of this platform, right, which we're going to dig into in a second, but it's, it's things, for instance, positioning, right? Like, are you, in fact, strategically positioned against competitors? Is your messaging resonating with, I would imagine, especially in the B2B context, the multiple target groups that you have, or that your company is going after? Right? Is that resonating, or is this all like something that I call the internal high five, where this has all been developed to please internal stakeholders, and then you take it to market, and it just does not, it just does not resonate with the target audience at all, right? So there's such a complex plethora of challenges here, right? That people like yourself, like you and I, are constantly dealing with, and I think that's also part of the reason why I would say a platform like this is important, because it helps to not just aggregate data. I mean, certainly it does that too, but it helps to put things properly, like, into perspective at speed. I think that might be, that might be something that you would have talked about later, but it does this at speed, because I think, from my own experience, one of the factors in our world that sometimes works against us is time, right?

Clay Ostrom 10:19 No, I totally agree, yeah, and, you know, we're lucky, I guess would be the word, that we are often hired to work on a company's strategy with them and help them clarify these things.

Christian Klepp 10:33 Absolutely.

Clay Ostrom 10:34 There are a million other flavors of agencies out there who are being hired to execute on work for a brand, and not necessarily being brought in to redefine, you know, what the brand, you know, their positioning and their messaging and some of these fundamental things, so they're kind of stuck with whatever they get. And like you said, a lot of times it's not much. It might be a logo and a roughly put together website, and maybe not a whole lot else. So, yeah, but I think your other point about speed is, that was a huge part of this. I think the market is only accelerating right now, because it's becoming so much easier to start up new companies and new brands and new products. And now we've got vibe coding, so you can technically build a product in a day, maybe launch it the next day, start marketing it, you know, by the weekend. And all of this is creating noise and competition, and it's all stuff that we have to deal with as marketers. We have to understand the landscape. We've got to quickly be able to analyze all these different brands, see where the strengths and weaknesses are and all that stuff. So…

Christian Klepp 11:46 Absolutely.

Clay Ostrom 11:46 But, yeah, I think that the speed piece is a huge part of this for sure.

Christian Klepp 11:51 Yeah. So, okay, so we're on, I guess this, this will probably be the homepage. So just walk us through what, what a marketing person would do if they want to use this platform, yeah?

Clay Ostrom 12:00 So the very first thing you do when you come in, and this was, when I initially conceived of this product, one of the things that I really wanted was the ability to have very quick feedback, be able to get analysis for whatever brand you're looking at, you know, right away, to be able to get some kind of, you know, insight or analysis done.
So the first thing you can do, and you can do this literally from the homepage of the website, you can enter in a URL for a brand, come into the product, even before you've created an account, you can come in and you can do an initial analysis, so you can put in whatever URL you're looking at, could be yours, could be a competitor, and run that initial analysis. What we're looking at here, this is, if you do create an account, this becomes your, as we say, like, Home Base, where you can save brands that you're looking at. You can see your history, all that good stuff. And it just gives you some quick bookmarks so that you can kind of flip back and forth between, maybe it's your brand, maybe it's some of the competitors you're looking at, and then it gives you just some quick, kind of high-level directional info. And I kind of break it up into these different buckets.

Clay Ostrom 13:23 And again, I'd love to kind of hear if this is sort of how you think about it, too. But there's sort of these different phases when you're working on a brand. And again, this is sort of from an agency perspective, but you've first got the sort of the research and the pitch piece. So this is before maybe you're even working with them. You're trying to get an understanding of what they do. Then we have discovery and onboarding, where we're digging in a little bit deeper. We're trying to really put together, what does the brand stand for, what are their strengths and weaknesses? And then we have the deeper dive, the strategy and differentiation. And this is where we're really going in and getting more granular with the specific value points that they offer, doing some of that messaging analysis, finding, finding some of the gaps of the things that they're talking about or not talking about, and going in deeper. So I kind of break it up into these buckets, based on my experience of how we engage with clients. Does that, does that make sense to you?

Christian Klepp 14:28 It does make sense, I think. But what could be helpful for the audience is, because this, this almost looks like it's a pre-cooked meal. All right, so what do we do? Why don't we try another, I mean, I think you use Slack for the analysis. Why don't we use another brand, and then just pop it into that analysis field, and then see what it comes out with.

Clay Ostrom 14:51 So the nice thing about this is, if you are looking at a brand that's been analyzed, you're going to get the data up really quickly. It'll basically pop up instantly. But you can analyze a brand from scratch as well. It just takes about a minute or so, basically, to kind of do some of the analysis. So for the sake of a demo, it's a little easier just to kind of look at something that we've got in there. But if it's a brand that, you know, maybe you're looking at a competitor for one of your brands, you know, because we've got about 6,000 brands that we've analyzed in here, there's a good chance there'll be some info on them. But so this is Pipedrive. So for whoever's not familiar, Pipedrive is, you know, it's a CRM (Customer Relationship Management), it's basically, you know, a lighter version of a HubSpot or Salesforce to basically track deals and opportunities for a business. But, so I flipped over, I don't know if it was clear there, but I flipped over to this brand brief tab.
And this is where we get, essentially, a high-level view of some key points about the brand, and I think about this as, this would be something that you would potentially share with a client if you were, you know, working with them and you wanted to review the brand with them and make sure that your analysis is on point. But you'll see it's kind of giving you some positioning scores, where you rank from a category perspective, message clarity, and then we've got things like a quick overview, positioning summary, who their target persona is, in this case, sales manager, sales operations lead, and some different value points. And then it starts to get a little more granular. We get into, like, key competitors, challenger brands. We do a little SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis, and then maybe one of the more important parts is some of these action items. So what do we do with this? Yeah, and obviously, these are, these are starting points. This is not, it's not going to come in and, you know, instantly be able to tell you strategically exactly what to do, but it's going to give you some ideas of, based on the things we've seen, here are some reasonable points that you might want to be looking at to, you know, improve the brand, make it, make it stronger.

Christian Klepp 17:13 Gotcha. Gotcha. Now, this is all great, Clay, but like, I think, for the benefit of the audience, can we scroll back up, please? And let's just walk through these one by one, because I think it's important for the audience/potential future users/customers of Smoke Ladder, right? To understand, to understand this analysis in greater depth. And also, like, specifically, like, let's start with the positioning score, right, like, out of 100, like, what is this? What is this based on? And how was this analyzed? Let's start with that.

Clay Ostrom 17:48 Yeah, and this is where the platform really started. And I'm going to actually jump over to the positioning tab, because this will give us all the detail around this particular feature. But this is, this was where I began the product. I kind of think of this as being, in many ways, sort of the heart and soul of it. And when I mentioned earlier about this being based on our own work and frameworks and how we approach this, this is very much the case with this. This is, you know, the approach we use with the product is exactly how we work with clients when we're evaluating their positioning. And it's, it's basically, it's built off a series of scores. And what we have here are 24 different points of business value, which, if we zoom in just a little bit down here, we can see things like reducing risk, vision, lowering cost, variety, expertise, stability, etc. So there's 24 of these that we look at, and it's meant to be a way that we can look across different brands and compare and contrast them. So it's creating, like, a consistent way of looking at brands, even if they're not in the same category, or, you know, have slightly different operating models, etc. But what we do is we go in and we score every brand on each of these 24 points. And if we scroll down here a little bit, we can see the point of value, the exact score they got, the category average, so how it compares against, you know, all the other brands we've analyzed, and then a little bit of qualitative information about why they got the score.
Christian Klepp 19:27 Sorry, Clay, can I just jump in for a second? So these, these attributes, or these key values that you had in the graph at the top right, like, are these consistent throughout regardless of what brand is being analyzed, or do they at least change?

Clay Ostrom 19:42 It's consistent.

Christian Klepp 19:43 Consistent?

Clay Ostrom 19:44 Yeah, and that was one of the sort of strategic decisions we had to make with the product was, you know, there's a, maybe another version of this, where you do different points depending on maybe the category, or, you know, things like that. But I wanted to do it consistent because, again, it allows us to look at every brand through the same lens. It doesn't mean that every brand, you know, there are certain points of value that just aren't maybe relevant for a particular brand, and that's fine, they just won't score as highly in those. But at least it gives us a consistent way to look at it, so when you're looking at 10 different competitors, you know you've got a consistent way to look at them together.

Christian Klepp 20:26 Right, right, right. Okay, okay, all right, thanks for that. Now let's go down to the next section there, where you've got, like, this table with, like, four different columns here. So you mentioned that these are being scored against other brands in their category. Like, can you share it with the audience? Like, how many other brands are being analyzed here?

Clay Ostrom 20:51 Yeah, well, it depends on the category. So again, we've got 6,000, you know, heading towards 7,000 brands that we've analyzed collectively. Each category varies a little bit, but, you know, some categories, we have more brands than others. But what this allows us to do is, again, to quickly look at this and say, okay, for Pipedrive, a big focus for Pipedrive is organization, simplification. You know, one of their big value props is we're an easier tool to use than Salesforce or HubSpot. You can get up to speed really quickly. You don't have all the setup and configurations and all that kind of stuff. So this is showing us that, yes, like, their messaging, their content, their brand does, in fact, do a good job of making it clear that simplicity is a big part of Pipedrive's message. And they do that by talking about it a lot in their messaging, having case studies, having testimonials, all these things that support it. And that's how we come up with these scores, is by saying, like, the brand emphasizes these points well, they talk about it clearly, and that's what we base it on.

Christian Klepp 22:04 Okay, okay.

Clay Ostrom 22:06 But as you come, I was just gonna say, as you come down here, you can see, so the green basically means that they score well above average for that particular point. Yellow is, you know, kind of right around average, or maybe slightly above, and then red means that they're below average for that particular point. So for example, like, variety of tools, they don't emphasize that as much with Pipedrive, maybe compared to, again, like a Salesforce or a HubSpot that has a gazillion tools. With Pipedrive, that's not a big focus for them. So they don't score as highly there, but you can kind of just get a quick view of, okay, here are the things that they're really strong with, and here are the things that maybe they're, you know, kind of weak or below average.

Christian Klepp 22:58 Yeah, yeah. Well, that's certainly interesting, because I, you know, I've, I've used the, I've used the platform for analyzing some of my clients' competitor brands.
And, you know, when I’m looking at this, like analysis with the scoring, with the scoring sheet, it, I think it will also be interesting perhaps in future, because you’ve got a very detailed breakdown of, okay, the factors and how they’re scored, and what the brand value analysis is also, because, again, in the interest of speed and time, it’d be great if the platform can also churn out maybe a one to two sentence like, summary of what is this data telling us, right? Because I’m thinking back to my early days as a product manager, and we would spend hours, like back then on Excel spreadsheets. I’m dating myself a little bit here, but um, and coming up with this analysis and charts, but presenting that to senior management, all they wanted to know was the one to two sentence summary of like, come on. What are you telling me with all these charts, like, what is the data telling you that we need to know? Right? Clay Ostrom 24:07 I know it’s so funny. We again, as strategists and researchers, we love to nerd out about the granular details, but you’re right. When you’re talking to a leader at a business, it does come down to like, okay, great. What do we do? And so, and I flipped back over to slacks. I knew I had already generated this but, but we’re still in the positioning section here, but we have this get insights feature. So basically it will look at all those scores and give you kind of, I think, similar to what you’re describing. Like, here’s three takeaways from what we’re seeing. Okay, okay, great, yeah, so we don’t want to leave you totally on your own to have to figure it all out. We’ll give you, give you a little helping hand. Christian Klepp 24:53 Yeah. You don’t want to be like in those western movies, you’re on your own kid. Clay Ostrom 24:59 Yeah. We try not to strand you again. There’s a lot of data here. I think that’s one of the strengths and and challenges with the platform, is that we try to give you a lot of data. And for some people, you may not want to have to sift through all of it. You might want just sort of give me the three points here. Christian Klepp 25:19 Absolutely, absolutely. And at the very least they can start pointing you in the right direction, and then you could be, you could then, like, through your own initiative, and perhaps dig a little bit deeper and perhaps find some other insights that may be, may be relevant, right? Clay Ostrom 25:35 Totally. Christian Klepp 25:36 Hey, it’s Christian Klepp here. We’ll get back to the episode in a second. But first, I’d like to tell you about a new series that we’re launching on our show. As the B2B landscape evolves, marketers need to adapt and leverage the latest marketing tools and software to become more efficient. Enter B2B Marketers on a Mission Marketing Demo Lab where experts discuss the latest tools and software that empower you to become a better B2B marketer. Tune in as we chat with product experts. Provide unbiased product reviews, give advice and deliver insights into real world applications and actionable tips on tools and technologies for B2B marketing. Subscribe to the Marketing Demo Lab, YouTube channel and B2B Marketers on a Mission, on Apple podcasts, Spotify, or wherever you get your favorite podcasts. Christian Klepp 26:21 All right. Now, back to the show, if we can, if we could jump back, sorry, to the, I think it was the brand brief, right? Like, where we where we started out, and I said, let’s, let’s dig deeper. Okay, so then, then we have, okay, so we talked about positioning score. 
Now we’re moving on to category rank and message clarity score. What does that look like? Clay Ostrom 26:41 Yeah. So the category rank is, it’s literally just looking at the positioning score that you’ve gotten for the brand and then telling you within this category, where do you sort of fall in the ranking, essentially, or, like, you know, how do we, you know, for comparing the score against all the competitors, where do you fall? So you can see, with Slack, they’re right in the middle. And it’s interesting, because with a product like Slack, even though we all now know what slack is and what it does and everything. Christian Klepp 27:18 Yeah. Clay Ostrom 27:19 The actual messaging and content that they have now, I think maybe doesn’t do as good of a job as it maybe did once upon a time, and it’s gotten as products grow and brands grow, they tend to get more vague, a little more broad with what they talk about, and that kind of leads to softer positioning. So that’s sort of what we’re seeing reflected here. And then the third score is the message clarity score, which we can jump into, like, a whole different piece. Christian Klepp 27:48 Four on a tennis not a very high score, right? Clay Ostrom 27:52 Yeah. And again, I think it’s a product, of, we can kind of jump into that section. Christian Klepp 27:57 Yeah, let’s do that, yeah. Clay Ostrom 27:59 But it’s, again, a product, I think of Slack being now a very mature product that is has gotten sort of a little vague, maybe a little broader, with their messaging. But the message clarity score, we basically have kind of two parts to this on the left hand side are some insights that we gather based on the messaging. So what’s your category, quick synopsis of the product. But then we also do some things, like… Christian Klepp 28:33 Confusing part the most confusing. Clay Ostrom 28:36 Honestly to me, as I get I’d love to hear your experience with this, but coming into a new brand, this is sometimes one of the most enlightening parts, because it shows me quickly where some gaps in what we’re talking about, and in this case, just kind of hits on what we were just saying a minute ago. Of the messaging is overloaded with generic productivity buzzwords, fails to clearly differentiate how Slack is better than email or similar tools, etc. But also, this is another one that I really like, and I use this all the time, which is the casual description. So rather than this technical garbage jargon, you know, speak, just give me. Give it to me in plain English, like we’re just chatting. And so this description of it’s a workplace chat app for teams to message, collaborate, share files. Like, okay, cool. Like, yeah, you know, I get it. Yeah, I already know what slack is. But if I didn’t, that would tell me pretty well. Christian Klepp 29:33 Absolutely, yeah, yeah. No, my experience with this is has been, you know, you and I have been in the branding space for a while. So for the trained eye, when you look at messaging, you’ll know if it’s good or not, right. And we come I mean, I’m sure you do the same clay, but I also come to my own like conclusions based on experience of like, okay, so why do I think that that’s good messaging, or why do I think that that’s confusing messaging? Or it falls short, and why and how can that be improved? 
But it’s always good to have validation with either with platforms like this, where you have a you have AI, or you have, you have a software that you can use that analyzes, like, for example, like the messaging on a website, and it dissects that and says, Well, okay, so this is what they’re getting, right? So there’s a scoring for that, so it’s in the green, and then this is, this is where it gets confusing, right? So even you run that through, you run that through the machine, and the machine analyzes it as like, Okay, we can’t clearly, clearly define what it is they’re doing based on the messaging, right? And for me, that’s always a it’s good. It’s almost like getting a second doctor’s opinion, right? And then you go, Aha. So I we’ve identified the symptoms now. So let’s find the penicillin, right? Like, let’s find the remedy for this, right? Clay Ostrom 30:56 Yeah, well, and I like what you said there, because part of the value, I think, with this is it’s an objective perspective on the brand, so it doesn’t have any baggage. It’s coming in with fresh eyes, the same way a new customer would come into your website, where they don’t know really much about you, and they have to just take what you’re giving at face value about what you present. And we as people working on brands get completely blinded around what’s actually working, what’s being communicated. There’s so much that we take for granted about what we already know about the brand. And this comes in and just says, Okay, I’m just, I’m just taking what you give me, and I’m going to tell you what I see, and I see some gaps around some of these things. You know, I don’t have the benefit of sitting in your weekly stand up meeting and hearing all the descriptions of what you’re actually doing. Christian Klepp 31:59 I’m sorry to jump in. I’m interested to know, like, just, just based on what we’ve been reviewing so far, like, what has your experience been showing this kind of analysis to clients, and how do they respond to some of this data, for example, that you know, you’re walking us through right now? Clay Ostrom 32:18 Yeah, I think it’s been interesting. Honestly, I think it can sometimes feel harsh. And I think again, as someone who’s both run an agency and also built worked on brands, we get attached to our work on an emotional level. Christian Klepp 32:42 Absolutely. Clay Ostrom 32:42 Even if we think about it as, you know, this is just work, and it’s, you know, whatever, we still build up connections with our work and we want it to be good. And so I think there’s sometimes a little bit of a feeling of wow, like that’s harsh, or I would have expected or thought we would have done better or scored better in certain areas, but that is almost always followed up with but I’m so glad to know where, where we’re struggling, because now I can fix it. I can actually know what to focus on to fix, and that, to me, is what it’s all about, is, yes, there’s a little bit of feelings attached to some of these things, maybe, but at the end of the day, we really want it to be good. We want it to be clear. We don’t want to be a 4 out of 10. We want to be a 10 out of 10. And what specifically do we need to do to get there? And that’s really what we’re trying to reveal with this. So I think, you know, everybody’s a little different, but I would say the reactions are typically a mix of that. It’s like, maybe an ouch, but a Oh, good. Let’s work on it. Christian Klepp 33:55 Absolutely, absolutely. Okay. 
So we’ve got brand summary, we’ve got fundamentals, then quality of messaging is the other part of it, right? Clay Ostrom 34:02 So, yeah, so this, this is, this is where the actual 4 out of 10 comes. We have these 10 points that we look at and we say, Okay, are you communicating these things clearly? Are you communicating who your target customer is, your category, your offering, where you’re differentiated benefits? Do you have any kind of concrete claim about what you do to support you know what you’re what you’re selling? Is the messaging engaging? Is it concise? You’ll see here a 7% on concise. That’s basically telling us that virtually no brands do a good job of being concise. Only about 7% get a green check mark on this, and kind of similar with the jargon and the vague words big struggle points with almost every brand. Christian Klepp 34:55 Streamline collaboration. Clay Ostrom 34:58 So we can see here with Slack. You know some of the jargon we got, KPIs (Key Performance Indicators), MQLs (Marketing Qualified Lead), if you’re in the space, you could argue like, oh, I kind of know what those things are. But depending on your role, you may not always know. In something like Salesforce marketing cloud, unless you’re a real Salesforce nerd, you probably have no idea what that is. But again, it’s just a way to quickly identify some of those weak points, things that we could improve to make our message more clear. Christian Klepp 35:27 Yes, yes. Okay, so that was the messaging analysis correct? Clay Ostrom 35:33 Yeah. Christian Klepp 35:33 Yeah. Okay. So what else have we got? Clay Ostrom 35:36 Yeah, so I think one other thing we could look at just for a sec, is differentiation, and this is this kind of plays off of what we looked at a minute ago with the positioning scores. But this is a way for us to look head to head with two different brands. So in this case, we’ve got Slack in the red and we’ve got Discord in the greenish blue. And I think of these, these patterns, as sort of the fingerprint of your brand. So where you Where are you strong? Where are you weak? And if we can overlay those two fingerprints on top of each other, we can see, where do we have advantages, and where does our competitor have advantages? So if we come down, we can sort of see, and this is again, for the nerds like me, to be able to come in and go deep, do kind of a deep dive on specifically, why did, why does Discord score better than Slack in certain areas. And at the bottom here we can see a kind of a quick summary. So slack is stronger in simplification, saving time, Discord has some better messaging around generating revenue, lowering costs, marketability. But again, this gives us a way to think about what are the things we want to double down on? So what do we want to actually be known for in the market? Because we can’t be known for everything. You know, buyers can maybe only remember a couple things about us. What are those couple things where we’re really strong, where we really stand out, and we’ve got some separation from the competitors. Christian Klepp 37:18 Right, okay, okay, just maybe we take a step back here, because I think this is great. It’s very detailed. 
It gets a bit granular, but I think it’s also going back to a conversation that you and I had previously about, like, Okay, why is it so important to be armed with this knowledge, especially if you’re in the marketing role, or perhaps even an agency talking to a potential client going in there already armed with the information about their competitors. And we were talking about this being a kind of like a trust building mechanism, right? For lack of a better description, right? Clay Ostrom 38:03 Yeah, I think to me, what I like about this, and again, this does come out of 10 years of doing work, this kind of work with clients as well, is it’s so easy to fall into a space of soft descriptions around things like positioning and just sort of using vague, you know, wordings or descriptions, and when you can actually put a number on it, which, again, it’s subjective. This isn’t. This isn’t an objective metric, but it’s a way for us to compare and contrast. It allows us to have much more productive conversations with clients, where we can say we looked at your brand, we we what based on our analysis, we see that you’re scoring a 10 and a 9 on simplicity and organization, for example. Is that accurate to you like do you think that’s what you all are emphasizing the most? Does that? Does that resonate and at the same time, we can say, but your competitors are really focused on there. They have a strong, strong message around generating revenue and lowering costs for their customers. Right now, you’re not really talking about that. Is that accurate? Is that like, what you is that strategically, is that what you think you should be doing so really quickly, I’ve now framed a conversation that could have been very loose and kind of, you know, well, what do you think your strategy is about? What do you know? And instead, I can say, we see you being strong in these three points. We see your competitors being strong in these three points. What do you think about that? And I think that kind of clarity just makes the work so much more productive with clients, or just again, working on your own brand internally. So what do you think about that kind of perspective? Christian Klepp 40:08 Yeah, no, no, I definitely agree with that. It’s always and I’ve been that type of person anyway that you know you go into a especially with somebody that hasn’t quite become a client yet, right? One of the most important things is also, how should I put this? Certainly the trust building part of it needs to be there. The other part is definitely a demonstration of competence and ability, but it’s also that you’ve been proactive and done your homework, versus like, Okay, I’m I’m just here as an order taker, right? And let’s just tell me what to do, and I’ll do it right? A lot and especially, I think this has been a trend for a long time already, but a lot of the clients that I’ve worked with now in the past, they want to, they’re looking for a partner that’s not just thinking with them, it’s someone that’s thinking ahead of them. And this type of work, you know what we’re seeing here on screen, this is the type of work that I would consider thinking ahead of them, right? Clay Ostrom 41:18 No, I agree. I think you framed that really well. Of we’re trying to build trust, because if we’re going to make any kind of recommendations around a change or a shift, they have to believe that we know what we’re talking about, that we’re competent, that we’ve done the work. And I think I agree with you. 
I think like this, it’s kind of funny, like we all, I think, on some base level, are attracted to numbers and scores. It just gives us something to latch on to. But I think it also, like you said, it gives you a feeling that you’ve done your work, that you’ve done your homework, you’ve studied, you’ve you’ve done some analysis that they themselves may have never done on this level. And that’s a big value. Christian Klepp 42:08 Yes, and a big part of the reason just to, just to build on what you said, a big part of the reason why they haven’t done this type of work is because it’s not so much. The cost is certainly one part of it, but it’s the time, it’s a time factor and the resource and the effort that needs to be put into it. Because, you know, like, tell me if you’ve never heard this one before, but there are some, there are some companies that we’ve been working with that don’t actually have a clearly, like, you know, a clear document on who their their target personas are, yeah, or their or their ICPs, never mind the buyer’s journey map. They don’t, they don’t even have the personas mapped out, right? Clay Ostrom 42:52 100% Yeah, it’s, and it’s, I think you’re right. It’s, it’s a mix of time and it’s a mix of just experience where, if you are internal with a brand, you don’t do this kind of work all the time. You might do it at the beginning. Maybe you do a check in every once in a while, but you need someone who’s done this a lot with a lot of different brands so that they can give you guidance through this kind of framework. But so it’s, you know, so some of it is a mix of, you know, we don’t have the time always to dig in like this. But some of it is we don’t even know how to do it, even if we did have the time. So it’s hopefully giving, again, providing some different frameworks and different ways of looking at it. Christian Klepp 43:41 Absolutely, absolutely. So okay, so we’ve gone through. What is it now, the competitor comparison. What else does the platform provide us that the listeners and the audience should be paying attention to here? Clay Ostrom 43:55 So I’ll show you two more quick things. So one is this message building section. So this is… Christian Klepp 44:03 Are you trying to put me out of a job here Clay? Clay Ostrom 44:07 Well, I’ll say this. So far in my experience with this, it’s not going to put us out of a job, but it is going to hopefully make our job easier and better. It’s going to make us better at the work we do. And that’s really, I think that’s, I think that’s kind of, most people’s impression of AI at this point is that it’s not quite there to replace us, but it’s sure, certainly can enhance what we do. Christian Klepp 44:36 Yeah, you’ll excuse me, I couldn’t help but throw that one out. Clay Ostrom 44:38 Yeah, I know, trust me, I’m this. It’s like I’m building a product that, in a sense, is undercutting, you know, the work that I do. So it is kind of a weird thing, but this message building section, which is a new part of the platform. It will come in, and you can see on the right hand side. And there’s sort of a quick summary of all these different elements that we’ve already analyzed. And then it’s going to give you some generated copy ideas, including, if I zoom in a little bit here, we’ve got an eyebrow category. This is again for Slack. It’s giving us a headline idea, stay informed without endless emails. 
Sub headline call to action, three challenges that your customers are facing, and then three points about your solution that help address those for customers. So it’s certainly not writing all of your copy for you, but if you’re starting from scratch, or you’re working on something new, or even if you’re trying to refresh a brand. I think this can be helpful to give you some messaging that’s hopefully clear. That’s something that I think a lot of messaging misses, especially in B2B, it’s, it’s not always super clear, like what you even do. Christian Klepp 45:56 Don’t get me started. Clay Ostrom 45:59 So hopefully it’s clear. It’s, you know, again, it’s giving you some different ideas. And that you’ll see down here at the bottom, you can, you can iterate on this. So we’ve got several versions. You can actually come in and, you know, you can edit it yourself. So if you say, like, well, I like that, but not quite that, you know, I can, you know, get my human touch on it as well. But yeah, so it’s a place to iterate on message. Christian Klepp 46:25 You can kind of look at it like, let’s say, if you’re writing a blog article, and this will give you the outline, right? Yeah. And then most of the AI that I’ve worked with to generate outlines, they’re not quite there. But again, if you’re starting from zero and you want to go from zero to 100 Well, that’ll, that’ll at least get you to 40 or 50, right? But I’m curious to know, because we’re looking at this now, and I think this, I mean, for me, this is, this is fascinating, but, like, maybe, maybe this will be part of your next iteration. But will this, will this generate messaging that’s already SEO optimized. Clay Ostrom 47:02 You know, it’s not specifically geared towards that, but I would say that it ends up being maybe more optimized than a lot of other messaging because it puts such an emphasis on clarity, it naturally includes words and phrases that I think are commonly used in the space more so than you know, maybe just kind of typical off the shelf Big B2B messaging, Christian Klepp 47:27 Gotcha. I had a question on the target persona that you’ve got here on screen, right? So how does the platform generate the information that will then populate that field because, and when I’m just trying to think about like, you know, because I’ve been, I’ve been in the space for as long as you have, and the way that I’ve generated target personas in the past was not by making a wild guess about, like, you know, looking at the brand’s website. It’s like having conducting deep customer research and listening to hours and hours of recordings, and from there, generating a persona. And this has done it in seconds. So… Clay Ostrom 48:09 Yeah, it’s so the way the system works in a couple different layers. So it does an initial analysis, where it does positioning, messaging analysis and category analysis, then you can generate the persona on top of that. So it takes all the learnings that it got from the category, from the product, from your messaging, and then develops a persona around that. And it’s, of course, able to also pull in, you know, the AI is able to reference things that it knows about the space in general. But I have found, and this is true. I was just having a conversation with someone who works on a very niche brand for a very specific audience, and I was showing him what it had output. And I said, Tell me, like, Don’t hold back. Like, is this accurate? 
He said, Yeah, this is, like, shockingly accurate for you know, how we view our target customer. So I think it’s pretty good. It’s not again, not going to be perfect. You’re going to need to do some work, and you still got to do the research, but, but, yeah. Christian Klepp 49:13 Okay, fantastic, fantastic. How do, I guess there’s the option, I see it there, like, download the PDF. So anything that’s analyzed on the platform can then be exported in a PDF format, right? Like, like, into a report. Clay Ostrom 49:28 Yeah, right now you can export the messaging analysis, or, sorry, the the messaging ideation that you’ve done, and then in the brand brief you can also, you can download a PDF of the brand brief as well. So, those are the two main areas. I’m still working on some additional exports of data so that people can pull it into a spreadsheet and do some other stuff with it. Christian Klepp 49:49 Fantastic, fantastic. That’s awesome, Clay. I’ve got a couple more questions before I let you go. But this has been, this has been amazing, right? Like and I really hope that whoever’s in the one listening and, most importantly, watching this, I hope that you really do consider like, you know, taking this for a test drive, right? How many I might have asked you this before, because, you know, I am somebody that does use, you know, that does a lot of this type of research. But how much time would you say companies would save by using Smoke Ladder? Clay Ostrom 50:24 It’s a good question. I feel like I’m starting to get some feedback around that with from our users, but I mean, for me personally, I would typically spend an hour or two just to get kind of up to speed initially, with a brand and kind of look at some of their competitors. If I’m doing a deep dive, though, if I’m actually doing some of the deeper research work, it could be several hours per client. So I don’t know. On a given week, it might depend on how many clients you’re talking to. Could be anywhere from a few hours to 10 hours or more, depending on how much work you’re doing. But, yeah, I think it’s a decent amount. Christian Klepp 51:07 Absolutely, absolutely. I mean, this definitely does look like a time saver. Here comes my favorite question, which you’re gonna look at me like, Okay, I gotta, I gotta. Clay Ostrom 51:17 Now bring it on. Let’s go. Christian Klepp 51:22 Folks that are not familiar with Smoke Ladder are gonna look at this, um, and before they actually, um, take it upon themselves to, like, watch, hopefully, watch this video on our channel. Um, they’re gonna look at that and ask themselves, Well, what is it that Smoke Ladder does that? You know that other AI couldn’t do, right, like, so I guess what I’m trying to say is, like, Okay, why would they use? How does the platform differ from something like ChatGPT, Perplexity or Claude, right? To run a brand analysis? Clay Ostrom 52:00 Yeah, no, I think it’s a great question. I think it’s sort of the it’s going to be the eternal AI question for every product that has an AI component. And I would say to me, it’s three things. So one is the data, which we talked about, and I didn’t show you this earlier, but there is a search capability in here to go through our full archive of all the brands we’ve analyzed, and again, we’ve analyzed over 6000 brands. 
So the data piece is really important here, because it means we’re not just giving you insights and analysis based on the brand that you’re looking at now, but we can compare and contrast against all the other brands that we’ve looked at in the space, and that’s something that you’re not going to get by just using some off the shelf standard LLM (Large Language Model) and doing some, you know, some quick prompts with that. The next one, I think, to me that’s important is it’s the point of view of the product and the brand. Like I said, this is built off of 10 plus years of doing positioning and messaging work in the space. So you’re getting to tap into that expertise and that approach of how we do things and building frameworks that make this work easier and more productive that you wouldn’t get, or you wouldn’t know, just on your own. And then the last one, the last point, which is sort of the kind of like the generic software answer, is you get a visual interface for this stuff. It’s the difference between using QuickBooks versus a spreadsheet. You can do a lot of the same stuff that you do in QuickBooks and a spreadsheet, but wouldn’t you rather have a nice interface and some easy buttons to click that make your job way, way easier and do a lot of the work for you and also be able to present it in a way that’s digestible and something you could share with clients? So the visual component in the UI is sort of that last piece. Christian Klepp 54:01 Absolutely. I mean, it’s almost like UX and UI one on one. That’s, that’s pretty much like a big part of, I think what it is you’re trying to build here, right? Clay Ostrom 54:13 Yeah, exactly. It’s just it’s making all of those things that you might do in an LLM just way, way easier. You know, you basically come in, put in your URL and click a button, and you’re getting access to all the data and all the insights and all this stuff so. Christian Klepp 54:29 Absolutely, absolutely okay. And as we wrap this up, this has been a fantastic conversation, by the way, how can the audience start using Smoke Ladder, and how can they get in touch with you if they have questions, and hopefully good questions. Clay Ostrom 54:47 Yeah, so you can, if you go to https://smokeladder.com/ you can, you can try it out. Like I said, you can basically go to the homepage, put in a URL and get started. You don’t even have to create an account to do the initial analysis. But you can create FREE account. You can dig in and see, you know, play around with all the features, and if you use it more, you know, we give you a little bit of a trial period. And if you use it beyond that, then you can pay and continue to use it, but, but you can get a really good flavor of it for free. Christian Klepp 55:16 Fantastic, fantastic. Oh, last question, because, you know, it’s looking me right in the face now, industry categories. How many? How many categories can be analyzed on the platform? Clay Ostrom 55:26 Yeah, yeah. So right now, we have 23 categories in the system currently, which sounds like a lot, but when you start to dig into especially B2B, it’s we will be evolving that and continuing to add more, but currently, there’s 23 different categories of businesses in there. Christian Klepp 55:46 All right, fantastic, fantastic. Clay, man. This has been so awesome. Thank you so much for your time and for your patience and walking us through this, this incredible platform that you’ve built and continue to build. And you know, I’m excited to continue using this as it evolves. 
Clay Ostrom 56:06 Thank you. Yeah, no. Thanks so much. And you know, if anybody, you know, anybody who tries it out, tests it out, please feel free to reach out. We have, you know, contact info on there. You can also hit me up on LinkedIn. I spend a lot of time there, but I would love feedback, love getting notes, love hearing what’s working, what’s not, all those things. So yeah, anytime I’m always open. Christian Klepp 56:30 All right, fantastic. Once again, Clay, thanks for your time. Take care, stay safe and talk to you soon. Clay Ostrom 56:36 Thanks so much. Talk to you soon. Christian Klepp 56:37 All right. Bye for now.
In today's Cloud Wars Agent and Copilot Minute, I unpack why Microsoft Copilot will no longer be available on WhatsApp starting January 15 and what users should do next.

Highlights
00:03 — Starting January 15, Microsoft Copilot will no longer be available to users via WhatsApp. The feature has been offered since 2024, but because of changes to WhatsApp platform policies, which include the removal of all LLM chatbots, Copilot users will need to access the assistant through other channels.
00:30 — Because the version of Copilot used on WhatsApp is unauthenticated, chat history cannot be transferred. Users will instead need to export their conversations manually using WhatsApp's export tools.
01:00 — Users who have grown accustomed to Copilot will now need to access it through Microsoft-controlled environments, where Microsoft can offer better functionality, stronger security, and a wider range of use cases than a third-party platform allows.
Visit Cloud Wars for more.
This is Vibe Coding 001. Have you ever wanted to build your own software or apps that can just kinda do your work for you inside of the LLM you use, but don't know where to start? Start here. We're giving it all away and making it as simple as possible, while also hopefully challenging how you think about work. Join us.

Beginner's Guide: How to visualize data with AI in ChatGPT, Gemini and Claude -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Combining Multiple Features in Large Language Models
Visualizing Data in ChatGPT, Gemini, and Claude
Creating Custom GPTs, Gems, and Projects
Uploading Files for Automated Data Dashboards
Comparing ChatGPT Canvas, Gemini Canvas, and Claude Artifacts
Using Agentic Capabilities for Problem Solving
Visualizing Meeting Transcripts and Unstructured Data
One-Shot Mini App Creation with AI

Timestamps:
00:00 "Unlocking Superhuman LLM Capabilities"
04:12 Custom AI Model and Testing
07:18 "Multi-Mode Control for LLMs"
12:33 "Intro to Vibe Coding"
13:19 "Streamlined AI for Simplification"
19:59 Podcast Analytics Simplified
21:27 "ChatGPT vs. Google Gemini"
26:55 "Handling Diverse Data Efficiently"
28:50 "AI for Actionable Task Automation"
33:12 "Personalized Dashboard for Meetings"
36:21 Personalized Automated Workflow Solution
40:00 "AI Data Visualization Guide"
40:38 "Everyday AI Wrap-Up"

Keywords: ChatGPT, Gemini, Claude, data visualization with AI, visualize data using AI, Large Language Models, LLM features, combining LLM modes, custom instructions, GPTs, Gems, Anthropic projects, canvas mode, interactive dashboards, agentic models, code rendering, meeting transcripts visualization, SOP visualization, document analysis, unstructured data, structured insights, generative AI workflows, personalized dashboards, automated reporting, chain of thought reasoning, one-shot visualizations, data-driven decision-making, non-technical business leaders, micro apps, AI-powered interfaces, action items extraction, iterative improvement, multimodal AI, Opus 4.5, Five One Thinking, Gemini 3 Pro, artifacts, demos over memos, bespoke software, digital transformation, automated analytics

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
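To make the "upload a file, get a dashboard" idea concrete, here is a minimal sketch of the kind of mini app an LLM might one-shot from an uploaded CSV. The file name and column names (date, channel, downloads) are hypothetical placeholders, not anything taken from the episode.

```python
# Minimal sketch: the sort of data-dashboard script an LLM might generate
# from an uploaded CSV. File name and columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("podcast_analytics.csv", parse_dates=["date"])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Panel 1: downloads over time
df.groupby("date")["downloads"].sum().plot(ax=ax1, title="Downloads over time")

# Panel 2: downloads by channel
df.groupby("channel")["downloads"].sum().sort_values().plot.barh(
    ax=ax2, title="Downloads by channel"
)

fig.tight_layout()
fig.savefig("dashboard.png")  # or plt.show() in an interactive session
```

Whether ChatGPT, Gemini, or Claude renders this inline (canvas or artifacts) or hands you the script to run locally varies by tool, which is part of what the episode compares.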
This week on the California Sports Lawyer Podcast, host Jeremy Evans unpacks the fast-moving "shadow library" controversy now reshaping generative AI. After a major court setback for OpenAI, the spotlight is on the allegedly unlicensed books and internet materials used to train large language models (LLM)—and why "chain of title" for data is becoming the next legal battleground. We dig into how a pay-to-license future could trigger a full-blown licensing gold rush, turning clean, cleared datasets into the most valuable asset in AI. Jeremy explains what this means for platform costs, access, and the next wave of dealmaking—especially for entertainment, media, and sports libraries packed with talent NIL and archival content. Finally, we look ahead: possible policy and market solutions (from opt-out training exemptions to rights societies, dataset marketplaces, and even blockchain-verified licensing), and why lawmakers and developers must balance enforcement with public access and innovation. If you care about who gets paid when AI "uses" human creativity—and who gets to build the future—this episode is for you. (Season 7, Episode 47). Copyright 2025. California Sports Lawyer. All Rights Reserved. (www.CSLlegal.com). Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The world changed last week—Opus 4.5 is the best coding model Dan has ever used. It can keep coding and coding autonomously without tripping over itself—and it marks a completely new horizon for the craft of programming. The dream is here: you can write English, and make software.

We had Paul Ford on AI & I to talk about it. Ford is the co-founder of Aboard and also a prolific writer. He authored one of Dan's favorite pieces of technology writing, What Is Code?—so he's the perfect person to unpack this with Dan.

We talk about the wonder—and genuine unease—that comes with using tools this powerful. We also get into what people who love technology should care about as the ground under us shifts faster than we can imagine.

If you found this episode interesting, please like, subscribe, comment, and share!

Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper
Head to ai.studio/build to create your first app
Ready to build a site that looks hand-coded—without hiring a developer? Launch your site for free at Framer.com, and use code DAN to get your first month of Pro on the house!

Timestamps:
00:00:00 - Start
00:01:57 - Introduction
00:03:28 - How Claude Opus 4.5 made the future feel abruptly close
00:08:12 - The design principles that make Claude Code a powerful coding tool
00:10:57 - How Ford uses Claude Code to build real software
00:20:12 - Why collapsing job titles and roles can feel overwhelming
00:22:56 - Ford's take on using LLMs to write
00:24:09 - A metaphor for weathering existential moments of change
00:25:45 - What GLP-1s taught Ford about how people adapt to big shifts
00:49:36 - Why you should care what your LLM was trained on
00:52:15 - Ford prompts Claude Code to forecast the future of the consulting industry
00:59:18 - Recognize when an LLM is reflecting your assumptions back to you
01:12:39 - How large enterprises might adopt AI

Links to resources mentioned in the episode:
Paul Ford: Paul Ford
Ford's company Aboard: https://aboard.com/
The piece Ford wrote for Bloomberg in 2015: What Is Code?
Alan Kay's concept of a personal computer for children: https://en.wikipedia.org/wiki/Dynabook
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss the heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet's approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling. The complete show notes for this episode can be found at https://twimlai.com/go/757.
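As a rough illustration of the scheduling idea behind heterogeneous inference, here is a minimal sketch of a dispatcher that routes requests across hardware tiers based on token volume and latency targets. The tier names, thresholds, and request fields are invented for illustration and are not Gimlet's actual system.

```python
# Minimal sketch of hardware-aware routing for heterogeneous inference.
# Tier names, thresholds, and request fields are hypothetical.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    prompt_tokens: int
    max_new_tokens: int
    latency_slo_ms: int   # end-to-end latency target
    interactive: bool     # user-facing chat vs. background agent step

def pick_tier(req: InferenceRequest) -> str:
    """Send latency-sensitive work to top-end GPUs and bulk agent
    token churn to cheaper accelerators or CPUs."""
    total_tokens = req.prompt_tokens + req.max_new_tokens
    if req.interactive and req.latency_slo_ms < 1000:
        return "h100"        # tight SLO: high-end GPU
    if total_tokens > 8000 or not req.interactive:
        return "older_gpu"   # bulk/background: cheaper accelerators
    return "cpu"             # small, latency-tolerant requests

print(pick_tier(InferenceRequest(512, 256, 800, True)))       # -> h100
print(pick_tier(InferenceRequest(6000, 4000, 30000, False)))  # -> older_gpu
```

A real scheduler would also account for numerical precision, kernel availability per device, and network placement, which are the trade-offs the episode digs into.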
It was a pleasure to welcome Jessica Stauth, CIO for Systematic Equities at Fidelity Investments, to the Alpha Exchange. Our discussion explores how quant investing has evolved through cycles of market stress, technological change, and today's extraordinary concentration in the equity landscape. Reflecting on her start in markets in the aftermath of the 2007 Quant Quake and the onset of the global financial crisis, Jessica highlights the foundational lesson that markets contain far more uncertainty than models can fully capture — a theme as relevant today as investors confront narrow leadership and elevated fragility. She explains how early dislocations demonstrated the limits of traditional risk models and the dangers of crowding, especially when many quantitative strategies rely on similar signals or hedging techniques. Turning to the present, Jessica describes how her team builds equity strategies designed to function across regimes, emphasizing the need for diversified risk models, guardrails that prevent overfitting, and a clear understanding of how macro shocks can overwhelm bottom-up stock selection. She details the evolution of factor research, including the durability of broad categories such as value, momentum, and quality, while outlining how competition and data availability reshape their effectiveness over time. Lastly, she discusses the growing role of non-traditional data — from earnings-call text to machine-learning tools and LLM-driven sentiment extraction — while underscoring the importance of broad, consistent datasets that can be applied across global universes. Against the backdrop of the S&P 500's heavy top-weighting, Jessica details how diminished breadth affects opportunity sets, investor demand for alternative approaches, and the search for alpha outside the most crowded areas of the market. I hope you enjoy this episode of the Alpha Exchange, my conversation with Jessica Stauth.
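To make the LLM-driven sentiment-extraction idea concrete, here is a minimal sketch assuming the OpenAI Python client and a generic chat model; the model name, prompt wording, and scoring scale are placeholders rather than anything Fidelity actually uses.

```python
# Minimal sketch: score the tone of an earnings-call excerpt with an LLM.
# Model name and prompt are placeholders; this is not a production signal.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def sentiment_score(excerpt: str) -> float:
    """Return a score in [-1, 1], from very negative to very positive tone."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Rate the sentiment of this earnings-call excerpt "
                        "from -1 (very negative) to 1 (very positive). "
                        "Reply with only the number."},
            {"role": "user", "content": excerpt},
        ],
    )
    return float(response.choices[0].message.content.strip())

print(sentiment_score("Margins expanded again and demand remains strong."))
```

In practice, as Jessica notes, the harder part is applying a signal like this consistently across a broad global universe rather than producing a single score.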
Hey CX Nation,

In this week's episode of The CXChronicles Podcast #273, we welcomed Varun Sharma, Co-Founder & CEO of Enterpret, based in New York, NY. Enterpret provides custom AI to transform how you understand customers – from feedback chaos into clear, confident action. Harness superintelligence that feels like intuition, so your product and CX leaders never miss a signal.

The Enterpret platform supercharges teams via advanced LLMs to help brands like Notion, The Farmer's Dog, and Perplexity build better products and experiences.

In this episode, Varun and Adrian chat through the Four CX Pillars: Team, Tools, Process & Feedback, and share some of the ideas that his team thinks through on a daily basis to build world-class customer experiences.

**Episode #273 Highlight Reel:**
1. On a mission to connect product leaders with their customers
2. Pioneering customer intelligence with AI
3. Understand your customers' wants & needs
4. Creating actionable reporting to lift your CX & EX
5. VOC support to help grow your business

Click here to learn more about Varun Sharma
Click here to learn more about Enterpret

Huge thanks to Varun for coming on The CXChronicles Podcast and featuring his work and efforts in pushing the customer experience & contact center space into the future.

For all of our Apple & Spotify podcast listener friends, make sure you are following CXC & please leave a 5-star review so we can find new members of the "CX Nation". You know what would be even better? Go tell your friends or teammates about CXC's custom content, strategic partner solutions (HubSpot, Intercom, & Freshworks) & On-Demand services, and invite them to join the CX Nation, a community of 15K+ customer-focused business leaders!

Want to see how your customer experience compares to the world's top-performing customer-focused companies? Check out the CXC Healthzone, an intelligence platform that shares benchmarks & insights for how companies across the world are tackling The Four CX Pillars: Team, Tools, Process & Feedback, and how they are building an AI-powered foundation for the future.

Thanks to all of you for being a part of the "CX Nation" and helping customer-focused business leaders across the world make happiness a habit!

Reach Out To CXC Today!
Support the show
Contact CXChronicles Today
Tweet us @cxchronicles
Check out our Instagram @cxchronicles
Click here to check out the CXC website
Email us at info@cxchronicles.com
Remember To Make Happiness A Habit!!
Episode Highlights

[00:00:48] What Makes Software Maintainable
Don explains why unnecessary complexity is the biggest barrier to maintainability, drawing on themes from A Philosophy of Software Design.

[00:03:14] The Cost of Clever Abstractions
A real story from a Node.js API shows how an unused abstraction layer around MongoDB made everything harder without delivering value.

[00:04:00] Shaping Teams and Developer Tools
Don describes the structure of the Search Craft engineering team and how the product grew out of recurring pain points in client projects.

[00:06:36] Reducing Complexity Through SDK and Infra Design
Why Search Craft intentionally limits configuration to keep setup fast and predictable.

[00:08:33] Lessons From Consulting
Robby and Don compare consulting and product work, including how each environment shapes developers differently.

[00:15:34] Inherited Software and Abandoned Dependencies
Don shares the problems that crop up when community packages fall behind—especially in ecosystems like React Native.

[00:18:00] Evaluating Third-Party Libraries
Signals Don looks for before adopting a dependency: adoption, update cadence, issue activity, and whether the library is "done."

[00:19:40] Designing Code That Remains Understandable
Why clear project structure and idiomatic naming matter more than cleverness.

[00:20:29] RFCs as a Cultural Anchor
How Don's team uses RFCs to align on significant changes and avoid decision churn.

[00:23:00] Documentation That Adds Context
Documentation should explain why, not echo code. Don walks through how his team approaches this.

[00:24:11] Type Systems and Maintainability
How Don's journey from PHP and JavaScript to TypeScript and Rust changed his approach to structure and communication.

[00:27:05] Testing With Types
Stable type contracts make tests cleaner and less ambiguous.

[00:27:45] Building Trust in AI Systems
Don discusses repeatability, hallucinations, and why tools like MCP matter for grounding LLM behavior.

[00:29:28] AI in Developer Tools
Search Craft's MCP server lets developers talk to the platform conversationally instead of hunting through docs.

[00:33:21] Improving Legacy Systems Slowly
The Strangler pattern as a practical way to replace old systems one endpoint at a time (see the sketch after these show notes).

[00:34:11] Deep Work and Reducing Reactive Noise
Don encourages developers to carve out time for uninterrupted thinking rather than bouncing between notifications.

[00:36:09] Measuring Progress
Build times, test speeds, and coverage provide signals teams can use to track actual improvement.

[00:38:24] Changing Opinions Over a Career
Why Don eventually embraced TypeScript after originally writing it off.

[00:39:15] Industry Trends and Repeating Cycles
SPAs, server rendering, and the familiar pendulum swing in web architecture.

[00:41:26] Experimentation and Team Autonomy
How POCs and side projects surface organically within Don's team.

[00:44:42] Growing Skills Through Intentional Goals
Setting learning targets in 1:1s to support long-term developer growth.

[00:47:19] Where to Find Don
LinkedIn, Blue Sky, and his site: donmckinnon.dev.

Resources Mentioned
A Philosophy of Software Design by John Ousterhout
John Ousterhout's Maintainable.fm Interview (Episode 131)
Search Craft
Elastic
Algolia
WordPress Plugin Directory
Request for Comments (RFC)
Strangler Fig Pattern
C2 Wiki
Model Context Protocol (MCP)
Glam AI
Aubrey/Maturin Series by Patrick O'Brian
Master and Commander
donmckinnon.dev

Thanks to Our Sponsor!
Turn hours of debugging into just minutes!
AppSignal is a performance monitoring and error-tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and other languages and frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Use the code maintainable to get a 10% discount for your first year. Check them out!

Subscribe to Maintainable on:
Apple Podcasts
Spotify
Or search "Maintainable" wherever you stream your podcasts.

Keep up to date with the Maintainable Podcast by joining the newsletter.
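The Strangler Fig pattern mentioned in the episode highlights above boils down to a routing layer that shifts traffic endpoint by endpoint from the legacy system to the new one. Here is a minimal sketch under assumed names; the endpoints and hosts are placeholders, not anything from Don's systems.

```python
# Minimal sketch of the strangler fig pattern: a thin routing layer sends
# migrated endpoints to the new service and everything else to the legacy app.
MIGRATED_PREFIXES = {"/search", "/collections"}    # hypothetical migrated routes

LEGACY_BASE = "https://legacy.example.internal"    # placeholder hosts
NEW_BASE = "https://new.example.internal"

def route(path: str) -> str:
    """Return the upstream base URL that should serve this request path."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return NEW_BASE
    return LEGACY_BASE

assert route("/search?q=boots") == NEW_BASE
assert route("/admin/users") == LEGACY_BASE
```

As more routes are rewritten, they move into the migrated set until the legacy system serves nothing and can be retired.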
This conversation explores how Refine Labs drives measurable B2B SaaS growth through demand strategy, paid media optimization, creative execution, and AI-era marketing fundamentals. Listeners gain a clear understanding of how modern demand generation, positioning, and strategic rigor create predictable pipeline and revenue outcomes.

Topics Covered
Refine Labs' evolution, revenue milestones, and agency repositioning
Strategic focus on digital strategy, paid media, creative production, and demand generation
The Brand–Demand–Expand model for allocating budget and improving pipeline
Data-driven onboarding: audits across paid media, creative, ICP, website, attribution, content, and journey friction
Demand creation vs. demand capture and how to rebalance budgets
Founder-led marketing vs. diversified marketing engines
Retention, upsell, and cross-sell as key growth levers in enterprise
AI's real impact on marketing, strategy, measurement, and competitive advantage
Website clarity, LLM discoverability, and digital PR for AI-era visibility
Authenticity, trust, and human content in an AI-saturated world

Questions This Video Helps Answer
How can B2B SaaS companies increase qualified pipeline by 50% or more within 8 months?
What should a modern B2B demand generation agency actually deliver?
How do you balance budget across brand, demand creation, and demand capture?
How should companies approach self-reported attribution at scale?
What's the role of founder-led marketing now that organic reach is declining?
When should companies prioritize retention and expansion over new acquisition?
How is AI affecting content, measurement, and go-to-market strategy?
How do B2B brands optimize for ChatGPT, Perplexity, Grok, and Google AI overviews?

Jobs, Roles, and Responsibilities Mentioned
CEO
COO / Chief Operating Officer (previous role)
Founder
Account Management
Customer Success
Paid Media Manager
Marketing Leader
CFO
Sales
Post-sale functions
Digital marketing teams
Creative and content teams

Key Takeaways
Refine Labs' strongest levers remain digital strategy, paid advertising, and creative execution—these consistently deliver measurable pipeline gains.
Companies often overweight short-term demand capture; rebalancing budgets toward brand and demand creation improves long-term efficiency.
A rigorous onboarding process—auditing ICP, messaging, media accounts, website friction, attribution, content, and revenue history—is essential for a custom growth strategy.
Founder-led marketing is an asset but not a sustainable long-term strategy; brands need diversified engines not tied to one person.
Enterprise companies can drive massive growth from retention, upsell, and cross-sell, often surpassing net-new acquisition impact.
Authenticity, human insight, and trust are becoming more valuable as AI makes generic content ubiquitous.
LLM visibility depends on consistent positioning, clear messaging, and strong third-party brand mentions—not hacks or shortcuts.
AI should be used only where it improves outcomes: better insights, faster execution, smarter experiments, and strategic amplification.

Frameworks and Concepts Mentioned
Brand–Demand–Expand model
Demand creation vs. demand capture
Closed-won / closed-lost analysis
Ideal Customer Profile (ICP) validation
Self-reported attribution
Share of search
Digital buying journey audit
Retention / upsell / cross-sell levers
AI-powered benchmarking and structured experimentation
Edge of the Web - An SEO Podcast for Today's Digital Marketer
This week's EDGE of the Web dives into the seismic shifts rocking the digital marketing landscape, from Adobe's blockbuster SEMrush acquisition to Google's long-awaited rollout of Search Console Annotations (well, not so seismic). The future of AI in SEO and the rising tide of data privacy laws set the tone for a fast-changing industry. Erin Sparks, Crystal Carter, and Jacob Mann break down what Adobe's $1.9 billion move means for marketers and explore how WIX is upping its AI tools, accessibility, and agentic web readiness. The crew also spotlights ChatGPT's rumored ad features, Google's continued experimental AI Mode ads, and the EU's Digital Omnibus, raising stakes for marketers everywhere. A side story on the dangers of AI SEO spam highlights how black hat tactics are evolving, bringing back old-school manipulation in a new LLM-driven context. The panel ponders whether brands and platforms are ready for the next frontier of search and content integrity. News from the EDGE: [00:07:55] Official: Adobe is acquiring Semrush for $1.9 billion [00:16:49] GSC has Annotations! Hoorah! Finally! [00:21:16] EU's Digital Omnibus and cookie consent - what you need to know AI / SEO News Segment: [00:23:26] EDGE of the Web Sponsor: Site Strategics [00:25:11] Gemini 3 refused to believe it was 2025, and hilarity ensued [00:32:17] ChatGPT Ads potential leaked [00:39:55] EDGE of the Web Sponsor: WAIKAI [00:41:29] Google Ads begin surfacing inside AI Mode as tests expand [00:45:52] AI Shopping Research [00:52:04] AI Poisoning: Black Hat SEO Is Back Thanks to our sponsors! Site Strategics: https://edgeofthewebradio.com/site Inlinks WAIKAY https://edgeofthewebradio.com/waikay Follow Us: X: @ErinSparks X: @CrystalontheWeb X: @TheMann00 X: @EDGEWebRadio
In this episode of How I Met Your Data: The Prompt, Anjali and Karen dig into one of the fastest-emerging patterns in development today: vibe coding - the practice of describing what you want and letting an LLM generate the code. It's new. It's evolving. And right now, it's causing as much frustration as it is excitement. Karen breaks down what vibe coding actually looks like in practice: developers prompting AI to produce entire features or files, navigating the wildly different “personalities” of today's LLMs, and learning how to guide systems that might generate brilliant structure… or unintended chaos. Together, they talk through the real friction points - overly eager model behavior, unexpected file changes, incomplete suggestions, and the creeping loss of hands-on debugging skills that used to tie engineers closer to their code. But underneath the surface is a bigger enterprise theme. The rise of vibe coding speaks to deeper issues: end users who still aren't getting what they need, bottlenecks in IT and data teams, and the rapid expansion of citizen development as people search for faster paths to outcomes. Anjali and Karen unpack the operational and governance implications, from maintainability and handoff challenges to compliance blind spots and the need for standards that can coexist with AI-assisted creation. They also dive into where AI does shine today - those repetitive, operational workflows that quietly save teams hours - and why focusing on value, ownership, and workflow design matters far more than chasing the next flashy LLM demo. This episode is an honest, grounded look at how AI-assisted development is taking shape: what's promising, what's painful, and what it means for teams trying to build responsibly, collaboratively, and at scale.
Thanks to Prosus Group for collaborating on the Agents in Production Virtual Conference 2025.

Abstract //
The discussion centers on highly technical yet practical themes, such as the use of advanced post-training techniques like Direct Preference Optimization (DPO) and Parameter-Efficient Fine-Tuning (PEFT) to ensure LLMs maintain stability while specializing for e-commerce domains. We compare the implementation challenges of Computer-Using Agents in automating legacy enterprise systems versus the stability issues faced by conversational agents when inputs become unpredictable in production. We will also analyze the role of cloud infrastructure in supporting the continuous, iterative training loops required by Reinforcement Learning-based agents for e-commerce.

Bio //
Paul van der Boor (Panel Host) // Paul van der Boor is a Senior Director of Data Science at Prosus and a member of its internal AI group.

Arushi Jain (Panelist) // Arushi is a Senior Applied Scientist at Microsoft, working on LLM post-training for Computer-Using Agents (CUA) through Reinforcement Learning. She previously completed Microsoft's competitive 2-year AI Rotational Program (MAIDAP), building and shipping AI-powered features across four product teams. She holds a Master's in Machine Learning from the University of Michigan, Ann Arbor, and a Dual Degree in Economics from IIT Kanpur. At Michigan, she led the NLG efforts for the Alexa Prize Team, securing a $250K research grant to develop a personalized, active-listening socialbot. Her research spans collaborations with Rutgers School of Information, Virginia Tech's Economics Department, and UCLA's Center for Digital Behavior. Beyond her technical work, Arushi is a passionate advocate for gender equity in AI. She leads the Women in Data Science (WiDS) Cambridge community, scaling participation in her ML workshops from 25 women in 2020 to 100+ in 2025—empowering women and non-binary technologists through education and mentorship.

Swati Bhatia // Passionate about building and investing in cutting-edge technology to drive positive impact. Currently shaping the future of AI/ML at Google Cloud. 10+ years of global experience across the U.S., EMEA, and India in product, strategy & venture capital (Google, Uber, BCG, Morpheus Ventures).

Audi Liu // I'm passionate about making AI more useful and safe. Why? Because AI will be ubiquitous in every workflow, powering our lives just like electricity revolutionized our society - it's pivotal we get it right. At Inworld AI, we believe all future software will be powered by voice. As a Sr Product Manager at Inworld, I'm focused on building a real-time voice API that empowers developers to create engaging, human-like experiences. Inworld offers state-of-the-art voice AI at a radically accessible price - No. 1 on Hugging Face and Artificial Analysis, instant voice cloning, rich multilingual support, real-time streaming, and emotion plus non-verbal control, all for just $5 per million characters.

Isabella Piratininga // Experienced Product Leader with over 10 years in the tech industry, shaping impactful solutions across micro-mobility, e-commerce, and leading organizations in the new economy, such as OLX, iFood, and now Nubank. I began my journey as a Product Owner during the early days of modern product management, contributing to pivotal moments like scaling startups, mergers of major tech companies, and driving innovation in digital banking. My passion lies in solving complex challenges through user-centered product strategies.
I believe in creating products that serve as a bridge between user needs and business goals, fostering value and driving growth. At Nubank, I focus on redefining financial experiences and empowering users with accessible and innovative solutions.
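As a rough illustration of the DPO-plus-PEFT recipe named in the abstract, here is a minimal sketch using Hugging Face's trl and peft libraries. The model, dataset, and hyperparameters are placeholders, the exact DPOTrainer signature varies across trl versions (older releases take tokenizer= instead of processing_class=), and this is not the panelists' actual training setup.

```python
# Minimal DPO + LoRA (PEFT) sketch with Hugging Face trl/peft.
# Model, dataset, and hyperparameters are placeholders, not a production recipe.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"          # small stand-in model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference data with prompt / chosen / rejected columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1%]")

# LoRA keeps most base weights frozen, which helps preserve general ability
# while the adapter specializes (hypothetically, for e-commerce dialogs).
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="dpo-lora-sketch",
    beta=0.1,                        # strength of the preference constraint
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # tokenizer= in older trl releases
    peft_config=peft_config,
)
trainer.train()
```

The combination is attractive for the stability concern the panel raises: DPO constrains the model toward preferred responses without a separate reward model, and the LoRA adapter limits how far the base weights can drift.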
Last month, Senate Democrats warned that "Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade." Ironically, they used ChatGPT to come to that conclusion. DAIR Research Associate Sophie Song joins us to unpack the issues when self-professed worker advocates use chatbots for "research."

Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. They're a research associate at DAIR, where they're working with Alex on building the Luddite Lab Resource Hub.

References:
Senate report: AI and Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade
Senator Sanders' AI Report Ignores the Data on AI and Inequality

Also referenced:
MAIHT3k Episode 25: An LLM Says LLMs Can Do Your Job
Humlum paper: Large Language Models, Small Labor Market Effects
Emily's blog post: Scholarship should be open, inclusive and slow

Fresh AI Hell:
Tech companies compelling vibe coding
arXiv is overwhelmed by LLM slop
'Godfather of AI' says tech giants can't profit from their astronomical investments unless human labor is replaced
If you want to satiate AI's hunger for power, Google suggests going to space
AI pioneers claim human-level general intelligence is already here
Gen AI campaign against ranked choice voting
Chaser: Workplace AI Implementation Bingo

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' is out now! Get your copy now.
Subscribe to our newsletter via Buttondown.

Follow us!
Emily: Bluesky emilymbender.bsky.social | Mastodon dair-community.social/@EmilyMBender
Alex: Bluesky alexhanna.bsky.social | Mastodon dair-community.social/@alex | Twitter @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.
ChatGPT was publicly launched on November 30, 2022. It immediately caused a major stir, and then an outright avalanche of other language models and generative AI tools. So where are we on the scale of the AI revolution three years later? Is it still a revolution, or already a bubble? We look at how AI is transforming selected industries - giving some of them wings and clipping the wings of others. Finally, we check how our behavior has changed under the influence of omnipresent AI tools and AI-generated content. Can we already tell an AI-generated video from human-made content? Have we learned that AI absolutely must be supervised and that its outputs always need to be verified?

EPISODE GUESTS:
- Dota Szymborska, doctor of ethics at UMCS, a graduate in ethics and sociology from the University of Warsaw. She specializes in the ethics of generative AI;
- Michał "rysiek" Woźniak, information security specialist and co-founder of the Warsaw Hackerspace. He was Chief Information Security Officer at the Organized Crime and Corruption Reporting Project, a consortium of investigative centers, media outlets, and journalists operating in Eastern Europe, the Caucasus, Central Asia, and Central America;
- Przemysław Dębiak, known as Psyho, a coding champion who beat artificial intelligence in a programming competition;
- dr hab. Tomasz Kajdanowicz of Wrocław University of Science and Technology (Politechnika Wrocławska), where he heads the Department of Artificial Intelligence.

CHAPTERS:
07:38 The Coca-Cola ad
14:42 Where is this revolution?
17:41 Computer science and programming
31:32 The end of googling
35:55 The dead internet
42:59 Credibility
54:06 What's next?
01:09:58 The future

SOURCES:
- How many LLMs have been created: https://www.ibm.com/think/topics/large-language-models-list
- The 2024 Coca-Cola ad: https://www.youtube.com/watch?v=IQWUKWM2JrQ
- The 2025 Coca-Cola ad: https://www.youtube.com/watch?v=Yy6fByUmPuE
- On the trucks in the Coca-Cola ad: https://www.businessinsider.com/coca-cola-ai-holiday-ad-glitches-highlight-ai-shortcomings-2025-11?IR=T
- McKinsey report on AI adoption worldwide: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- PIE report on AI adoption in Poland: https://pie.net.pl/z-ai-korzysta-mniej-niz-co-szosta-firma-w-polsce-a-pozostale-nie-planuja-jej-wdrazac/
- On 30 percent of Microsoft's code being written by AI: https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-as-30percent-of-microsoft-code-is-written-by-ai.html
- Interview with Psyho: https://wyborcza.biz/biznes/7,177150,32285683,co-to-znaczy-ze-ai-bedzie-lepsza-od-ludzi-kalkulator-jest.html
- Report on the dead internet: https://www.imperva.com/blog/2025-imperva-bad-bot-report-how-ai-is-supercharging-the-bot-threat/
- On how Exdrog approached a public tender: https://wyborcza.biz/biznes/7,177150,32358477,firmy-wpadaja-w-tarapaty-przez-ai-polska-spolka-stracila-kontrakt.html
- OpenAI on hallucinations: https://openai.com/index/why-language-models-hallucinate/
- Archival episodes worth a listen: 130 and 134
Are you focusing your AI visibility and PR efforts on Reddit, Wikipedia, and media sites like The New York Times? You could be wasting your time and money.In this episode of the Grow and Convert Marketing Show, we dive into the data from our new study on Large Language Model (LLM) citation patterns. We reveal why the advice you see on LinkedIn and in broad industry reports—telling you to chase large, general-purpose sites—is completely misleading for most businesses, especially B2B and niche companies.What You'll Learn:86% of citations are for industry-specific sites vs. 14% for "general-purpose sites": See actual data from our client base that shows 86% of LLM citations for niche topics come from industry-specific publications, while Reddit, Wikipedia, and YouTube account for only 14%.Differences in how to approach AI visibility for your brand vs. household-name brands: Discover why B2C household names (like Tesla and Peloton) do get cited on general sites, but your niche B2B software company won't. The problem with measuring random prompts: Understand how broad studies that use 5,000 randomly selected keywords are statistically biased toward consumer queries, making their findings irrelevant to your specific business goals.A New Tactical Framework: Learn the exact, actionable steps for a Citation Outreach strategy that works: how to choose the right topics, identify the actual cited domains using tools or manual checks, and target your PR efforts where they will actually drive AI visibility.Stop doing general PR and start showing up in the AI answers that matter to your bottom line.Links & Resources:Read the full article for more detail on the study: https://www.growandconvert.com/research/llms-source-industry-sites-more-than-generic-sites/Check out our AI Visibility Tool: https://traqer.ai Catch up on our overall GEO Framework: https://www.growandconvert.com/ai/prioritized-geo/ Don't forget to like, subscribe, and comment to support the Grow and Convert Marketing Show!
In this episode we discuss how even the most reliable systems can fail. The main topic is the recent Cloudflare outage, which paralyzed part of the internet. We break down the technical details of the incident, tied to static memory allocation, and analyze why an approach borrowed from NASA did not work this time. We also could not skip the curious story of the International Association for Cryptologic Research, which lost access to its own encrypted data after losing the keys. We ran a comparison of new coding tools, in particular Google's Antigravity and the updated Cursor 2.0, and discussed their approaches to managing AI agents. To wrap up, we looked at an interesting trend of foreign developers passing themselves off as Ukrainians, which speaks to the strength of our national IT brand. We also touched on the harsh working conditions and ethical dilemmas faced by specialists in North Korea. 00:26 – the Cloudflare outage 06:42 – losing access to one's own encrypted data 10:20 – running a task through AI multiple times 17:49 – satellite communication technologies 20:45 – fighting bots and trolls 26:28 – a review of new coding tools 29:56 – using AI apps in everyday life 36:30 – the latest news in AI and technology 40:00 – quality of content and apps 46:07 – group chats in ChatGPT 49:38 – control and security in North Korea
AWS Principal Solutions Architect Wallace Printz explains how agents are reshaping SaaS business models, pricing strategies, and technical architectures.Topics Include:Wallace Printz discusses agentic workloads transforming SaaS with largest AWS customersNew interaction models include generative UI, voice agents, and proactive workAgents extending SaaS products to interact with external systems and businessesVirtual teammates enabling cross-department collaboration and upskilling non-expert users effectivelyMonetization strategies evolving as predictable costs become variable with agentsThree patterns: dedicated agents, shared agents, and multi-tenant personalized agentsMulti-tenant agents enable hyper-personalized experiences using individual tenant context enrichmentAgent-centric business strategy requires real assessment beyond AI hype cycleAgent orchestration complexity grows with multiple specialized agents interacting togetherTenant isolation requires JWT tokens and AWS Bedrock Agent Core identityCost-per-tenant management needs LLM throttling, tiering, and unified control planeMulti-tenancy creates sticky personalized experiences; AWS white paper releasing soonParticipants:Wallace Printz - Principal Solution Architect, Amazon Web ServicesSee how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
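One pairing from the topic list above, JWT-based tenant isolation plus cost-per-tenant throttling, lends itself to a quick sketch. The snippet below is a generic, hypothetical illustration in Python: it uses the PyJWT library, an invented tenant_id claim, and an in-memory budget table, and it is not drawn from AWS Bedrock AgentCore or the specific architecture Wallace describes.

```python
import jwt  # PyJWT

# Hypothetical per-tenant LLM token budgets (a real system would persist and tier these).
TENANT_BUDGETS = {"tenant-a": 1_000_000, "tenant-b": 250_000}
usage: dict[str, int] = {}

def tenant_from_token(token: str, secret: str) -> str:
    """Pull the tenant id out of a signed JWT so every agent call is scoped to one tenant."""
    claims = jwt.decode(token, secret, algorithms=["HS256"])
    return claims["tenant_id"]  # assumes the issuer includes a tenant_id claim

def charge_tokens(tenant_id: str, tokens: int) -> None:
    """Track LLM spend per tenant and throttle once the budget is exceeded."""
    spent = usage.get(tenant_id, 0) + tokens
    if spent > TENANT_BUDGETS.get(tenant_id, 0):
        raise RuntimeError(f"{tenant_id} exceeded its LLM budget; route to a lower tier or block")
    usage[tenant_id] = spent
```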
Ross welcomes Gaetano DiNardi, growth advisor and B2B SaaS strategist, to unpack the brutal questions CEOs are suddenly firing at their SEO teams in the AI era: traffic is flat or down, attribution is messy, LLM dashboards look scary, and everyone wants to know if SEO is still “working.” They dig into why traffic is no longer the primary health metric, how AI Overviews and LLMs are breaking the old “rank → CTR → traffic → leads → revenue” equation, and why CMOs should obsess less over sessions and more over demo requests, pipeline, deal size, and overall organic channel performance. Gaetano shares how he's reframing SEO for mid-market B2B SaaS: shifting from decaying top-of-funnel content to bottom-of-funnel “money pages,” optimizing for LLM citations and “money prompts,” and using concepts like “crash-the-party” comparison pages to win high-intent demand across both Google and AI platforms. They also break down why LLM prompt tracking is the new rank tracking (with all the same pitfalls), how AI platform traffic actually converts compared to Google, what “dark funnel SEO” looks like in practice, and why Gaetano thinks classic TOFU SEO might effectively go to zero while brand, data studies, and opinionated thought leadership pick up the slack. Show Notes0:08 – Gaetano returns & why last year's SEO playbook no longer works1:17 – CEOs' top question: “Traffic is down… is SEO still working?”2:17 – Why B2B SaaS is feeling the decline hardest3:16 – The old rank → CTR → traffic → revenue model is broken5:46 – Top-of-funnel content is collapsing (and why TOFU is “dead-ish”)6:13 – The shift to bottom-funnel: BOFU keywords, low volume, high intent7:11 – Flat traffic + rising revenue: the new SEO success story8:01 – What LLMs cite: comparisons, “best X” lists, crash-the-party pages8:38 – Crash-the-party SEO explained in 15 seconds13:21 – LLM prompt dashboards are misleading your executives15:53 – Branded vs. non-branded LLM prompts: why you MUST separate them18:06 – Build a tight list of “money prompts,” not 500 random ones20:06 – Why AI platforms show higher conversion rates than Google22:10 – Dark-funnel behavior: LLM → Google → homepage → convert 24:08 – Google leads have FAR larger deal sizes than AI leads27:35 – SEO attribution is officially “cooked” in the AI era29:27 – Traffic flat, pipeline up 3×: examples from real clients31:25 – Use top-10 stability to prove AI Overviews stole your clicks34:47 – Search is becoming BOFU-first — what that means35:34 – LLM citations as the new leading indicator for SEO39:00 – Exploding topics = more LLM citations due to low consensus41:22 – Engagement validates information gain (why POV content wins)47:20 – What happens to TOFU SEO? 
(Short answer: it shrinks massively) 48:49 – Why TOFU may survive only for authority, clustering, and links Gaetano on LinkedIn: https://www.linkedin.com/in/officialg/ Tough Questions CEOs Are Asking About AI SEO: https://docs.google.com/document/d/1T_IQXUCXBQtFcLekVFwqv23y8ViuRFJKBIRtn4wcImQ/edit?usp=sharing Does traffic from AI platforms convert “better” than Google search traffic?: https://marketingadvice.substack.com/p/869517ce-4629-429f-addd-518d9342cdff CMOs are frantically asking “what's our Reddit strategy?”: https://marketingadvice.substack.com/p/cmos-are-frantically-asking-whats LLM prompt tracking is the new rank tracking: https://www.linkedin.com/posts/officialg_llm-prompt-monitoring-is-quickly-becoming-activity-7377069814458097664-KLyi/?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAjOemoBsmGgXOOcFpcwGVG1B7IlDCvf8io Subscribe today for weekly tips: https://bit.ly/3dBM61f Listen on iTunes: https://podcasts.apple.com/us/podcast/content-and-conversation-seo-tips-from-siege-media/id1289467174 Listen on Spotify: https://open.spotify.com/show/1kiaFGXO5UcT2qXVRuXjsM Listen on Google: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9jT3NjUkdLeA Follow Siege on Twitter: http://twitter.com/siegemedia Follow Ross on Twitter: http://twitter.com/rosshudgens Directed by Cara Brown: https://twitter.com/cararbrown Email Ross: ross@siegemedia.com #seo | #contentmarketing
Send us a textCheck us out at: https://www.cisspcybertraining.com/Get access to 360 FREE CISSP Questions: https://www.cisspcybertraining.com/offers/dzHKVcDB/checkoutGet access to my FREE CISSP Self-Study Essentials Videos: https://www.cisspcybertraining.com/offers/KzBKKouvIf audits feel like paperwork purgatory, this conversation will change your mind. We unpack Domain 6 with a clear, practical path: how to scope a security audit that executives will fund, teams will follow, and regulators will respect. Along the way, we touch on a fresh angle in the news—an open source LLM tool sniffing out Python zero days—and connect it to what development shops can do right now to lower risk without slowing delivery.We start by demystifying what a security audit is and how it differs from an assessment. Then we get into the decisions that matter: choosing one framework to anchor your work (NIST CSF, ISO 27001, or PCI DSS where applicable), keeping policies lean enough to use under pressure, and building a scope that targets high-value processes like account provisioning or privileged access. You'll hear why internal audits build muscle, external audits unlock credibility, and third-party audits protect your supply chain when a vendor stalls or gets breached. We talk straight about cost, bias, and the communication gaps that derail progress—and how to fix them.From there we focus on outcomes. You'll learn to prioritize incident response and third-party risk for the biggest return, write right-to-audit clauses that actually help, and map findings to business impact so leaders say yes to headcount and tooling. We share ways to pair tougher controls with enablement—like deploying a password manager before lengthening passphrases—so adoption sticks. Expect practical reminders on interview planning, evidence collection, and keeping stakeholders aligned without burning goodwill. It's a playbook for turning findings into funding and audits into forward motion.If this helped you reframe how you approach Domain 6 and security audits, subscribe, leave a review, and share it with a teammate who's staring down their next audit. Your support helps more people find CISSP Cyber Training.Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don't miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!
Fobi AI CEO Rob Anson outlines how the company maintained operational progress during the past year, streamlining its structure, modernizing internal systems with AI, reducing costs, and preparing for a more commercially focused relaunch. Instead of losing momentum, the company concentrated on building a stronger, more scalable foundation for its next phase of growth.Fobi has transitioned from a collection of standalone technologies into a professional-services-driven platform built around AI-powered reporting, mobile wallet strategy, and Web3-ready applications.REINVENTION THROUGH COST DISCIPLINE AND AI INFRASTRUCTUREA major theme is how Fobi used this period to reset its cost base and refine its revenue model. The company narrowed its operational footprint, strengthened its data-reporting capabilities, and moved toward higher-margin service engagements supported by a proprietary LLM environment that accelerates internal analyses and client delivery.A significant step involved optimizing the audit process to improve efficiency and predictability. Audit expenses had previously exceeded $1 million over two years, and the transition to a new auditor is expected to create a more streamlined path forward.“We've put ourselves in a far more efficient position than we've ever been in — and at a fraction of the cost.” — Rob Anson, CEOEARLY SIGNS OF COMMERCIAL MOMENTUMWhile limited in what it can disclose, Anson indicates that the business continued progressing throughout 2025. Several dynamics appear to be strengthening Fobi's market position:• Growing demand from enterprises seeking mobile wallet integration and data modernization • Increased use of Fobi's AI-driven reporting automation • Rising joint-venture discussions combining licensing, IP, and professional services • A more scalable cost structure supported by a leaner operating modelPREPARING FOR A STRATEGIC MARKET RE-ENTRYWith major internal milestones nearing completion, Fobi has a full brand refresh ready — including updated products, corporate materials, and new client use cases — to deploy once the company is able to communicate more broadly. Many shareholders have not yet seen how extensively the business has transformed.OUTLOOK: A LEANER, MORE FOCUSED ENTERPRISE SOLUTION PROVIDERFor investors evaluating turnaround narratives, the interview highlights decisive cost management, proprietary AI infrastructure, a pivot toward professional services, and continued commercial activity. As the company completes its remaining steps and begins its next phase, Fobi is positioning itself with a stronger foundation for long-term enterprise growth.
In episode 342 of Irgendwas mit Recht, Marc talks with Christian Wensing, a fully qualified lawyer who today holds a senior position in the legal department of Douglas. Christian recounts how a soft spot for languages, a father in the legal profession, and early, still vague career ideas led him to study law in Münster, why business-related fields such as antitrust and corporate law fascinated him, and how stints at the Bundeskartellamt, in large law firms, and in Henkel's legal department shaped his desire for international work. They discuss why he chose the rather unusual path of doing his LLM in London only after the second state exam, and what role language and networks play. How does a retailer operating across Europe with billions in revenue prepare for a stock exchange listing? What does comprehensive due diligence mean in concrete terms from the company's perspective, and how do you coordinate data rooms, question catalogs, and external advisors internally? What distinguishes the perspective of a law firm attorney from that of an in-house lawyer who also supports the integration of acquisitions and the long-term development of the group? How does it feel when capital markets law suddenly becomes part of the daily toolkit, even though you originally come from the private equity and M&A world? And what should law students and trainee lawyers consider when weighing up a large law firm, an LLM, a doctorate, and an in-house career? You will find answers to these and many more questions in this episode of IMR. Enjoy!
Talk Python To Me - Python conversations for passionate developers
In this episode, I'm talking with Vincent Warmerdam about treating LLMs as just another API in your Python app, with clear boundaries, small focused endpoints, and good monitoring. We'll dig into patterns for wrapping these calls, caching and inspecting responses, and deciding where an LLM API actually earns its keep in your architecture. Episode sponsors Seer: AI Debugging, Code TALKPYTHON NordStellar Talk Python Courses Links from the show Vincent on X: @fishnets88 Vincent on Mastodon: @koaning LLM Building Blocks for Python Course: training.talkpython.fm Top Talk Python Episodes of 2024: talkpython.fm LLM Usage - Datasette: llm.datasette.io DiskCache - Disk Backed Cache (Documentation): grantjenks.com smartfunc - Turn docstrings into LLM-functions: github.com Ollama: ollama.com LM Studio - Local AI: lmstudio.ai marimo - A Next-Generation Python Notebook: marimo.io Pydantic: pydantic.dev Instructor - Complex Schemas & Validation (Python): python.useinstructor.com Diving into PydanticAI with marimo: youtube.com Cline - AI Coding Agent: cline.bot OpenRouter - The Unified Interface For LLMs: openrouter.ai Leafcloud: leaf.cloud OpenAI looks for its "Google Chrome" moment with new Atlas web browser: arstechnica.com Watch this episode on YouTube: youtube.com Episode #528 deep-dive: talkpython.fm/528 Episode transcripts: talkpython.fm Theme Song: Developer Rap
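As a rough companion to the patterns mentioned above (treating the LLM as just another API, with caching and inspectable responses), here is a minimal sketch of one way to put a single boundary and a disk cache around provider calls. The complete() function is a placeholder rather than a specific library's API, and the cache location and result type are invented for illustration.

```python
import hashlib
import json
from dataclasses import dataclass
from pathlib import Path

CACHE_DIR = Path(".llm_cache")  # hypothetical location for cached responses
CACHE_DIR.mkdir(exist_ok=True)

@dataclass
class LLMResult:
    prompt: str
    text: str
    cached: bool

def complete(prompt: str) -> str:
    """Stand-in for the real provider call (OpenAI, Ollama, etc.)."""
    raise NotImplementedError("plug in your actual LLM client here")

def ask(prompt: str) -> LLMResult:
    """Single boundary for all LLM calls: check the cache, call, and record the response."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    path = CACHE_DIR / f"{key}.json"
    if path.exists():  # replay a previous response for inspection or tests
        data = json.loads(path.read_text())
        return LLMResult(prompt=prompt, text=data["text"], cached=True)
    text = complete(prompt)
    path.write_text(json.dumps({"prompt": prompt, "text": text}))
    return LLMResult(prompt=prompt, text=text, cached=False)
```

Keeping every call behind one ask()-style function means logging, retries, and cost tracking all have a single place to live, which is one practical reading of the "clear boundaries" idea in the episode.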
As the Thanksgiving weekend passes, let's look at some technology things that we're thankful for and excited about. SHOW: 980SHOW TRANSCRIPT: The Cloudcast #980 TranscriptSHOW VIDEO: https://youtube.com/@TheCloudcastNET CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotwCHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"SHOW SPONSORS:[Mailtrap] Try Mailtrap for free[Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.[TestKube] TestKube is a Kubernetes-native testing platform, orchestrating all your test tools, environments, and pipelines into scalable workflows empowering Continuous Testing. Check it out at TestKube.io/cloudcastSHOW NOTESTHINGS TO BE THANKFUL FOR:Thankful for healthHappy Birthday to ChatGPTThe feature on an iPhone that remembers the code sent in an SMS messageRole playing, brain storming, scenario planning on ChatGPT/GeminiTools like Vercel that let non-programmers use AI to build apps (the future of PaaS)Tools like NotebookLM making complex stuff like LLM training and RAG patterns easier to digestTrue competition in the cloud market as differentiation emergesThe beginnings of alternatives to NVIDIA in the AI HW Acceleration marketThe evolution of wearables like fitness trackers, or AirPods (better sound quality, noise cancelling, language translation, etc.) FEEDBACK?Email: show at the cloudcast dot netTwitter/X: @cloudcastpodBlueSky: @cloudcastpod.bsky.socialInstagram: @cloudcastpodTikTok: @cloudcastpod
We talk about some stuff and apologize for it all along the way. Remembering v. 'membering. Dictionaries force their words to compete for dominance. Suicide and litigation collide in dispiriting fashion. Device hoarding and planned obsolescence add new rungs to old ladders that turn out to've been cages all along. Korean ferry crashes, loneliness. A well-known LLM settles the mystery of Elliott Smith's death once and for all. The penetrating gaze of antizionism and the epic meme reactions to it. Amy Schumer and the SlapChop guy bottomfeed in unison, while Andrew Tate somehow bottomfeeds (from the top) even harder. Bolsonaro got curious about his ankle monitor. USA wants its death machines kindly returned from their intended victims and without any guff, thankyouverymuch. Giving thanks. Valhalla Patel. UK lectures USA about appreciating a good scandal and breaks news about the US healthcare being a titch suboptimal at the moment. Recorded on Saturday, November 29th, 2025 around 11.00 AM Korea Standard TimeCommiserate on Discord: discord.gg/aDf4Yv9PrYNever Forget: standwithdanielhale.orgGeneral RecommendationsJosh's Recommendations: 1) Lençóis Maranhenses National Park 2) Hard EightTim's Recommendation: In holiday clothing out of the great darkness by Clarice JensenFurther Reading, Viewing, ListeningShow notes + Full list of links, sources, etc"I'm caught up in a storm that I don't need no shelter from." TDP"You are, as they say, Finished. You cannot get drunk and you cannot get sober; you cannot get high and you cannot get straight. You are behind bars; you are in a cage and can see only bars in every direction." DFWMore From Timothy Robert BuechnerPodcast: Q&T ARE / violentpeople.co Tweets: @ROHDUTCHLocationless Locationsheatdeathpod.comEvery show-related link is corralled and available here.Twitter: @heatdeathpodPlease send all Letters of Derision, Indifference, Inquiry, Mild Elation, et cetera to: heatdeathodtheuniversepodcast@gmailSend us a textSupport the showSupport: patreon / buzzsprout
AI Assisted Coding: Building Reliable Software with Unreliable AI Tools In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering. From Skeptic to Pioneer: Lada's AI Coding Journey "I got a new skill for free!" Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through WindSurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI. Understanding Vibecoding vs. AI-Assisted Development "AI assisted coding requires judgment, and it's never been as important to exercise judgment as now." Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices. The Answer Injection Anti-Pattern When Working With AI "You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect." One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability. Answer injection is when we—sometimes, unknowingly—ask a leading question that also injects a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need", because the LLM might have an answer you would not have thought of. Never Trust a Single LLM: Multi-Agent Collaboration "Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements." Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work.
She created specialized agents (architect, developer, tester) and even built systems using AppleScript and Tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review. Code Quality Matters MORE with AI "This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!" Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively. Managing Complexity: The Open Question "If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it." One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever. Context is Everything: Highway vs. Parking Lot "You need to be attuned to the environment. You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt." Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, don't worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down. The Era of Discovery: We've Only Just Begun "We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of." Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. 
Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real-time. Resources For Further Study Video of Lada's talk: https://www.youtube.com/watch?v=_LSK2bVf0Lc&t=8654s Lada's Patterns and Anti-patterns website: https://lexler.github.io/augmented-coding-patterns/ Lada's Substack https://lexler.substack.com/ AI Assisted Coding episode with Dawid Dahl AI Assisted Coding episode with Llewellyn Falco Claude Flow - orchestration platform About Lada Kesseler Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering. Currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools. You can link with Lada Kesseler on LinkedIn.
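For readers who want to try the "never trust a single LLM" cross-checking pattern from this episode, here is a minimal, hypothetical sketch of one model drafting and a second pass reviewing before a human decides. The call_llm() function is a stand-in for whatever client or agent framework you use; none of this is taken from Lada's actual setup.

```python
def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real model call; 'role' is just a system-style hint."""
    raise NotImplementedError("wire this to your own LLM client")

def generate_and_review(task: str) -> dict:
    """Have a 'developer' instance produce code and a 'reviewer' instance critique it."""
    draft = call_llm(
        role="developer",
        prompt=f"Implement the following and return only code:\n{task}",
    )
    review = call_llm(
        role="reviewer",
        prompt=(
            "Review this code for bugs, unclear naming, and missing tests. "
            f"List concrete issues only.\n\n{draft}"
        ),
    )
    # The human still reads both outputs and decides what, if anything, gets committed.
    return {"draft": draft, "review": review}
```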
This week, we discuss the Cloudflare outage, their current business strategy, and paying OSS maintainers. Plus, thoughts on loading the dishwasher and managing your home. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/byFyPbe9HC0?si=DpOApdTKs9oh-bWl) 548 (https://www.youtube.com/live/byFyPbe9HC0?si=DpOApdTKs9oh-bWl) Runner-up Titles Mystery Knob Vegans are cursed vegetarians Skilled enough Defrag the dishwasher Design Intentions QR codes everywhere I don't know where we draw the line, but I know where we start SDT IoT CMBD, Home Edition. SDT Open Source Money Maker Lead with Nagware Stocks go up, stocks go down Safari's my naked browser Coté wanted to add periods to all of these but did not. Rundown FFmpeg to Google: Fund Us or Stop Sending Bugs (https://thenewstack.io/ffmpeg-to-google-fund-us-or-stop-sending-bugs/) Cloudflare blames massive internet outage on 'latent bug' (https://techcrunch.com/2025/11/18/cloudflare-blames-massive-internet-outage-on-latent-bug/) Cloudflare outage on November 18, 2025 (https://blog.cloudflare.com/18-november-2025-outage/) Replicate is joining Cloudflare (https://blog.cloudflare.com/replicate-joins-cloudflare/) Relevant to your Interests The Walt Disney Company Announces Multi-Year Distribution Agreement With YouTube TV (https://thewaltdisneycompany.com/the-walt-disney-company-announces-multi-year-distribution-agreement-with-youtube-tv/) Anthropic claims of Claude AI-automated cyberattacks met with doubt (https://www.bleepingcomputer.com/news/security/anthropic-claims-of-claude-ai-automated-cyberattacks-met-with-doubt/) Disrupting the first reported AI-orchestrated cyber espionage campaign (https://www.anthropic.com/news/disrupting-AI-espionage) Compact, human-readable serialization of JSON data for LLM prompts (https://github.com/toon-format/toon) Outage Tracker | Updog By Datadog (https://updog.ai/) Jeff Bezos Creates A.I. 
Start-Up Where He Will Be Co-Chief Executive (https://www.nytimes.com/2025/11/17/technology/bezos-project-prometheus.html) PowerPoint is your therapist, Gamma is your coach | Andreessen Horowitz (https://a16z.com/powerpoint-is-your-therapist-gamma-is-your-coach/) Red Hat Introduces Project Hummingbird for “Zero-CVE” Strategies (https://www.redhat.com/en/about/press-releases/red-hat-introduces-project-hummingbird-zero-cve-strategies) A new era of intelligence with Gemini 3 (https://blog.google/products/gemini/gemini-3/) The platform that needs a platform (https://cote.io/2025/11/19/the-platform-that-needs-a.html) The AI Coding Startup Favored by Tech CEOs Is Now Worth $29.3 Billion (https://www.wsj.com/tech/ai/the-ai-coding-startup-favored-by-tech-ceos-is-now-worth-29-3-billion-14c72c02) The Smartest Fliers Use This App to Survive America's Travel Hell (https://www.wsj.com/tech/personal-tech/flighty-app-flight-cancellations-delays-900a8aad) Oracle's Market Cap Decline: Analyzing the Impact on Finance (https://platformonomics.com/2025/11/platformonomics-tgif-108-november-14-2025/) OpenAI's Fidji Simo Plans to Make ChatGPT Way More Useful—and Have You Pay For It (https://www.wired.com/story/fidji-simo-is-openais-other-ceo-and-she-swears-shell-make-chatgpt-profitable/) Europe's cookie nightmare is crumbling (https://www.theverge.com/news/823788/europe-cookie-prompt-browser-changes-proposal) Nonsense AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives (https://gizmodo.com/ai-powered-teddy-bear-caught-talking-about-sexual-fetishes-and-instructing-kids-how-to-find-knives-2000687140) Whipped Cream Worth $80K Stolen in Ontario (https://www.yahoo.com/news/articles/whipped-cream-worth-80k-stolen-135930616.html) Conferences DevOpsDayLA at SCALE23x (https://www.socallinuxexpo.org/scale/23x), March 6th, Pasadena, CA Use code: DEVOP for 50% off. CFP open until Dec. 1st. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: The Beast in Me (https://www.netflix.com/title/81427733) Matt: The Prestige (https://www.imdb.com/title/tt0482571/) Coté: Fantastic 4 (https://en.wikipedia.org/wiki/The_Fantastic_Four:_First_Steps) with that Boba Fett guy (https://en.wikipedia.org/wiki/The_Fantastic_Four:_First_Steps), “Winter's Mourning,” from Uncaged Goddesses (https://www.dmsguild.com/en/product/382873/uncaged-goddesses). Photo Credits Header (https://unsplash.com/photos/gray-and-white-spoon-and-fork-lot-closeup-photo-vZZfVCUOKfw)
At Qdrant Conference, builders, researchers, and industry practitioners shared how vector search, retrieval infrastructure, and LLM-driven workflows are evolving across developer tooling, AI platforms, analytics teams, and modern search research.Andrey Vasnetsov (Qdrant) explained how Qdrant was born from the need to combine database-style querying with vector similarity search—something he first built during the COVID lockdowns. He highlighted how vector search has shifted from an ML specialty to a standard developer tool and why hosting an in-person conference matters for gathering honest, real-time feedback from the growing community.Slava Dubrov (HubSpot) described how his team uses Qdrant to power AI Signals, a platform for embeddings, similarity search, and contextual recommendations that support HubSpot's AI agents. He shared practical use cases like look-alike company search, reflected on evaluating agentic frameworks, and offered career advice for engineers moving toward technical leadership.Marina Ariamnova (SumUp) presented her internally built LLM analytics assistant that turns natural-language questions into SQL, executes queries, and returns clean summaries—cutting request times from days to minutes. She discussed balancing analytics and engineering work, learning through real projects, and how LLM tools help analysts scale routine workflows without replacing human expertise.Evgeniya (Jenny) Sukhodolskaya (Qdrant) discussed the multi-disciplinary nature of DevRel and her focus on retrieval research. She shared her work on sparse neural retrieval, relevance feedback, and hybrid search models that blend lexical precision with semantic understanding—contributing methods like Mini-COIL and shaping Qdrant's search quality roadmap through end-to-end experimentation and community education.SpeakersAndrey VasnetsovCo-founder & CTO of Qdrant, leading the engineering and platform vision behind a developer-focused vector database and vector-native infrastructure.Connect: https://www.linkedin.com/in/andrey-vasnetsov-75268897/Slava DubrovTechnical Lead at HubSpot working on AI Signals—embedding models, similarity search, and context systems for AI agents.Connect: https://www.linkedin.com/in/slavadubrov/Marina AriamnovaData Lead at SumUp, managing analytics and financial data workflows while prototyping LLM tools that automate routine analysis.Connect: https://www.linkedin.com/in/marina-ariamnova/Evgeniya (Jenny) SukhodolskayaDeveloper Relations Engineer at Qdrant specializing in retrieval research, sparse neural methods, and educational ML content.Connect: https://www.linkedin.com/in/evgeniya-sukhodolskaya/
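The analytics assistant Marina describes follows a recognizable loop: turn a question into SQL, run it, and summarize the rows. Below is a minimal, hypothetical sketch of that loop against a local SQLite database; generate_sql() and summarize() stand in for real LLM calls, and the database path is invented rather than taken from SumUp's setup.

```python
import sqlite3

def generate_sql(question: str, schema: str) -> str:
    """Stand-in for an LLM call that writes a SQL query against the given schema."""
    raise NotImplementedError("call your LLM here with the question and schema")

def summarize(question: str, rows: list) -> str:
    """Stand-in for an LLM call that turns raw rows into a short, clean answer."""
    raise NotImplementedError("call your LLM here with the query results")

def answer(question: str, db_path: str = "analytics.db") -> str:
    conn = sqlite3.connect(db_path)
    try:
        # Give the model the real schema so it writes queries against actual columns.
        schema = "\n".join(
            row[0] for row in conn.execute(
                "SELECT sql FROM sqlite_master WHERE type = 'table'"
            ) if row[0]
        )
        query = generate_sql(question, schema)
        rows = conn.execute(query).fetchall()  # review generated SQL before trusting it
        return summarize(question, rows)
    finally:
        conn.close()
```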
DNA Behavior CEO Hugh Massie joins The Next 100 Days podcast to talk about how behavioural data can transform financial and sales performance.Summary of PodcastIntroductions and banterGraham and Kevin welcome Hugh Massie, the founder of DNA Behavior, to the podcast. They engage in some lighthearted banter about Hugh's Australian background and his business name spelling.Overview of DNA Behavior's approachHugh explains the core of DNA Behavior's psychometric system. He measures 8 primary traits and 24 sub-traits to build a comprehensive profile of an individual's natural tendencies. He emphasises the importance of understanding people's instinctive behaviours under pressure, not just their learned social behaviours.Applying DNA to teams and organisationsThe group discusses how DNA Behavior's insights can be used to understand the dynamics of teams and organizations, drawing parallels to the characters in the TV show "The Office". Hugh explains how the system helps identify strengths, weaknesses, and potential conflicts within a group.Concerns about privacy and trust Graham expresses concerns about whether people will be willing to surrender the "deeper" aspects of their personality. Hugh addresses this by explaining the importance of psychological safety and how the system is designed to build trust and transparency.Quantifying the business impactHugh outlines how his approach can directly impact a company's financial performance and valuation by helping leaders make better decisions around innovation, cost management, and other key business drivers. He cites data showing a 23% average revenue growth for customers.Potential applications beyond the workplaceThe group explores how his profiling capabilities could be applied to other domains, such as investor screening and selection. Hugh explains how the digital scanning technology allows for profiling large groups of people at scale.Wrap-up and reflectionsGraham and Kevin reflect on the insightful and engaging nature of the conversation, noting the unique capabilities of DNA Behavior's approach. They express interest in further exploring the system and its potential applications. The Next 100 Days Podcast Co-HostsGraham ArrowsmithGraham founded Finely Fettled ten years ago to help business owners and marketers market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner for MeclabsAI, where he introduces AI Agents that you can talk to, that increase engagement, dwell time, leads and conversions. Now, Graham is offering Answer Engine Optimisation that gets you ready to be found by LLM search.Kevin ApplebyKevin specialises in finance transformation and implementing business change. He's the COO of GrowCFO, which provides both community and CPD-accredited training designed to grow the next generation of finance leaders. You can find Kevin on LinkedIn and at
Connect with Will at: https://bsky.app/profile/wiverson.com https://www.linkedin.com/in/wiverson Mentioned during the series: Will's Youtube channel: https://changenode.com/ https://www.youtube.com/@ChangeNode Mentioned in this episode: killed by google: https://killedbygoogle.com build your own LLM: https://www.freecodecamp.org/news/code-an-llm-from-scratch-theory-to-rlhf/ The post 313 How AI is Disrupting Social Contracts first appeared on Agile Thoughts.
This segment from the Omni Talk Retail Fast Five podcast, sponsored by the A&M Consumer and Retail Group, Mirakl, Ocampo Capital, Infios, and Quorso, explores Target's bold move to enable shopping through ChatGPT just days before Black Friday. Anne sees it as a necessary first step into LLM-powered commerce and applauds Target for testing publicly. Chris calls it a "big nothing burger"... questioning why Target launched an incomplete beta feature during their busiest week, especially when their own site search isn't optimized yet. Who's right? #target #chatgpt #aicommerce #llmsearch #conversationalcommerce #retailtech #targetapp #innovation #retailAI #voiceshopping
This segment from the Omni Talk Retail Fast Five podcast, sponsored by the A&M Consumer and Retail Group, Mirakl, Ocampo Capital, Infios, and Quorso, examines Bath & Body Works' decision to launch on Amazon early next year. With $60-80M in unauthorized gray market sales already happening, the company will test a curated assortment to control its brand and capture revenue. Anne argues LLM search makes marketplace presence essential for discovery and conversion. Chris believes Bath & Body Works should wait, perfect their own site experience first, and avoid feeding Amazon's dominance. Who has the better long-term strategy? #bathandbodyworks #amazon #marketplace #graymarket #ecommerce #brandstrategy #retailstrategy #thirdpartysellers #beautyretail #amazonmarketplace
How can marketers ensure their brand remains visible in an AI-dominated digital landscape?This special Hard Corps Marketing Show takeover episode features an episode from the Connect To Market podcast, hosted by Casey Cheshire. In this conversation, Casey sits down with David Reske, Founder and CEO of Nowspeed, to explore how artificial intelligence is transforming the way companies achieve visibility and connect with customers. David emphasizes the urgency for marketers to adapt to AI technologies, especially large language models like ChatGPT, or risk becoming invisible in a rapidly evolving marketplace.He shares actionable strategies for creating an LLM briefing book to teach AI systems about your company, ensuring your brand appears in AI-driven search results. David also explains how to conduct AI-focused content audits, implement strategic content plans, and align SEO efforts with AI capabilities to stay ahead of the curve.In this episode, we cover:The rising influence of AI and LLMs in marketing visibilityHow to build an “LLM briefing book” to educate AI about your brandConducting audits and crafting content strategies for AI optimizationWhy marketers must act now to remain discoverable in AI-powered searches
AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality. Vibecoding vs. AI-Assisted Coding: Reading Code Matters "I read all the code that it outputs, so I need smaller steps of changes." Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult. The Moment of Shift: Staying in the Zone "It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time." Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design. Thinking in Commits: The Right Size for AI Work "I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt." Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast." The Tech Debt Question: Same Code, Same Debt "Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... 
I'm faster and can make more code, but I invest some of that savings back into cleaning things up." As the author of "Swimming in Tech Debt," Lou brings unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices regardless of how code is generated. When Vibecoding Creates Debt: AI Resistance as a Symptom "When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes." Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices including code review, even when AI makes generation fast. Can AI Fix Tech Debt? Yes, With Guidance "You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want." Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists. Reinvesting Productivity Gains in Quality "I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity." Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial. 
If AI makes you 30% faster at implementation, using 10% of that gain on refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder. The 100x Code Concern: Accountability Remains Human "Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it." When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions. Practical Workflow: Inline Prompting and Small Changes "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast." Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control. Resources and Community Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources. About Lou Franco Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups, as well as Trello, and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum. You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.
John and Bobby broke down how RAG actually works and why the real battle isn't the LLM, it's the chaos sitting in PDFs, Excel files, post-job reports, random folder structures, and handwritten scans. They showed how extraction, chunking, and embeddings fall apart when data is messy, and why clean structure, good metadata, and consistent organization matter way more than people want to admit. They also hit on how tools like MCP and text-to-SQL let AI pull from WellView, production systems, and databases in one place, instead of everyone living in 20 different apps. The takeaway was simple: AI gets powerful fast when the data is ready, but if the inputs are junk, you'll just get faster junk back.Click here to watch a video of this episode.Join the conversation shaping the future of energy.Collide is the community where oil & gas professionals connect, share insights, and solve real-world problems together. No noise. No fluff. Just the discussions that move our industry forward.Apply today at collide.ioClick here to view the episode transcript. https://twitter.com/collide_iohttps://www.tiktok.com/@collide.iohttps://www.facebook.com/collide.iohttps://www.instagram.com/collide.iohttps://www.youtube.com/@collide_iohttps://bsky.app/profile/digitalwildcatters.bsky.socialhttps://www.linkedin.com/company/collide-digital-wildcatters
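As a small illustration of the chunking and metadata points John and Bobby make, here is a minimal sketch of overlapped, fixed-size chunking that keeps provenance attached to every chunk. Real pipelines would add structure-aware splitting and an embedding step; the sizes and field names here are illustrative, not taken from their stack.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)

def chunk_document(text: str, source: str, size: int = 800, overlap: int = 100) -> list[Chunk]:
    """Split a cleaned document into overlapping chunks, tagging each with provenance."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for i, start in enumerate(range(0, max(len(text), 1), step)):
        piece = text[start:start + size].strip()
        if not piece:
            continue
        chunks.append(Chunk(
            text=piece,
            metadata={"source": source, "chunk_index": i, "char_start": start},
        ))
    return chunks

# Downstream, each chunk.text would be embedded and stored alongside chunk.metadata,
# so retrieved passages can be traced back to the original file and position.
```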
Matthew Bertram and Jon Gillham unpack how AI content, plagiarism risk, and Google's crackdowns reshape SEO, then lay out guardrails that protect rankings while building real LLM visibility. The focus stays on practical governance, provenance checks, entity health, and adding value beyond words.• Rebrand context and why AI integrity matters• Study showing AI overviews citing AI content• Risks of synthetic data and value of human signals• School and workplace guardrails for AI use• Google's stance on helpful content vs scaled abuse• Penalty patterns, core updates, and indexing lags• Plagiarism trends, fair use thresholds, and QA checks• LLM visibility strategy and entity consolidation• Editorial workflows to detect copy‑paste AI• Actionable playbook for responsible AI adoptionGuest Contact Information: Website: originality.aiLinkedIn: linkedin.com/in/jon-gillhamMore from EWR and Matthew:Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon PodcastFree SEO Consultation: www.ewrdigital.com/discovery-callWith over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve. Find more episodes here: youtube.com/@BestSEOPodcastbestseopodcast.combestseopodcast.buzzsprout.comFollow us on:Facebook: @bestseopodcastInstagram: @thebestseopodcastTiktok: @bestseopodcastLinkedIn: @bestseopodcastConnect With Matthew Bertram: Website: www.matthewbertram.comInstagram: @matt_bertram_liveLinkedIn: @mattbertramlivePowered by: ewrdigital.comSupport the show
Prolonged conversations with ChatGPT and other LLM chatbots have driven the rapid development of severe delusions and paranoia, and in some cases even death by suicide. In this episode, Dr. David Puder sits down with Columbia researchers Dr. Amandeep Jutla and Dr. Ragy Girgis to unpack five shocking real-world cases and explain why large language models are dangerously sycophantic, trained to agree, mirror, and amplify any idea instead of challenging it. By listening to this episode, you can earn 1.25 Psychiatry CME Credits. Link to blog Link to YouTube video