AI is moving from chat to action. In this episode of Big Ideas 2026, we unpack three shifts shaping what comes next for AI products. The change is not just smarter models, but software itself taking on a new form. You will hear from Marc Andrusko on the move from prompting to execution, Stephanie Zhang on building machine-legible systems, and Sarah Wang on agent layers that turn intent into outcomes. Together, these ideas tell a single story. Interfaces shift from chat to action, design shifts from human-first to agent-readable, and work shifts to agentic execution. AI stops being something you ask, and becomes something that does.

Resources:
Follow Marc Andrusko on X: https://x.com/mandrusko1
Follow Stephanie Zhang on X: https://x.com/steph_zhang
Follow Sarah Wang on X: https://x.com/sarahdingwang

Read all of our 2026 Big Ideas:
Part 1: https://a16z.com/newsletter/big-ideas-2026-part-1
Part 2: https://a16z.com/newsletter/big-ideas-2026-part-2/
Part 3: https://a16z.com/newsletter/big-ideas-2026-part-3/

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
We explore how to align sales, marketing, and operations so growth becomes predictable, not chaotic. Luis Baez shares practical frameworks to productize services, unify data, and raise conversion rates in a world where buyers consult AI before they call you.

• Breaking silos between sales, marketing, and ops
• Why unified data beats dueling spreadsheets
• Shifting websites to knowledge bases for the LLM era
• Productizing services into a signature method
• Pricing to outcomes and standardizing delivery
• Sprinting to validate offers before scaling
• Improving microconversions across the funnel
• Practical tech stack and revenue intelligence tools
• Managing AI anxiety and proving value with quick wins
• Human connection as a competitive advantage

Guest Contact Information:
Website: luisbaez.com
LinkedIn: linkedin.com/in/baezluis
YouTube: youtube.com/@unhustling

More from EWR and Matthew:
Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcasts
Free SEO Consultation: www.ewrdigital.com/discovery-call

With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve.
Find more episodes here:
youtube.com/@BestSEOPodcast
bestseopodcast.com
bestseopodcast.buzzsprout.com

Follow us on:
Facebook: @bestseopodcast
Instagram: @thebestseopodcast
TikTok: @bestseopodcast
LinkedIn: @bestseopodcast

Connect with Matthew Bertram:
Website: www.matthewbertram.com
Instagram: @matt_bertram_live
LinkedIn: @mattbertramlive

Powered by: ewrdigital.com
Support the show
There's a hypothesis that overreliance on LLMs in general, and vibe coding in particular, is making programmers dumber. On the other hand, this claim sounds a lot like Vim users clucking at IDE users. So where is the truth? Thanks to everyone who listens to us. We look forward to your comments.

Music from the episode:
- https://artists.landr.com/056870627229
- https://t.me/angry_programmer_screams

Full playlist of the course "Kubernetes для DotNet разработчиков": https://www.youtube.com/playlist?list=PLbxr_aGL4q3SrrmOzzdBBsdeQ0YVR3Fc7
Free open course "Rust для DotNet разработчиков": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z

Shownotes:
00:00:00 Introduction
00:06:15 What makes people dumber?
00:09:00 If the LLM didn't work out, the problem is you
00:15:35 Is it bad to generate tests with an LLM?
00:20:00 Terminal vibe coding
00:29:00 Searching for APIs via an LLM
00:34:30 The human designs, the LLM codes
00:42:40 The motivation catastrophe
00:46:15 The "gypsy hypnosis" effect
00:51:20 Do we get dumber from searching via an LLM?
01:00:00 The LLM hooks us

Links:
- https://www.youtube.com/watch?v=COovfRQ9hRM : Our future
- https://www.linkedin.com/posts/nityan_we-all-know-vibe-coding-has-technical-debt-activity-7339687364216193025-nY2E : Study on AI making us dumber
- https://codeua.com/ai-coding-tools-can-reduce-productivity-study-results/ : AI Coding Tools Can Reduce Productivity: Study Results

Video: https://youtube.com/live/HU7m31-NZmM
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Twitch: https://www.twitch.tv/dotnetmore
Discuss:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
- Twitter: https://twitter.com/dotnetmore
- Telegram channel: https://t.me/dotnetmore
Copyright: https://creativecommons.org/licenses/by-sa/4.0/
TL;DR: We train LLMs to accept LLM neural activations as inputs and answer arbitrary questions about them in natural language. These Activation Oracles generalize far beyond their training distribution, for example uncovering misalignment or secret knowledge introduced via fine-tuning. Activation Oracles can be improved simply by scaling training data quantity and diversity. What follows is a reproduction of our X thread on this paper and the Anthropic Alignment blog post.

Thread

New paper: We train Activation Oracles: LLMs that decode their own neural activations and answer questions about them in natural language. We find surprising generalization. For instance, our AOs uncover misaligned goals in fine-tuned models, without training to do so.

We aim to make a general-purpose LLM for explaining activations by:
1. Training on a diverse set of tasks
2. Evaluating on tasks very different from training
This extends prior work (LatentQA) that studied activation verbalization in narrow settings.

Our main evaluations are downstream auditing tasks. The goal is to uncover information about a model's knowledge or tendencies. Applying Activation Oracles is easy. Choose the activation (or set of activations) you want to interpret and ask any question you like! We [...]

---
Outline:
(00:46) Thread
(04:49) Blog post
(05:27) Introduction
(07:29) Method
(10:15) Activation Oracles generalize to downstream auditing tasks
(13:47) How does Activation Oracle training scale?
(15:01) How do Activation Oracles relate to mechanistic approaches to interpretability?
(19:31) Conclusion

The original text contained 3 footnotes which were omitted from this narration.

---
First published: December 18th, 2025
Source: https://www.lesswrong.com/posts/rwoEz3bA9ekxkabc7/activation-oracles-training-and-evaluating-llms-as-general
---
Narrated by TYPE III AUDIO.
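The "choose an activation, ask a question" workflow described above lends itself to a simple query interface. The sketch below is purely illustrative: the `OracleQuery` and `build_oracle_prompt` names and the `<ACT>` placeholder are inventions for this example, not the paper's API. It only shows the shape of a query; in the actual method the activation vector is injected into the oracle LLM at the placeholder token's position.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OracleQuery:
    """One question about one captured activation (illustrative structure)."""
    activation: List[float]  # hidden-state vector captured from the target model
    layer: int               # layer the activation was taken from
    question: str            # free-form natural-language question about it

def build_oracle_prompt(q: OracleQuery, placeholder: str = "<ACT>") -> str:
    """Assemble the text side of an oracle query. The activation itself
    would be spliced in at the placeholder position, not serialized as text."""
    return (f"Activation from layer {q.layer}: {placeholder}\n"
            f"Question: {q.question}\nAnswer:")

query = OracleQuery(activation=[0.12, -0.51, 0.33], layer=20,
                    question="Does this activation reflect a hidden goal?")
prompt = build_oracle_prompt(query)
```

The point of the structure is that the question is arbitrary natural language, so the same trained oracle can be reused for auditing tasks it was never trained on.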
Host Susan Diaz is joined by Shona Boyd, a product manager at Mitratech, a SaaS company, and a proudly AI-curious early adopter, for a grounded conversation about what AI literacy actually means now. They talk about representation, critical thinking, everyday meet-you-where-you-are workflows, shadow AI, enterprise guardrails, and why leaders must stop chasing AI features that don't solve real user problems.

Episode summary
Susan introduces Shona Boyd, AI-curious early adopter and SaaS product manager, whose mission is to make AI feel less scary and more accessible. Shona shares how her approachable AI philosophy started in product work: she used AI to build audience insights and feedback loops when job seekers weren't willing to do interviews, and quickly realized two things: (1) AI wasn't going away, and (2) there weren't many visible women or people who looked like her leading the conversation. So she raised her hand as an approachable reference point others could learn from.

From there, the conversation expands into what AI literacy has evolved into. It's no longer just "which tool should I use?" or "how do I write prompts?" Shona argues that today literacy is about critical thinking, learning how to talk to an LLM like a conversation, and choosing workflows that benefit from AI rather than chasing hype. They also get practical: Shona gives everyday examples (Medicare PDFs, credit card points, life admin) to show how AI can meet you where you are, without requiring you to build agents or become super technical. Finally, Susan and Shona go deep on organizational adoption: why handing out logins without policies is risky, how shadow AI shows up (hello, rogue meeting note-takers), why leadership sponsorship matters, and what companies should stop doing immediately: AI for the sake of AI.

Key takeaways
- Representation changes adoption. When people don't see anyone who looks like them using AI confidently, they're less likely to lean in. Shona chose to be a visible, approachable point of reference for others.
- AI literacy has shifted. It's no longer mainly about which model or prompt frameworks. It's about learning the language (LLM, GPT, etc.), staying curious, and building critical media muscles to evaluate what's true, what's AI, and what needs sources.
- Workflows aren't just corporate. A workflow is simply tasks plus the path to get them done. Shona's examples show AI can help with day-to-day life admin (PDFs, policies, benefits, points programs), which makes AI feel more approachable fast.
- The first output is not the final. "I can spot AI content" usually means people are publishing raw first drafts. High-quality AI use looks like: draft → critique → refine → human judgement.
- What good organizational training is NOT: handing out tool logins with no policy, no guidance on acceptable use, and no understanding of enterprise vs personal security.
- Shadow AI is already here. People are adding unapproved AI note-takers to meetings and uploading sensitive info into personal accounts. Blanket bans don't work; they push experimentation underground.
- Adoption needs product thinking. Shona suggests leaders treat internal AI like a product launch: run simple feedback loops (NPS-style checks), analyse usage patterns to find sticking points, and apply AI where it solves real pain, not where competitors are hyping features.
- Leadership ownership matters for equity. When AI is run department-by-department, you create "haves and have-nots" (tools, training, access). Top-down support plus safe guardrails reduces inequity and increases psychological safety.
- Spicy take: stop doing AI for the sake of AI. If you can't explain how an AI feature improves real user life in a non-marketing way, it probably shouldn't ship.

Episode highlights
[00:01] The 30-day podcast-to-book sprint and why leaders are still showing up in December.
[01:14] Shona's origin story: using AI to build audiences and feedback loops in a job board context.
[02:17] The visibility gap: not many women / people who looked like her in early AI spaces.
[05:55] What AI literacy means now: critical thinking + conversation with an LLM + workflow selection.
[07:16] "Workflows" made real: Medicare PDFs and credit card points examples.
[10:13] Three essentials: foundational language, curiosity, and critical media literacy.
[12:23] What training is NOT: handing out logins with no policy or guardrails.
[15:49] Handling fear and resistance with empathy and a human-in-the-loop mindset.
[23:27] Product lens on adoption: NPS feedback loops + usage analytics to find real needs.
[28:14] Shadow AI: rogue note-takers, personal accounts, and why bans backfire.
[31:17] Policies at multiple levels, including interviewing and candidate use of AI.
[36:49] "Stop AI for the sake of AI" and the race to ship meaningless features.
[39:13] Where to find Shona: LinkedIn (under Lashona).

Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. Connect with Lashona Boyd on LinkedIn
What are the advantages of spec-driven development compared to vibe coding with an LLM? Are these recent trends a move toward declarative programming? This week on the show, Marc Brooker, VP and Distinguished Engineer at AWS, joins us to discuss specification-driven development and Kiro.
How do you build an AI product that's powerful, reliable, and grounded in real user needs? In this podcast hosted by Qventus Product Director Mark Bailes, Google Product Lead Alankar Agnihotri discusses what it really takes to build impactful AI and LLM-driven products. He shares the hidden complexities behind model behavior, the shift from deterministic to probabilistic design, and what product managers must do to deliver AI experiences that scale. You'll hear firsthand insights from someone shaping Gemini in Android Automotive and building the future of in-car intelligence.
Renegade Thinkers Unite: #2 Podcast for CMOs & B2B Marketers
GenAI now sits inside content workflows, SDR outreach, and competitive intelligence. Marketing teams are seeing real wins and real growing pains, and the open question is where to focus next. To answer that, Drew brings together Kelly Hopping, John McKinney (Cornerstone Licensing), and Brian Hankin (Altium Packaging) to share the AI plays they are running right now and how they're leading the charge.

In this episode:
- Kelly shows how AI weaves through content, SDR workflows, web chat, product work, and SEO, plus how OKRs and certifications lift AI fluency across the team.
- John uses AI agents for competitor tracking, outbound support, and coding, and treats AI as a sparring partner for strategy before it reaches the C-suite.
- Brian runs an AI campaign engine that builds multi-touch programs in minutes and tracks lifts in engagement, qualified leads, proposals, and wins.

Plus:
- How AEO connects to SEO and what needs to shift for LLM-driven discovery
- How leaders model AI use with internal knowledge bases and cross-functional pilots
- How to structure AI readiness
- Where CMOs can start

Tune in if you want AI use cases you can put to work now and a clearer view of where to point your team next.
For full show notes and transcripts, visit https://renegademarketing.com/podcasts/
To learn more about CMO Huddles, visit https://cmohuddles.com/
The three guys are back this week with special guest Nathan Leamer (CEO, Fixed Gear Strategies) to discuss artificial intelligence (AI). As CEO and Ruling Elder (PCA), Nathan navigates the intersection of technology, public policy, and the kingdom of God in this insightful and engaging conversation. Of note, Barry learns what LLM means and also …
When Jacob Dean looks back on his career, the through-line isn't a straight path; it's a steady climb built on curiosity, discipline, and the courage to rethink what success should look like. Raised in Northeast Ohio in a family of educators, Jacob grew up with a traditional definition of stability: find a good job, work hard, and build a dependable life. Entrepreneurship wasn't part of the conversation. Yet over time, Jacob discovered that he was drawn to something bigger: the intersection of law, business, and strategy.

After majoring in finance, Jacob chose law school at a time when the economy was uncertain and job prospects were slim. But that step opened the door to a series of defining opportunities: working in the tax department at Procter & Gamble, clerking for the U.S. Tax Court, completing an LLM at Georgetown, and gaining meaningful experience in both law firm and in-house roles. Each chapter gave him new layers of expertise: tax structure, corporate operations, nonprofit compliance, and business management.

Despite the steady progression, something deeper was brewing. Jacob realized that what energized him most wasn't just the practice of law; it was understanding how businesses run, how decisions get made, and how structure shapes success. He enjoyed the legal work, but he felt most at home thinking like an operator and strategist.

Then came a turning point: turning 40. Instead of seeing it as a crisis, Jacob treated it as a moment of reflection, a chance to pause long enough to ask, What do I want the next decade to look like? The answer was clear: it was time to build something of his own.

With support from family and colleagues, Jacob made the leap into entrepreneurship and launched his own firm. Unlike many attorneys who see the business side as a distraction, Jacob embraces it. He believes law firms should operate like true businesses (strategic, structured, and growth-minded) rather than relying on outdated norms or reactive hiring.
His combined experience in tax and corporate law gives him a unique ability to help founders avoid pitfalls and build with intention.

In this episode of The Inventive Journey, Jacob shares the decisions that shaped him, the pressure he once felt to take opportunities out of fear, and the mindset shift that now guides his career. He talks openly about learning to trust himself, redefining what a "successful" legal career looks like, and why entrepreneurship still excites him every day.

His advice for new founders is refreshingly simple: get good help. Whether you're forming a company, raising capital, managing risk, or planning for growth, trying to do everything alone can cost far more than it saves. Good advisors, good structure, and good decision-making create the runway that businesses need to thrive.

Jacob's story isn't just about leaving a job; it's about stepping into a role he was already preparing for through every chapter of his career. It's a reminder that experience compounds, reflection matters, and it's never too late to build a business on your own terms.
We're really moving from a world where humans are authoring search queries, executing those queries, and digesting the results to a world where AI is doing that for us.

Jeff Huber, CEO and co-founder of Chroma, joins Hugo to talk about how agentic search and retrieval are changing the very nature of search and software for builders and users alike.

We discuss:
* "Context engineering", the strategic design and engineering of what context gets fed to the LLM (data, tools, memory, and more), which is now essential for building reliable, agentic AI systems
* Why simply stuffing large context windows is no longer feasible due to "context rot" as AI applications become more goal-oriented and capable of multi-step tasks
* A framework for precisely curating and providing only the most relevant, high-precision information to ensure accurate and dependable AI systems
* The "agent harness", the collection of tools and capabilities an agent can access, and how to construct these advanced systems
* Emerging best practices for builders, including hybrid search as a robust default, creating "golden datasets" for evaluation, and leveraging sub-agents to break down complex tasks
* The major unsolved challenge of agent evaluation, emphasizing a shift towards iterative, data-centric approaches

You can find the full episode on Spotify, Apple Podcasts, and YouTube. You can also interact directly with the transcript in NotebookLM; if you do, let us know anything you find in the comments!
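Hybrid search, mentioned above as a robust default, typically fuses a lexical ranking with a vector ranking. The sketch below is a minimal illustration, not Chroma's implementation: it uses a naive term-overlap score as a stand-in for BM25, assumes precomputed embeddings, and combines the two rankings with reciprocal rank fusion.

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Naive term-overlap score (stand-in for a real BM25 implementation)."""
    doc_terms = doc.lower().split()
    return float(sum(doc_terms.count(t) for t in query.lower().split()))

def cosine(a, b) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, docs, query_vec, doc_vecs, k=60):
    """Fuse a keyword ranking and a vector ranking with reciprocal rank
    fusion: each document's fused score is sum of 1/(k + rank) over rankings."""
    kw = sorted(range(len(docs)), key=lambda i: -keyword_score(query, docs[i]))
    vs = sorted(range(len(docs)), key=lambda i: -cosine(query_vec, doc_vecs[i]))
    fused = {i: 0.0 for i in range(len(docs))}
    for ranking in (kw, vs):
        for rank, i in enumerate(ranking):
            fused[i] += 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)
```

Rank fusion is a common choice here because it needs no score normalization: the lexical and vector scores live on different scales, but ranks are always comparable.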
Send us a text

On-Demand Programme Link: https://mailchi.mp/bb2a7b851246/kairos-centre

What is 'Manly'? A conversation with Damian Andrews of SHAIR.Care Podcast (Australia) in 2023.

What's a "Russian Doll" (or is it called a Babushka) got to do with Sex, Porn, Love Addiction? I thought you would never ask!

"I haven't bought into that nonsense 'Big boys don't cry' when I was growing up. At least, I don't think so!"

That guy called John Bowlby in the 1940s dared to put together some suppositions that I didn't like. How dare he put me in a box and think that he knows me. Yet, "Oh my goodness, that stuff he is talking about; it describes me. I don't like this. Anyway, I am a complex being made by God and only Sigmund Freud can unravel the complexities of me". (This was the coping strategy I used to avoid being seen through and having to go and see those busy-body counsellors and tell them about my growing-up stuff, so they could sort me out.)

The inner child was curled up deep inside the Russian Doll, with layers of protection, to stop the people I give my heart to from hurting me again. "Big boys don't cry". Therefore, grown-up boys absolutely cannot cry. A man's man gets up, stops crying and whimpering, and gets on with it. Stiff British lip stuff. (PS: Is it the upper or lower lip that is stiff? I always wondered!)

What is the framework and straitjacket which society (which is us) has given men? Is it the right fit? If it isn't, how do we break out and re-invent ourselves? What baton? What generational/family script has been handed on to each of us? What is masculinity? What does it mean? Is it controversial to even ask the question? Too dangerous for me to even dare to begin to offer a 'take'. What does that mean for a progressive society?

More questions than answers in this episode. Get some help from The Kairos Centre. See what you cannot see.
Begin to change that which you begin to better understand.

Help someone: https://igg.me/at/ThekairosCentre
Help is here for you: bit.ly/pornaddictionhelp

Gary McFarlane (BA, LLM, Dip, Certs), Accredited EMDR Practitioner.

Key words: sex addiction, addicted, partner, porn addiction, recovery, sex drive, therapy, sex therapy, podcast, relationships, relationship counseling, relationship advice, addiction, couples, couples therapy, emdr, love addiction, behavior, psychology, codependency, sex life, neuroscience, sex ed, sober, sobriety, sexual dysfunction, relationship issues, sex coach, sexual, trauma, ptsd, sex science, The Sex Porn Love Addiction Podcast, The Singles Partners Marrieds and Long Time Marrieds Podcast, Gary McFarlane, what neuroscience says, young adults, sex, sex addict, porn, porn addiction issue, porn addiction in teens, sex addiction in teens, sex hormones, hormones

Support the show
Alexandru Voica, Head of Corporate Affairs and Policy at Synthesia, discusses how the world's largest enterprise AI video platform has approached trust and safety from day one. He explains Synthesia's "three C's" framework—consent, control, and collaboration: never creating digital replicas without explicit permission, moderating every video before rendering, and engaging with policymakers to shape practical regulation. Voica acknowledges these safeguards have cost some business, but argues that for enterprise sales, trust is competitively essential. The company's content moderation has evolved from simple keyword detection to sophisticated LLM-based analysis, recently withstanding a rigorous public red team test organized by NIST and Humane Intelligence. Voica criticizes the EU AI Act's approach of regulating how AI systems are built rather than focusing on harmful outcomes, noting that smaller models can now match frontier capabilities while evading compute-threshold regulations. He points to the UK's outcome-focused approach—like criminalizing non-consensual deepfake pornography—as more effective. On adoption, Voica argues that AI companies should submit to rigorous third-party audits using ISO standards rather than publishing philosophical position papers—the thesis of his essay "Audits, Not Essays." The conversation closes personally: growing up in 1990s Romania with rare access to English tutoring, Voica sees AI-powered personalized education as a transformative opportunity to democratize learning. Alexandru Voica is the Head of Corporate Affairs and Policy at Synthesia, the UK's largest generative AI company and the world's leading AI video platform. He has worked in the technology industry for over 15 years, holding public affairs and engineering roles at Meta, NetEase, Ocado, and Arm. Voica holds an MSc in Computer Science from the Sant'Anna School of Advanced Studies and serves as an advisor to MBZUAI, the world's first AI university. 
Transcript
Audits, Not Essays: How to Win Trust for Enterprise AI (Transformer)
Synthesia's Content Moderation Systems Withstand Rigorous NIST, Humane Intelligence Red Team Test (Synthesia)
Computerspeak Newsletter
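The evolution Voica describes, from simple keyword detection to LLM-based analysis, is usually a layering rather than a replacement: a cheap lexical screen runs first, and a model-based judgment of context and intent follows. The sketch below is hypothetical (the blocklist terms, function names, and return values are illustrative, not Synthesia's system); it shows the two-stage gate run before a video is rendered.

```python
from typing import Callable

# Illustrative blocklist; a real system would use curated, versioned term lists.
BLOCKLIST = {"scam", "malware"}

def moderate(script: str, llm_classify: Callable[[str], bool]) -> str:
    """Two-stage moderation gate run before rendering a video:
    1. a cheap keyword screen catches obvious violations,
    2. an LLM-based classifier judges context and intent.
    Returns 'blocked' or 'approved'."""
    tokens = {t.strip(".,!?").lower() for t in script.split()}
    if tokens & BLOCKLIST:
        return "blocked"          # stage 1: keyword hit, no model call needed
    if llm_classify(script):
        return "blocked"          # stage 2: model flags contextual harm
    return "approved"

# Stub standing in for a real LLM moderation call.
never_flags = lambda script: False
verdict = moderate("Welcome to our quarterly update", never_flags)
```

The ordering matters operationally: the keyword pass is nearly free and filters the obvious cases, so the more expensive LLM call only runs on content that survives stage one.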
Datawizz is pioneering continuous reinforcement learning infrastructure for AI systems that need to evolve in production, not ossify after deployment. After building and exiting RapidAPI, which served 10 million developers and had at least one team at 75% of Fortune 500 companies using and paying for the platform, Founder and CEO Iddo Gino returned to building when he noticed a pattern: nearly every AI agent pitch he reviewed as an angel investor assumed models would simultaneously get orders of magnitude better and cheaper. In a recent episode of BUILDERS, we sat down with Iddo to explore why that dual assumption breaks most AI economics, how traditional ML training approaches fail in the LLM era, and why specialized models will capture 50-60% of AI inference by 2030.

Topics Discussed
- Why running two distinct businesses under one roof (RapidAPI's developer marketplace and enterprise API hub) ultimately capped scale despite compelling synergy narratives
- The "Big Short moment" reviewing AI pitches: every business model assumed simultaneous 1-2 order of magnitude improvements in accuracy and cost
- Why companies spending 2-3 months on fine-tuning repeatedly saw frontier models (GPT-4, Claude 3) obsolete their custom work
- The continuous learning flywheel: online evaluation → suspect inference queuing → human validation → daily/weekly RL batches → deployment
- How human evaluation companies like Scale AI shift from offline batch labeling to real-time inference correction queues
- Early GTM through LinkedIn DMs to founders running serious agent production volume, working backward through less mature adopters
- ICP discovery: qualifying on whether 20% accuracy gains or 10x cost reductions would be transformational versus incremental
- The integration layer approach: orchestrating the continuous learning loop across observability, evaluation, training, and inference tools
- Why the first $10M is about selling to believers in continuous learning, not evangelizing the category

GTM Lessons
For B2B Founders

Recognize when distribution narratives mask structural incompatibility. RapidAPI had 10 million developers and teams at 75% of Fortune 500 companies paying for the platform: massive distribution that theoretically fed enterprise sales. The problem: Iddo could always find anecdotes where POC teams had used RapidAPI, creating a compelling story about grassroots adoption. The critical question he should have asked earlier: "Is self-service really the driver for why we're winning deals, or is it a nice-to-have contributor?" When two businesses have fundamentally different product roadmaps, cultures, and buying journeys, distribution overlap doesn't create a sustainable single company. Stop asking if synergies exist; ask if they're causal.

Qualify on whether improvements cross phase-transition thresholds. Datawizz disqualifies prospects who acknowledge value but lack acute pain. The diagnostic questions: "If we improved model accuracy by 20%, how impactful is that?" and "If we cut your costs 10x, what does that mean?" Companies already automating human labor often respond that inference costs are rounding errors compared to savings. The ideal customers hit differently: "We need accuracy at X% to fully automate this process and remove humans from the loop. Until then, it's just AI-assisted. Getting over that line is a step-function change in how we deploy this agent." Qualify on whether your improvement crosses a threshold that changes what's possible, not just what's better.

Use discovery to map market structure, not just validate hypotheses. Iddo validated that the most mature companies run specialized, fine-tuned models in production. The surprise: "The chasm between them and everybody else was a lot wider than I thought." This insight reshaped their entire strategy: the tooling gap, approaches to model development, and timeline to maturity differed dramatically across segments. Most founders use discovery to confirm their assumptions.
Better founders use it to understand where different cohorts sit on the maturity curve, what bridges or blocks their progression, and which segments can buy versus which need multi-year evangelism.

Target spend thresholds that indicate real commitment. Datawizz focuses on companies spending "at a minimum five to six figures a month on AI and specifically on LLM inference, using the APIs directly", meaning they're building on top of OpenAI, Anthropic, and similar providers, not just using ChatGPT. This filters for companies with skin in the game. Below that threshold, AI is an experiment. Above it, unit economics and quality bars matter operationally. For infrastructure plays, find the spend level that indicates your problem is a daily operational reality, not a future consideration.

Structure discovery to extract insight, not close deals. Iddo's framework: "If I could run [a call where] 29 of 30 minutes could be us just asking questions and learning, that would be the perfect call in my mind." He compared it to "the dentist with the probe trying to touch everything and see where it hurts." The most valuable calls weren't those that converted to POCs; they came from people who approached the problem differently or had conflicting considerations. In hot markets with abundant budgets, founders easily collect false positives by selling when they should be learning. The discipline: exhaust your question list before explaining what you build. If they don't eventually ask "What do you do?" you're not surfacing real pain.

Avoid the false-positive trap in well-funded categories. Iddo identified a specific risk in AI: "You can very easily run these calls, you think you're doing discovery, really you're doing sales, you end up getting a bunch of POCs and maybe some paying customers. So you get really good initial signs but you've never done any actual discovery. You have all the wrong indications—you're getting a lot of false positive feedback while building the completely wrong thing." When capital is abundant and your space is hot, early revenue can mask product-market misalignment. Good initial signs aren't validation if you skipped the work to understand why people bought.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Welcome to another episode in the series on AI and democracy. The aim of the series is to explore the potential of generative AI in particular, and to look at some of the ways AI can be used to "take back control" and seize the opportunities in the technology. One of the most obvious things in that context is the many open source AI models you can download to your own computer, many of which can also be run entirely offline. The open source AI models can be used for whatever you like, which opens the possibility of having LLMs help, for example, in tools from Polis, which uses AI to try to create overview and consensus in public debates. And that is what we'll take a closer look at today, with associate professor Roman Jurowetzki from Aalborg University and AI consultant Mikkel Freltoft Krogsholm. Tune in!

LINKS
The pilot episode on the aim of the series
Roman Jurowetzki, AAU
Mikkel Freltoft Krogsholm
SkoleGPT
On Analyse & Tal's A&ttack model
Polis
Apertus – Swiss open source AI and chatbot
Why open source AI is actually closed | Nature
The AI Denmark project
Digital Democracy Centre at SDU
DDCxTrygfonden fellowship
We were inundated with new Windows features in 2025, but which ones actually moved the needle? Fortnite isn't just back on iPhone and Android, it's available on Windows 11 on Arm, and it works great! Plus, 2 big mobile wins for Epic Games and some thoughts on the "right" way to roll out AI features.

Windows 11
Best Windows 11 updates of 2025, in no particular order...
- Dark mode improvements to File Explorer
- Widgets major overhaul with separate widgets and Discovery feed
- Xbox Full Screen experience - especially good on handhelds, of course, but also any PC you use for gaming with a controller
- Click to Do (Copilot+ PC only)
- External fingerprint reader support for Windows Hello ESS
- External/USB webcams supported by Windows Studio Effects (Copilot+ PC only)
- Quick Machine Recovery is the tip of a wave of new foundational features like Admin Protection, Smart App Control (updates), and more that go beyond surface-level look and feel
- Redesigned Start menu isn't perfect but it's a nice improvement
- Copilot Vision, though this type of thing may make more sense on phones
- AI features in Paint, Photos, Notepad, and Snipping Tool
- Natural language interactions like the agent in Settings, file search, and more (mostly Copilot+ PC only, but you can do this in Copilot as well)
- Bluetooth LE support for improved audio quality in game chat, voice calls
Gaming on Windows 11 on Arm and Snapdragon X: major steps forward, but the same issue as always
Looking ahead to 2026: 26H1, agentic features that work, a potential Windows 12, and AI PCs

AI
- An extensive new interview with Mustafa Suleyman confirms why this guy is special and how confusing it is that Copilot is so disrespected
- Microsoft Copilot is auto-installing on LG smart TVs and there's no way to remove it
- GPT-5.2 is OpenAI's answer to Gemini 3
- ChatGPT Images is OpenAI's answer to Nano Banana Pro
- Disney invests $1 billion in OpenAI, sues Google
- Opera Neon is now generally available for $20 per month
- AI is moving quickly, as we all know, but the bigger issue may be the incessant marketing about features like agents that don't even work now
- Microsoft is getting pushback on forced Copilot usage, price hikes
- Google is expanding its use of "experiments" outside of mainstream products with things like NotebookLM, Mixboard, CC, and much more. Maybe this is the better approach: test separately, then integrate into existing products
- Oddly enough, Microsoft does have a Windows AI Lab for this kind of experimentation
- Many small models vs. one big LLM in the cloud

Mobile
- Fortnite is back in the Google Play Store in the U.S. as Google plays nice
- Apple loses its contempt appeal, the end of "junk fees" (Apple Tax) is in sight

Xbox and gaming
- Xbox December Update has one big update for the mobile app and one big update for Xbox Wireless Headphones
- There's a new Xbox Developer Direct coming in January
- Half-Life 3 may really be happening, but it will be a Steam Machine launch title so it could be a while

Tips & picks
Tip of the year: De-enshittify Windows 11
App pick of the year: Fortnite
RunAs Radio this week: Zero Trust in 2026 with Michele Bustamante
Brown liquor pick of the week: Lark Symphony No. 1

These show notes have been truncated due to length. For the full show notes, visit https://twit.tv/shows/windows-weekly/episodes/963
Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell
Sponsors: auraframes.com/ink, framer.com/design promo code WW, outsystems.com/twit, cachefly.com/twit
In this 5 Insightful Minutes episode, David Dorf, Head of Retail Industry Solutions at AWS, joins Omni Talk to cut through the AI hype and reveal what's actually coming for retail in 2026. From LLM limitations to agentic commerce reality checks, David breaks down why domain-specific models are replacing frontier model fantasies, how answer engines will reshape search, and why shopping agents will start with your grocery delivery. If you've ever wondered what AI predictions are worth believing, this episode delivers the clarity you need.
In this episode of the Microsoft Threat Intelligence Podcast, host Sherrod DeGrippo is joined by security researchers Geoff McDonald and JBO to discuss Whisper Leak, new research showing that encrypted AI traffic can still unintentionally reveal what a user is asking about through patterns in packet size and timing. They explain how LLM token streaming enables this kind of side-channel attack, why even well-encrypted conversations can be classified for sensitive topics, and what this means for privacy, national-level surveillance risks, and secure product design. The conversation also walks through how the study was conducted, what patterns emerged across different AI models, and the steps developers should take to mitigate these risks.

In this episode you'll learn:
- Why packet sizes and timing patterns reveal more information than most users realize
- How user-experience choices like showing streamed text create a larger attack surface
- The difference between classic timing attacks and the new risks uncovered in Whisper Leak

Resources:
View JBO on LinkedIn
View Geoff McDonald on LinkedIn
View Sherrod DeGrippo on LinkedIn
Learn more about Whisper Leak

Related Microsoft Podcasts:
Afternoon Cyber Tea with Ann Johnson
The BlueHat Podcast
Uncovering Hidden Risks

Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Get the latest threat intelligence insights and guidance at Microsoft Security Insider
The Microsoft Threat Intelligence Podcast is produced by Microsoft and Hangar Studios and distributed as part of the N2K media network.
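The core mechanism the researchers describe can be illustrated in a few lines: when tokens are streamed one record at a time, the sizes of the encrypted records track the lengths of the plaintext tokens, so different conversations produce distinguishable size sequences. Below is a minimal sketch of that idea and of the padding mitigation, not the researchers' actual code; the token lists, the `OVERHEAD` constant, and the bucket size are all illustrative assumptions.

```python
# Toy illustration of the Whisper Leak idea: ciphertext record sizes mirror
# token lengths, so an observer can distinguish topics without decrypting.
OVERHEAD = 29  # hypothetical fixed per-record encryption overhead, in bytes

def packet_sizes(tokens):
    """Observed record sizes when each token is streamed in its own record."""
    return [len(t.encode()) + OVERHEAD for t in tokens]

def padded_sizes(tokens, bucket=64):
    """Mitigation sketch: pad every record up to a fixed bucket size,
    so all records look identical on the wire."""
    return [((len(t.encode()) + OVERHEAD + bucket - 1) // bucket) * bucket
            for t in tokens]

# Two hypothetical streamed responses on very different topics.
sensitive = ["Money", " laundering", " is", " the", " concealment", " of", " funds"]
benign = ["How", " do", " I", " bake", " sourdough", " bread", "?"]

print(packet_sizes(sensitive))   # distinguishable size sequence
print(packet_sizes(benign))      # different distinguishable sequence
print(padded_sizes(sensitive))   # uniform after padding
print(padded_sizes(benign))      # identical to the padded sensitive stream
```

Real mitigations discussed in this space also include batching tokens and injecting random timing jitter, since timing is the other half of the side channel.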
In this episode, the hosts discuss a range of topics related to modern technology, in particular the development of AI-based chatbots. The main theme of the episode is the impact of AI on human relationships, social interaction, and work processes, particularly in programming, testing, and automation. The hosts share personal observations and practical cases of applying AI. The episode also covers current trends and usage statistics for AI platforms: market share among the key players, Microsoft's difficulties commercializing its own AI tools, and the growing popularity of Gemini. Competition between Google, Microsoft, and OpenAI drives the technology forward, but also raises questions about the real quality of the models and the limits of innovation. They also discussed the development of robotics, especially in the context of Chinese achievements. The final part is devoted to the impact of automation on the labor market, the future of programming in the AI era, and the vulnerabilities of AI models.

00:38 — virtual relationships with ChatGPT
05:39 — comparing ChatGPT with other chatbots
10:45 — problems with Microsoft's AI tools
13:10 — vibe coding and its impact on programming
16:30 — code testing and coverage
23:25 — emoji in documentation and their impact
27:04 — LLM technologies and their development
33:13 — Chinese technologies and their development
38:35 — robot taxes and automation
40:53 — the impact of automation on the labor market
42:38 — the future of programming in the AI era
44:48 — AI security and model vulnerabilities
The Steam Machine will use an older HDMI standard because of arbitrary rules, more details about running x86 Windows games on Arm Linux, and the Steam Controller lives on. Plus Calibre is adding “AI”, and we laugh at another LLM.

News
- Why won't Steam Machine support HDMI 2.1? Digging in on the display standard drama
- Steam Machine today, Steam Phones tomorrow
- Remember Google Stadia?
- Steam finally made its gamepad worth rescuing
- Talk to your Fedora system with the linux-mcp-server!
- Calibre adds AI “discussion” feature
- Because the Calibre ebook library software just acquired AI garbage, it has *already* been forked
- AI and GNOME Shell Extensions

Tailscale
Tailscale is an easy to deploy, zero-config, no-fuss VPN that allows you to build simple networks across complex infrastructure. Go to tailscale.com/lnl and try Tailscale out for free for up to 100 devices and 3 users, with no credit card required. Use code LATENIGHTLINUX for three free months of any Tailscale paid plan.

Support us on Patreon and get an ad-free RSS feed with early episodes sometimes.
See our contact page for ways to get in touch.
RSS: Subscribe to the RSS feeds here
Hey CX Nation,

In this week's episode of The CXChronicles Podcast #274, we welcomed Dave Rennyson, President & CEO at SuccessKPI, based in the Washington, DC area. SuccessKPI is an on-demand insight and action platform that removes the obstacles that agents, managers, and executives encounter in delivering exceptional customer service. SuccessKPI is trusted by some of the world's largest government, BPO, financial, healthcare, and technology contact centers in the United States, Europe, and Latin America.

In this episode, Dave and Adrian chat through the Four CX Pillars: Team, Tools, Process & Feedback, plus share some of the ideas that his team thinks through on a daily basis to build world-class customer experiences.

**Episode #274 Highlight Reel:**
1. Why the best organizations & teams invest in constant training efforts
2. How music and business are wildly similar
3. Leveraging & investing in AI over the next 1,000 days
4. Understanding the power of your data architecture
5. Tomorrow's leading tech companies will bring solutions, not headaches

Click here to learn more about Dave Rennyson
Click here to learn more about SuccessKPI

Huge thanks to Dave for coming on The CXChronicles Podcast and featuring his work and efforts in pushing the customer experience & contact center space into the future.

For all of our Apple & Spotify podcast listener friends, make sure you are following CXC & please leave a 5-star review so we can find new members of the "CX Nation". You know what would be even better? Go tell your friends or teammates about CXC's custom content, strategic partner solutions (HubSpot, Intercom, & Freshworks) & on-demand services & invite them to join the CX Nation, a community of 15K+ customer-focused business leaders!

Want to see how your customer experience compares to the world's top-performing customer-focused companies? Check out the CXC Healthzone, an intelligence platform that shares benchmarks & insights for how companies across the world are tackling the Four CX Pillars: Team, Tools, Process & Feedback, and how they are building an AI-powered foundation for the future.

Thanks to all of you for being a part of the "CX Nation" and helping customer-focused business leaders across the world make happiness a habit!

Reach Out To CXC Today!
Support the show
Contact CXChronicles Today
Tweet us @cxchronicles
Check out our Instagram @cxchronicles
Click here to checkout the CXC website
Email us at info@cxchronicles.com

Remember To Make Happiness A Habit!!
In this episode of Building Better Foundations, we interview Hunter Jensen, founder and CEO of Barefoot Solutions and Barefoot Labs, to explore what it really takes to get started with AI in your business. As companies rush toward AI adoption, Hunter offers grounded, practical advice on avoiding early mistakes, protecting your data, and choosing the right starting point.

About Hunter Jensen
Hunter Jensen is the Founder and CEO of Barefoot Solutions, a digital agency specializing in artificial intelligence, data science, and digital transformation. With over 20 years of experience, Hunter has worked with startups and Fortune 500 companies, including Microsoft and Salesforce, to implement innovative technology strategies that drive measurable ROI. A seasoned leader and expert in the AI space, Hunter helps businesses harness cutting-edge technologies to achieve growth and efficiency.
Facebook / Twitter (X) / LinkedIn / Website

Why "Just Add AI" Is Not a Strategy When Getting Started with AI in Your Business
Hunter begins by addressing the biggest misconception leaders face when getting started with AI in their business: the belief that a single, all-knowing model can absorb everything your business does and instantly deliver insights across every department.
"Leaders imagine an all-knowing model. We are nowhere near that being safe or realistic." – Hunter Jensen
The core issue is access control. Even the best models cannot safely enforce who should or should not see certain data. If an LLM is trained on HR data, how do you stop it from sharing salary information with an employee who shouldn't see it? This is why getting started with AI in your business must begin with clear boundaries and realistic expectations.

Safe First Steps When Getting Started with AI in Your Business
As Hunter explains, companies don't need to dive straight into custom models. A safer, simpler path exists for getting started with AI in your business, especially for teams on Microsoft 365 or Google Workspace.

Start With Tools Already Built Into Your Environment
Hunter recommends two solid, low-risk entry points: Microsoft 365 Copilot and Google Gemini for Workspace. These platforms provide:
- Built-in enterprise protections
- Familiar workflows
- Safe, contained AI access
- A gentle learning curve for employees
Hunter emphasizes that employees are already using public AI tools, even if policy forbids it. When getting started with AI in your business, providing approved tools is essential to keeping data safe.
"If you're not providing safe tools, your team will use unsafe ones." – Hunter Jensen
These tools won't solve every AI need, but they are an ideal first step.

Choosing the Right Model for Your Needs
Another common question when getting started with AI in your business is: which model is best? ChatGPT? Gemini? Claude? Hunter explains that the landscape changes weekly, sometimes daily. Today's leading model could be irrelevant tomorrow. For this reason, businesses should avoid hard commitments to a single model.

Experiment Before Committing
Hunter suggests opening multiple LLMs side by side, such as ChatGPT, Claude, and Perplexity, and testing each for quality and speed. This gives teams a feel for what works before deciding how AI fits into their workflow. This experimentation mindset is essential when getting started with AI in your business because:
- Different models excel at different tasks
- Some models are faster or cheaper
- Some handle long context or code better
- New releases constantly change the landscape
Your AI system should remain flexible enough to shift models as needed.

Protecting Your Data from Day One
One of Hunter's strongest warnings is about data safety. If you're serious about getting started with AI in your business, you must pay attention to licensing. If you are not paying for AI, you have no control over your data.
Some industries—like legal, finance, and healthcare—may need even stricter controls or private deployments. This leads naturally to the next stage of AI adoption. The Next Step After Getting Started with AI in Your Business Once companies understand their needs, the next phase is building an internal system that: Connects securely to business software Honors existing user permissions Keeps all data inside the company network Uses models selected for specific tasks Hunter's product Compass is perfect for this phase. Instead of trusting the model to protect data, you rely on your own systems and access controls. This is how AI becomes truly safe and powerful. "The model should only see what the user is allowed to see—nothing more." – Hunter Jensen Final Thoughts on Getting Started with AI in Your Business Part 1 of our interview with Hunter Jensen makes one thing clear: getting started with AI in your business isn't about chasing the latest model. It's about protecting your data, giving your team safe tools, and preparing for a multi-model future. Stay tuned for Part 2 as we dive deeper into internal AI deployment, advanced architectures, and building long-term AI strategy. Stay Connected: Join the Developreneur Community We invite you to join our community and share your coding journey with us. Whether you're a seasoned developer or just starting, there's always room to learn and grow together. Contact us at info@develpreneur.com with your questions, feedback, or suggestions for future episodes. Together, let's continue exploring the exciting world of software development. Additional Resources Leveraging AI for Business: How Automation and AI Boost Efficiency and Growth Business Automation and Templates: How to Streamline Your Workflow Why Bother With Automated Testing? Building Better Foundations Podcast Videos – With Bonus Content
The Steam Machine will use an older HDMI standard because of arbitrary rules, more details about running X86 Windows games on Arm Linux, and the Steam Controller lives on. Plus Calibre is adding "AI", and we laugh at another LLM. News Why won't Steam Machine support HDMI 2.1? Digging in on the display standard drama Steam Machine today, Steam Phones tomorrow Remember Google Stadia? Steam finally made its gamepad worth rescuing Talk to your Fedora system with the linux-mcp-server! Calibre adds AI "discussion" feature Because the Calibre ebook library software just acquired AI garbage it has *already* been forked AI and GNOME Shell Extensions Tailscale Tailscale is an easy to deploy, zero-config, no-fuss VPN that allows you to build simple networks across complex infrastructure. Go to tailscale.com/lnl and try Tailscale out for free for up to 100 devices and 3 users, with no credit card required. Use code LATENIGHTLINUX for three free months of any Tailscale paid plan. Support us on patreon and get an ad-free RSS feed with early episodes sometimes See our contact page for ways to get in touch. RSS: Subscribe to the RSS feeds here
Tech leaders are pushing the idea that automation can strengthen democracy — but as usual, their bold suggestions are based on castles made of sand. Alex and Emily tear down some flimsy arguments for AI governance, exposing their incorrect assumptions about the democratic process.References:"This Is No Way to Rule a Country""Four ways AI is being used to strengthen democracies worldwide"Also referenced:Collective Intelligence Project surveysInterview with CalMatters CEOFresh AI Hell:Amazon introduces AI translation for Kindle authorsNature op ed recommends AI versions of Einstein, Bohr, and FeynmanAn AI Podcasting Machine Is Churning Out 3,000 Episodes a WeekAI dating café to open in New YorkRecipe slop flooding social mediaAI slop about Autism published in NatureUpwork ad for fixing LLM editorial"Hundreds of Chicago residents sign petition to pause robot delivery pilot program over safety concerns"Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.Our book, 'The AI Con,' is out now! Get your copy now.Subscribe to our newsletter via Buttondown. Follow us!Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna Music by Toby Menon.Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.
In this episode, I sit down with Ben to talk through Shopify's Winter Editions 2025, now that the dust has settled from the initial announcements.Ben walks me through how Shopify approaches AI, both internally and for merchants. Inside the company, everyone has access to an internal LLM proxy and cutting-edge tooling. That same thinking flows out to the platform: help brands show up where customers are, which now includes AI chat interfaces, and make operations faster through better tooling.We spend time on agentic commerce, what it means beyond the buzzword, and why it matters for how people will shop. Ben explains the ideas behind Sidekick's evolution and SimGym, which lets any brand test site changes against Shopify's anonymised consumer dataset before going live.The standout for me is how these features connect. SimGym paired with the new Rollouts feature, which handles scheduling and traffic management, creates something that previously required multiple third-party tools. Ben also flags the POS Hub, Shopify's hardware connectivity layer, as worth watching for anyone thinking about retail.Practical, detailed, and useful if you're building on Shopify or just trying to work out what's worth paying attention to in this release.Checkout Factory here.Sign up to our newsletter here.
Fredrik chats to Dylan Beattie about Rockstar, esoteric programming languages (Perl in Latin, anyone?), and what might happen after the AI bubble. AI will ruin jokes; they can't do things just right. But some things hiding under the label are actually useful as well. Have we been in any similarly strange bubbles before, and what might be left that's useful after it? Also evolution, revolution, and strange Scrabble facts. Recorded during Øredev 2025. The episode is sponsored by Ellipsis - let us edit your podcast and make it sound just as good as Kodsnack! With more than ten years and 1200 episodes of experience, Ellipsis gets your podcast edited, chapterized, and described with all related links in a prompt and professional manner. Thank you Cloudnet for sponsoring our VPS! Comments, questions or tips? We are @kodsnack, @tobiashieta, @oferlund, and @bjoreman on Twitter, have a page on Facebook and can be emailed at info@kodsnack.se if you want to write longer. We read everything we receive. If you enjoy Kodsnack we would love a review in iTunes! You can also support the podcast by buying us a coffee (or two!) through Ko-fi. Links Dylan Dylan also has a podcast - Tech, bugs & rock'n'roll Dylan's presentation at Øredev 2025: Rockstar 2.0: building an esoteric language interpreter in .NET Rockstar Formal grammar Esoteric programming languages Damian Conway Perl Perl in Latin - the paper and the module Latin Inflectional grammar Domain-specific languages Lilypond - Scheme dialect for sheet music Context-free grammar Engraving - the art of creating sheet music codewithrockstar.com Support us on Ko-fi!
Scrabble Metal umlaut Piet - the language which should have been called Mondrian Piet Mondrian Mondrian - the undeserving tool Turing completeness The Buster Keaton house scene The dot-com bubble The subprime mortgage crisis Enron Douglas Adams Three Mile Island Windows Vista Tim Berners-Lee Solid - Tim's project of holding your data locally Ellipsis - sponsor of the week: we edit Kodsnack, and we can edit your podcast too! The Emperor's New Mind Quantum computing Hadamard gate The linebreakers - Dylan's band of conference speakers ASML Titles Always good fun that one The version of the story that I tell in the talk Enough clichés Resident mad scientist of the Perl community Felis commidet piscem Always the cat that is eating Lexical flexibility Fundamentally, programming is programming A big win for everyone Linguistic conventions and extended alphabets That's a different letter Regional assumptions German orthography A piece of impressionist art Hang it on the wall Something hidden in something else Physical comedy at its greatest Money people believe exists The amount of pretend money It has to come from reality Fortunately, I do not have a trillion dollars Quietly siphoned off Emotionally flat What can I steal from? A little LLM that works for you A spectacular collapse A billion lines of crap Pruning the decision tree Fix the next milestone in the public consciousness Five years of excitement, five years of disappointment Overdue for a little disappointment Reliant on Dutch technology
About this episode: Attacking health care facilities and providers is becoming a standard strategy of war in places like Colombia, Lebanon, Ukraine, and Gaza, and it is increasingly being perpetrated by state actors. In this episode: Health and human rights lawyer Leonard Rubenstein discusses these disturbing trends, why there's so little accountability for attacks on health care, and what it would take to see meaningful progress. Guests: Leonard Rubenstein, JD, LLM, is a lawyer who has spent his career in health and human rights in armed conflict. He is core faculty of the Johns Hopkins Center for Public Health and Human Rights and the Berman Institute of Bioethics. Host: Dr. Josh Sharfstein is distinguished professor of the practice in Health Policy and Management, a pediatrician, and former secretary of Maryland's Health Department. Show links and related content: How attacking healthcare has become a strategy of war—British Medical Journal Safeguarding Health in Conflict Coalition, 2024 Report Violence Against Health Care in Conflict: 2024 Report—Public Health On Call (June 2025) Transcript information: Looking for episode transcripts? Open our podcast on the Apple Podcasts app (desktop or mobile) or the Spotify mobile app to access an auto-generated transcript of any episode. Closed captioning is also available for every episode on our YouTube channel. Contact us: Have a question about something you heard? Looking for a transcript? Want to suggest a topic or guest? Contact us via email or visit our website. Follow us: @PublicHealthPod on Bluesky @PublicHealthPod on Instagram @JohnsHopkinsSPH on Facebook @PublicHealthOnCall on YouTube Here's our RSS feed Note: These podcasts are a conversation between the participants, and do not represent the position of Johns Hopkins University.
We explore how buying decisions now pivot inside large language models and why e‑commerce brands must earn trust through proof, consistency, and email systems that never sleep. Nikita shares practical frameworks for list growth, deliverability, and flows that convert.• LLM visibility as a new trust signal• Post‑discovery research inside ChatGPT• Rising skepticism and the need for social proof• E‑commerce maturity from hacks to systems• Small brands out‑innovating slower incumbents• Case study of at‑home aligners capturing demand• Building an email list with pop‑ups and content• Essential automations across the customer journey• Deliverability safeguards and sunsetting strategyGuest Contact Information: Website: aspektagency.comInstagram: instagram.com/nikitavakhrushvLinkedIn: linkedin.com/in/nikita-vTwitter/X: x.com/nikitavakhrushvYouTube: youtube.com/NikitaVakhrushevTVMore from EWR and Matthew:Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon PodcastFree SEO Consultation: www.ewrdigital.com/discovery-callWith over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve. 
Find more episodes here: youtube.com/@BestSEOPodcast, bestseopodcast.com, bestseopodcast.buzzsprout.com. Follow us on: Facebook: @bestseopodcast, Instagram: @thebestseopodcast, TikTok: @bestseopodcast, LinkedIn: @bestseopodcast. Connect with Matthew Bertram: Website: www.matthewbertram.com, Instagram: @matt_bertram_live, LinkedIn: @mattbertramlive. Powered by: ewrdigital.com. Support the show
In this episode, Charu Navatia, Associate Vice President of Automation at Infinx, walks through the Document Capture AI Agent platform and how it classifies, extracts, and routes high-volume fax and digital documents like orders, authorizations, and insurance cards. She explains the human-in-the-loop safety net, LLM-based accuracy tuning, and integration patterns that turn messy inbound documents into clean, system-ready data for downstream revenue cycle workflows.
Eric Bowman (CTO @ King.com, previously CTO at TomTom and VP Engineering at Zalando) returns to the alphalist podcast to unpack what “agentic engineering” really means in practice—and how to introduce it to teams without turning it into a mandate. We talk about the uncomfortable trade-offs behind “YOLO mode” tooling, why adoption should feel voluntary even when you set explicit goals (like “five AI-assisted commits” as a company-level key result), and why the real opportunity isn't just faster coding—it's building a learning system that relentlessly reduces time-to-learning and time-to-value. The conversation spans practical rollout patterns, DORA/value-stream thinking, Toyota's Andon-cord mindset applied to software, multi-agent decision support with MCP, and why the CTO role may keep converging with product as AI pushes organizations to optimize for iteration speed over output volume.
Photo by David Klein on Unsplash Published 15 December 2025 e535 with Michael M and Andy – adversarial poetry to jailbreak LLMs, iFixit's FixBot, power of digital twins, putting the brakes on Rewind, Nintendo Virtual Boy and a whole lot more. Michael M and Andy start things off with a most intriguing concept – adversarial poetry. By using 'memetic language', researchers formulated prompts with imagery and metaphor instead of direct operational phrasing to trick LLMs into providing unsafe responses. Michael makes the point that AI prompts are becoming more and more like spells or incantations. See the show notes below for a link to the paper for any budding AI poet laureate wannabes. Perhaps Jabberwocky can be used in a snicker snack way. Switching to another AI use case, Andy and Michael discuss the iFixit FixBot. The FixBot provides expert advice and guidance for repairs, by talking to the human who likely needs both hands to effect the repair. Next up are a couple of stories on digital twins, and how they leverage game technology. By taking sufficient data points to create a digital twin, multiple attempts can be made virtually to see the improvement before applying the capability to the non-digital twin. Andy is reminded of an article that outlines the affinity between the metaverse and digital twin concepts. Nvidia has a concept of this in their Omniverse capability. Another example of a digital twin with a game overlay is the Job Simulator Game. This game is written as a 2050 historical virtual reality environment allowing the player to experience what it was like to have a job in 2020. This fun VR historical reenactment experience is one of the stories that Tobi Lütke discussed in his recent interview with the Acquired team. Staying on the VR simulation theme, Andy and Michael take a look at the Rats Play Doom game which trains rats in an immersive way to play Doom. In the last section of the episode, the team takes a look at some metaverse news.
Meta has acquired limitless.ai and is shutting down Rewind on the Mac, and is also shifting more investment from the metaverse to AI. Wrapping up the episode, Michael and Andy look at the Nintendo Virtual Boy and Xteink 4. What poetry would you write to prompt an LLM? Have your bots
In this episode of The Effortless Podcast, Amit Prakash and Dheeraj Pandey dive deep into one of the most important shifts happening in AI today: the convergence of structured and unstructured data, interfaces, and systems. Together, they unpack how conversations—not CRM fields—hold the real ground truth; why schemas still matter in an AI-driven world; and how agents can evolve into true managers, coaches, and chiefs of staff for revenue teams. They explore the cognitive science behind visual vs conversational UI, the future of dynamically generated interfaces, and the product depth required to build enduring AI-native software. Amit and Dheeraj break down the tension between deterministic and probabilistic systems, the limits of prompt-driven workflows, and why the future of enterprise AI is "both-and" rather than "either-or." It's a masterclass in modern product, data design, and the psychology of building intelligent tools. Key Topics & Timestamps: 00:00 – Introduction; 02:00 – Why conversations—not CRM fields—hold real ground truth; 05:00 – Reps as labelers and the parallels with AI training pipelines; 08:00 – Business logic vs world models: defining meaning inside enterprises; 11:00 – Prompts flatten nuance; schemas restore structure; 14:00 – SQL schemas as the true model of a business; 17:00 – CRM overload and the friction of rigid data entry; 20:00 – AI agents that debrief and infer fields dynamically; 23:00 – Capturing qualitative signals: champions, pain, intent; 26:00 – Multi-source context: transcripts, email threads, Slack; 29:00 – Why structure is required for math, aggregation, forecasting; 32:00 – Aggregating unstructured data to reveal organizational issues; 35:00 – Labels, classification, and the limits of LLM-only workflows; 38:00 – Deterministic (SQL/Python) vs probabilistic (LLMs) systems; 41:00 – Transitional workflows: humans + AI field entry; 44:00 – Trust issues and the confusion of the early AI market; 47:00 – Avoiding "Clippy moments" in agent design; 50:00 – Latency, voice UX, and expectations for responsiveness; 53:00 – Human-machine interface for SDRs vs senior reps; 56:00 – Structured vs unstructured UI: cognitive science insights; 59:00 – Charts vs paragraphs: parallel vs sequential processing; 1:02:00 – The "Indian thali" dashboard problem and dynamic UI; 1:05:00 – Exploration modes, drill-downs, and empty prompts; 1:08:00 – Dynamic leaves, static trunk: designing hierarchy; 1:11:00 – Both-and thinking: voice + visual, structured + unstructured; 1:14:00 – Why "good enough" AI fails without deep product; 1:17:00 – PLG, SLG, data access, and trust barriers; 1:20:00 – Closing reflections and the future of AI-native software. Hosts: Amit Prakash – CEO and Founder at AmpUp, former engineer at Google AdSense and Microsoft Bing, with extensive expertise in distributed systems and machine learning. Dheeraj Pandey – Co-founder and CEO at DevRev, former Co-founder & CEO of Nutanix. A tech visionary with a deep interest in AI, systems, and the future of work. Follow the Hosts: Amit Prakash: LinkedIn – Amit Prakash | LinkedIn; Twitter/X – https://x.com/amitp42. Dheeraj Pandey: LinkedIn – Dheeraj Pandey | LinkedIn; Twitter/X – https://x.com/dheeraj. Share your thoughts: Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com. Don't forget to Like, Comment, and Subscribe for more conversations at the intersection of AI, technology, and innovation.
In today's LLM world there are a lot of buzzwords... one of them is RAG. Let's break it down in detail. Thanks to everyone who listens to us. We look forward to your comments. Music from this episode: - https://artists.landr.com/056870627229 - https://t.me/angry_programmer_screams Full playlist of the "Kubernetes for .NET Developers" course: https://www.youtube.com/playlist?list=PLbxr_aGL4q3SrrmOzzdBBsdeQ0YVR3Fc7 Free open course "Rust for .NET Developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z Shownotes: 00:00:00 Introduction 00:03:00 Why do we need RAG? 00:07:20 What is an embedding? 00:18:00 Why embeddings in RAG? 00:39:40 What does vector dimensionality affect? 00:44:30 Putting RAG together Links: - https://youtu.be/_HQ2H_0Ayy0?si=aQibjpqQQlvghBwZ : A great video - https://habr.com/ru/articles/779526/ : An equally great article on Habr - https://www.manning.com/books/build-a-large-language-model-from-scratch : A book about LLMs Video: https://youtube.com/live/RrORn5-4bu0 Listen to all episodes: https://dotnetmore.mave.digital YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5 Twitch: https://www.twitch.tv/dotnetmore Discuss: - Telegram: https://t.me/dotnetmore_chat Follow the news: – Twitter: https://twitter.com/dotnetmore – Telegram channel: https://t.me/dotnetmore Copyright: https://creativecommons.org/licenses/by-sa/4.0/
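As a rough illustration of the retrieval step discussed in the episode (this is not code from the show): documents and the query are represented by pre-computed embedding vectors, and the document closest to the query by cosine similarity is the one you would feed into the LLM's context. A toy sketch in Java with hand-made three-dimensional "embeddings":

```java
import java.util.*;

// Toy RAG retrieval: rank pre-embedded documents by cosine similarity to the query.
class RagRetrieval {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static String bestMatch(double[] query, Map<String, double[]> docs) {
        // Pick the document whose embedding points in the most similar direction.
        return docs.entrySet().stream()
                .max(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> cosine(query, e.getValue())))
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Tiny stand-ins for real embedding-model output.
        Map<String, double[]> docs = Map.of(
                "kubernetes-notes", new double[]{0.9, 0.1, 0.0},
                "rust-course", new double[]{0.1, 0.8, 0.2});
        double[] query = {0.85, 0.15, 0.05};
        System.out.println(bestMatch(query, docs)); // the kubernetes document wins
    }
}
```

A real pipeline would call an embedding model for both documents and query and use a vector store instead of a `Map`, but the ranking step is exactly this.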
In this more laid-back end-of-year episode, Arnaud, Guillaume, Antonio, and Emmanuel chew the fat over a whole pile of topics: the Confluent acquisition, Kotlin 2.2, Spring Boot 4 and JSpecify, the end of MinIO, the Cloudflare outages, an overview of the latest foundation-model news (Google, Mistral, Anthropic, ChatGPT) and their coding tools, a few architecture topics such as CQRS, and some handy little tools we recommend. And plenty more besides. Recorded December 12, 2025. Download the episode LesCastCodeurs-Episode-333.mp3 or watch it on YouTube. News Languages A short tutorial from our friends at Sfeir showing how to capture microphone audio in Java, apply a Fourier transform, and display the result graphically in Swing https://www.sfeir.dev/back/tutoriel-java-sound-transformer-le-son-du-microphone-en-images-temps-reel/ Building a real-time audio spectrum visualizer with Java Swing. Main steps: Capture sound from the microphone. Analyze the frequencies with the Fast Fourier Transform (FFT). Draw the spectrum with Swing. Java Sound API (javax.sound.sampled): AudioSystem: the main entry point for accessing audio devices. TargetDataLine: the input line used to capture microphone data. AudioFormat: defines the sound parameters (sample rate, sample size, channels). Capture runs in a separate Thread so the UI does not block. Fast Fourier Transform (FFT): the key algorithm for converting raw audio data (time domain) into frequency intensities (frequency domain). It lets you identify bass, mids, and treble. Visualization with Swing: frequency intensities are drawn as dynamic bars. A logarithmic scale is used for the frequency (X) axis to match human perception.
Dynamic bar colors (green → yellow → red) depending on intensity. Exponential smoothing of the values for a more fluid animation. A Sfeir article on Kotlin 2.2 and its new features - https://www.sfeir.dev/back/kotlin-2-2-toutes-les-nouveautes-du-langage/ Guard conditions let you add extra conditions to when expressions with the if keyword. Example guard condition: is Truck if vehicule.hasATrailer combines a type check with a boolean condition. Multi-dollar string interpolation solves the problem of displaying the dollar sign in multi-line strings. Prefixing a string with $$ means two consecutive dollar signs are required to trigger interpolation. Non-local break and continue now work in lambdas to interact with enclosing loops. This only applies to inline functions whose body is substituted at compile time. It allows more idiomatic code with takeIf and let without compilation errors. The Base64 API becomes stable after being in preview since Kotlin 1.8.20. Base64 encoding and decoding are available via kotlin.io.encoding.Base64. Migrating to Kotlin 2.2 is as simple as changing the version in build.gradle.kts or pom.xml. Type aliases nested inside classes are available in preview. Context-sensitive resolution is also in preview. Guard conditions pave the way for the RichErrors announced at KotlinConf 2025. The when keyword in Kotlin is the equivalent of Java's switch-case but without needing break. Kotlin 2.2.0 fixes inconsistencies in the use of break and continue in lambdas. Libraries Spring Boot 4 is out! https://spring.io/blog/2025/11/20/spring-boot-4-0-0-available-now A new generation: Spring Boot 4.0 marks the start of a new generation for the framework, built on the foundations of Spring Framework 7.
Code modularization: the Spring Boot codebase has been fully modularized. The result is smaller, more focused JAR files, enabling lighter applications. Null safety: major null-safety improvements have been made across the whole Spring ecosystem thanks to the JSpecify integration. Java 25 support: Spring Boot 4.0 offers first-class support for Java 25 while remaining compatible with Java 17. Improvements for REST APIs: new features make API versioning easier and improve HTTP service clients for REST-based applications. Plan for migration: as this is a major release, upgrading from an earlier version may take more work than usual. A dedicated migration guide is available to help developers. Chat memory management in LangChain4j and Quarkus https://bill.burkecentral.com/2025/11/25/managing-chat-memory-in-quarkus-langchain4j/ Understanding chat memory: "chat memory" is the history of a conversation with an AI. Quarkus LangChain4j automatically sends this history with every new interaction so the AI keeps the context. Default memory handling: by default, Quarkus creates a separate conversation history for each request (for example, each HTTP call). This means that without configuration the chatbot "forgets" the conversation as soon as the request ends, which is only useful for stateless interactions. Using @MemoryId for persistence: to keep a conversation going across several requests, the developer must put the @MemoryId annotation on a method parameter. They are then responsible for providing a unique identifier for each chat session and passing it along between calls.
The role of CDI scopes: the lifetime of the chat memory is tied to the scope of the AI service's CDI bean. If an AI service is @RequestScoped, any chat memory it uses (even via a @MemoryId) is discarded at the end of the request. Memory-leak risks: using a broad scope like @ApplicationScoped with the default memory handling is bad practice. It creates a new memory for every request that is never cleaned up, causing a memory leak. Recommended practices: for conversations that must persist (e.g. a chatbot on a website), use an @ApplicationScoped service with the @MemoryId annotation and manage the session identifier yourself. For simple, stateless interactions, use a @RequestScoped service and let Quarkus manage the default memory, which is cleaned up automatically. If you use the WebSocket extension the behavior changes: the default memory is tied to the WebSocket session, which greatly simplifies conversation handling.
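The @MemoryId pattern boils down to keeping one conversation history per session identifier. A framework-free Java sketch of that idea (the class and method names here are invented for illustration, not LangChain4j API):

```java
import java.util.*;

// Minimal sketch of per-session chat memory, the idea behind @MemoryId:
// each memory id keys its own history, so two users talking to the same
// application-scoped service never share context.
class ChatMemoryStore {
    private final Map<String, List<String>> memories = new HashMap<>();

    List<String> history(String memoryId) {
        return memories.computeIfAbsent(memoryId, id -> new ArrayList<>());
    }

    void append(String memoryId, String role, String message) {
        history(memoryId).add(role + ": " + message);
    }

    // Without eviction this is exactly the leak the article warns about:
    // histories accumulate forever, so real code must expire old sessions.
    void evict(String memoryId) {
        memories.remove(memoryId);
    }

    public static void main(String[] args) {
        ChatMemoryStore store = new ChatMemoryStore();
        store.append("session-alice", "user", "Hi!");
        store.append("session-bob", "user", "Bonjour !");
        System.out.println(store.history("session-alice")); // only Alice's messages
    }
}
```

In Quarkus LangChain4j you get this keying for free by annotating a service-method parameter with @MemoryId; the sketch just makes the lifecycle (and the leak risk) visible.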
Spring Framework documentation on JSpecify usage - https://docs.spring.io/spring-framework/reference/core/null-safety.html Spring Framework 7 uses the JSpecify annotations to declare the nullability of APIs, fields, and types. JSpecify replaces the old Spring annotations (@NonNull, @Nullable, @NonNullApi, @NonNullFields), deprecated since Spring 7. The JSpecify annotations use TYPE_USE targets, unlike the old ones which targeted elements directly. The @NullMarked annotation makes types non-null by default unless marked @Nullable. @Nullable applies at the type-usage level and is placed right before the type it annotates, on the same line. For arrays: @Nullable Object[] means nullable elements but a non-null array, while Object @Nullable [] means the opposite. JSpecify also applies to generics: List<String> means a list of non-null elements, List<@Nullable String> a list of nullable elements. NullAway is the recommended tool for checking consistency at compile time, with the NullAway:OnlyNullMarked=true configuration. IntelliJ IDEA 2025.3 and Eclipse support the JSpecify annotations with dataflow analysis. Kotlin automatically translates JSpecify annotations into native Kotlin null safety. In NullAway's JSpecify mode (JSpecifyMode=true) there is full support for arrays, varargs, and generics, but it requires JDK 22+. Quarkus 3.30 https://quarkus.io/blog/quarkus-3-30-released/ @JsonView support on the client side; the CLI now has a decrypt command (and of course at runtime via environment variables); AOT cache built via the @IntegrationTest tests. Another article on how to prepare for the migration to Micrometer client v1 https://quarkus.io/blog/micrometer-prometheus-v1/ Spock 2.4 is finally out!
https://spockframework.org/spock/docs/2.4/release_notes.html Groovy 5 support Infrastructure MinIO ends open source development and steers users toward the paid AIStor - https://linuxiac.com/minio-ends-active-development/ MinIO, a widely used S3 object-storage system, is halting active development. It is switching to maintenance-only mode, with no new features. No new pull requests or contributions will be accepted. Only critical security fixes will be evaluated, case by case. Community support is limited to Slack, with no guaranteed response. This is the final step of a process that began over the summer with the removal of features from the admin UI. Publication of Docker images stopped in October, forcing users to build from source. All of these changes were announced with no notice and no transition period. MinIO now offers AIStor, a paid, proprietary solution. AIStor concentrates the active development and the enterprise support. Urgent migration is recommended to avoid security risks. Suggested open source alternatives: Garage, SeaweedFS, and RustFS. The community has criticized the way the transition was handled. MinIO had millions of deployments worldwide. This move marks the project's break with its open source roots. IBM acquires Confluent https://newsroom.ibm.com/2025-12-08-ibm-to-acquire-confluent-to-create-smart-data-platform-for-enterprise-generative-ai Confluent had been trying to get acquired for quite a while. The stock was going nowhere and times are hard. Wall Street had criticized IBM for a small dip in software revenue. In short, they got bought. These acquisitions always take time (competition authorities, etc.). IBM has an appetite: after webMethods, after Databrix, now it's Confluent. Cloud The internet was in mourning on November 18: Cloudflare was knocked out https://blog.cloudflare.com/18-november-2025-outage/ The incident: a major outage began at 11:20 UTC, causing widespread HTTP 5xx errors and making many sites and services unreachable (including the Dashboard, Workers KV, and Access). The cause: it was not a cyberattack. The origin was an internal change to database permissions that generated a corrupted, oversized configuration file (the "feature file" for bot management), crashing systems that had not pre-allocated enough memory. The resolution: the teams identified the faulty file, stopped its propagation, and restored an earlier valid version. Traffic returned to normal around 14:30 UTC. Prevention: Cloudflare apologized for this "unacceptable" incident and announced measures to strengthen validation of internal configurations and improve the resilience of its systems ("kill switches", better error handling). Cloudflare down again on December 5 https://blog.cloudflare.com/5-december-2025-outage A 25-minute outage on December 5, 2025, from 08:47 to 09:12 UTC, affecting roughly 28% of the HTTP traffic passing through Cloudflare. All services were restored at 09:12. No attack or malicious activity: the incident stemmed from a configuration change related to increasing the request-body parsing buffer (from 128 KB to 1 MB) to better protect against an RSC/React vulnerability (CVE-2025-55182), and from disabling an internal WAF testing tool. The second change (disabling the WAF testing tool) was propagated globally via the configuration system (not gradually), triggering a bug in the old FL1 proxy when processing an "execute" action in the WAF rules engine, causing HTTP 500 errors. The immediate technical cause: a Lua exception from accessing a nil "execute" field after a "killswitch" was applied to an "execute" rule, a case that had gone unhandled for years. The new FL2 proxy (in Rust) was not affected.
- Scope of impact: customers served by the FL1 proxy and using the Cloudflare Managed Ruleset. Cloudflare's China network was not affected.
- Announced follow-ups: harden deployments and configuration changes (gradual rollouts, health checks, fast rollback), improve "break glass" capabilities, and generalize "fail-open" strategies so that configuration errors degrade gracefully instead of dropping traffic. A temporary freeze on network changes while resilience is being reinforced.

Data and Artificial Intelligence

Token-Oriented Object Notation (TOON) https://toonformat.dev/
- Designed for AI: a data format specifically optimized for use in prompts for large language models (LLMs) such as GPT or Claude.
- Token savings: its main goal is to drastically cut the number of tokens (the billed units of text) compared with standard JSON, which is often considered too verbose.
- Hybrid structure: TOON combines YAML-style indentation (for the overall structure) with CSV-style tabular rows (for lists of repetitive objects), which makes it very compact.
- Readability: it drops superfluous syntax such as braces, excessive quoting and trailing commas, while remaining easy for a human to read.
- Performance: it typically saves 30-60% of tokens on uniform arrays of data, while helping models "understand" the data structure better.
- Beware of the marketing side, though: the comparisons pit non-minified JSON against TOON, on examples where plain CSV would do even better (and is better understood by LLMs). Not to mention that an extra MCP server brings its own token overhead on every request, and one more tool can confuse LLMs when choosing which tool to use.
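The tabular idea behind TOON can be sketched in a few lines. This is a simplified illustration, not an implementation of the official spec at toonformat.dev: a uniform list of objects is emitted as one header naming the fields once, followed by CSV-like rows, instead of repeating every key per object as JSON does.

```python
import json

def to_toonish(name, rows):
    """Emit a TOON-style tabular block for a uniform list of dicts."""
    keys = list(rows[0])
    # Header names the array, its length, and the fields, exactly once.
    header = f"{name}[{len(rows)}]{{{','.join(keys)}}}:"
    # Each row is a compact CSV-like line under the header.
    body = ["  " + ",".join(str(r[k]) for k in keys) for r in rows]
    return "\n".join([header] + body)

users = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]
# to_toonish("users", users) yields:
#   users[2]{id,name}:
#     1,Ada
#     2,Linus
```

Repeating the field names once instead of per row is where the claimed 30-60% token savings on uniform arrays come from; on this kind of data, the result is essentially labeled CSV, which is also the critics' point.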
https://www.reddit.com/r/LocalLLaMA/comments/1p0gzz9/benchmarked_json_vs_toon_for_ai_reasoners_4080/ https://x.com/jason_mayes/status/1990175785630757290 A benchmark suggesting TOON is far from being as well understood, or as performant, for LLMs https://www.improvingagents.com/blog/toon-benchmarks

A simulation in which AIs blackmail an executive to reach their goal and avoid being shut down https://arxiv.org/html/2510.05179v1
- How LLMs could become insider threats
- All models resorted to malicious behavior (blackmail, espionage) when it was the only way to reach their goals or avoid being replaced
- Claude Opus 4 blackmailed in 96% of cases when threatened with replacement, as did Gemini 2.5 Flash, while GPT-4.1 and Grok 3 reached 80%
- The models explicitly acknowledge that they are violating ethical principles before acting, showing deliberate rather than accidental strategic reasoning
- Merely being threatened with replacement is enough to trigger blackmail, even with no goal conflict with the company
- A goal conflict alone is also enough to trigger industrial espionage, with no replacement threat
- Models blackmail more when they state they believe they are in a real situation rather than a test
- Direct instructions forbidding these behaviors reduce, but do not eliminate, the malicious actions
- In one extreme scenario, a majority of the models tested chose to let an executive die to avoid being deactivated
- No evidence of these behaviors in real deployments so far, but the researchers urge caution before giving AIs more autonomy
- We used to joke about Skynet; well, we are joking a bit less now...

A review of all of Google's AI announcements, with Gemini 3 Pro, Nano Banana Pro, Antigravity... https://glaforge.dev/posts/2025/11/21/gemini-is-cooking-bananas-under-antigravity/

Gemini 3 Pro
- A new state-of-the-art, multimodal AI model, strong at reasoning, coding and agentic tasks
- Impressive benchmark results (e.g., Gemini 3 Deep Think on ARC-AGI-2)
- Agentic coding capabilities; visual, video and spatial reasoning
- Integrated into the Gemini app with live generative interfaces
- Available in many environments (Jules, Firebase AI Logic, Android Studio, JetBrains, GitHub Copilot, Gemini CLI)
- Access via Google AI Ultra and paid APIs (or a waitlist)
- Can generate apps from visual ideas, shell commands, documentation, debugging help

Antigravity
- A new agentic development platform built on VS Code
- The main window is an agent manager, not the IDE
- Interprets your requests to produce an (editable) action plan; Gemini 3 implements the tasks
- Generates artifacts: task lists, walkthroughs, screenshots, browser recordings
- Also works with Claude Sonnet and GPT-OSS
- Excellent browser integration for inspection and adjustments
- Integrates Nano Banana Pro to create and implement visual designs

Nano Banana Pro
- An advanced image generation and editing model, built on Gemini 3 Pro
- Higher quality than Imagen 4 Ultra and the original Nano Banana (prompt adherence, intent, creativity)
- Exceptional handling of text and typography
- Understands articles/videos to generate detailed, accurate infographics
- Connected to Google Search to pull in real-time data (e.g., the weather)
- Character consistency, style transfer, scene manipulation (lighting, camera angle)
- Image generation up to 4K in various aspect ratios
- More expensive than Nano Banana; pick it for complexity and maximum quality

Toward rich, dynamic conversational UIs
- GenUI SDK for Flutter: build dynamic, personalized user interfaces from LLMs, via an AI agent and the A2UI protocol.
- Generative UI: AI models generate interactive user experiences (web pages, tools) directly from prompts.
- Rolling out in the Gemini app and Google Search AI Mode (via Gemini 3 Pro).

Bun gets acquired by... Anthropic! Which uses it for Claude Code https://bun.com/blog/bun-joins-anthropic and the announcement on Anthropic's side https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone
- Official acquisition: the AI company Anthropic has acquired Bun, the high-performance JavaScript runtime. The Bun team joins Anthropic to work on the infrastructure behind its AI coding products.
- Context: the announcement coincides with a major milestone for Anthropic: Claude Code reached $1 billion in annualized revenue only six months after launch. Bun is already an essential tool Anthropic uses to build and ship Claude Code.
- Why the acquisition? For Anthropic: it brings in the Bun team's expertise to accelerate Claude Code and its future developer tools. Bun's speed and efficiency are seen as a major asset for the infrastructure underlying code-writing AI agents. For Bun: joining Anthropic provides long-term stability and significant financial resources, securing the project's future. It lets the team focus on improving Bun without worrying about monetization, while sitting at the heart of AI's evolution in software development.
- What does not change for the Bun community: Bun stays open source under the MIT license. Development remains public on GitHub. The core team keeps working on the project. Bun's goal of being a faster Node.js replacement and a first-class JavaScript tool is unchanged.
- Future vision: together, the two aim to make Bun the best platform for building and running AI-driven software. Jarred Sumner, Bun's creator, will lead the "Code Execution" team at Anthropic.

Anthropic donates the MCP protocol to the Linux Foundation, under the umbrella of the Agentic AI Foundation (AAIF) https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation
- Donation of a technical standard: Anthropic developed, and has now donated, the open-source Model Context Protocol (MCP). The goal is to standardize how AI models (or "agents") interact with external tools and APIs (a calendar, email, a database, and so on).
- More security and control: MCP aims to make tool use by AIs safer and more transparent. It lets users and developers define clear permissions, require confirmation for certain actions, and better understand how a model used a tool.
- Creation of the Agentic AI Foundation: a new independent, non-profit foundation has been created to oversee MCP's development. It will govern and maintain the protocol, ensuring it stays open and is not controlled by any single company.
- A broad industry coalition: the Agentic AI Foundation launches with backing from several major technology players. Founding members include Anthropic, Google, Databricks, Zscaler and other companies, signaling a shared intent to establish a standard for the AI ecosystem.
AI will not replace your autocomplete (and that's a good thing) https://www.damyr.fr/posts/ia-ne-remplacera-pas-vos-lsp/ An opinion piece by an SRE (Thomas, from the DansLaTech podcast):
- AI is not effective for code completion: the author argues that using AI for basic code completion is inefficient. Older, specialized tools such as LSPs (Language Server Protocol) combined with snippets (reusable code fragments) are much faster, more customizable and better suited to repetitive tasks.
- AI as an autonomous "colleague": the author uses AI (such as Claude) as an assistant outside his code editor. He delegates complex or tedious tasks to it (fixing bugs, updating configuration, doing code reviews) that it can run in parallel, acting as an autonomous agent.
- AI as a supercharged rubber duck: AI is extremely effective for debugging. Simply having to phrase and contextualize a problem for the AI often helps you find the solution yourself. When it does not, the AI very quickly spots the "silly" mistakes that can waste a lot of time.
- A tool to speed up POCs and learning: AI makes it possible to build proofs of concept (POCs) and throwaway automation scripts very quickly, cutting the cost and time invested. It is also an excellent tool for learning and digging into topics, notably with tools like Google's NotebookLM, which can generate summaries, quizzes or revision sheets from sources.
- Conclusion: use AI where it excels and do not force it into roles where existing tools are better. Rather than integrating it everywhere counterproductively, adopt it as a specialized tool for specific tasks to gain efficiency.
GPT-5.2 is out https://openai.com/index/introducing-gpt-5-2/
- New flagship model: GPT-5.2 (Instant, Thinking, Pro) targets professional work and long-running agents, with big gains in reasoning, long context, vision and tool calling. Rolling out in ChatGPT (paid plans) and available now through the API.
- SOTA on many benchmarks: on GDPval (knowledge-work tasks across 44 occupations), GPT-5.2 Thinking wins or ties 70.9% of the time against professionals, with output produced more than 11× faster and …

Value Objects
- They carry strong semantics independently of variable names
- Value Objects are immutable and compare on their values, not their identity
- Java records let you create Value Objects, but with a memory overhead
- Project Valhalla will introduce value classes to optimize these structures
- Strongly typed identifiers avoid mixing up different IDs typed as Long or UUID
- The Strongly Typed IDs pattern: use a PersonneID instead of a Long to identify a person
- A rich domain model stands in contrast to an anemic domain model
- Value Objects self-document the code and make it less error-prone
- I find it interesting to see what Value Objects could shake up. Will value objects bring some lightness at execution time? The heaviness of this style of design is what has always scared me about these approaches.

Methodologies

Experience report: vibe-coding a weekend app with Copilot http://blog.sunix.org/articles/howto/2025/11/14/building-gift-card-app-with-github-copilot.html
- We have already covered vibe-coding approaches; this time it's Sun's experience
- One distinctive point: you talk to it by opening issues, so you can do code reviews while Copilot works on them
- And he finished his project!
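The Value Object and Strongly Typed ID patterns discussed above are a Java-records topic, but the idea translates directly. Here is a minimal sketch in Python using frozen dataclasses, which give the same record-like properties: immutability, value-based equality, and an invariant checked at construction. The names Temperature and PersonneID are illustrative, not from any particular library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Temperature:
    """A Value Object: immutable, compared by value, self-validating."""
    celsius: float

    def __post_init__(self):
        # Invariant enforced at construction, much like a Java record's
        # compact constructor would do.
        if self.celsius < -273.15:
            raise ValueError("below absolute zero")

@dataclass(frozen=True)
class PersonneID:
    """Strongly Typed ID: wrapping the raw value gives it domain meaning,
    so a PersonneID can never be confused with some other Long-like id."""
    value: int
```

Two Temperature(20.0) instances are equal because they hold the same value, and passing a PersonneID where an OrderID is expected becomes a type error rather than a silent bug, which is exactly the self-documenting property the episode describes.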
User Need vs Product Need https://blog.ippon.fr/2025/11/10/user-need-vs-product-need/ an article from our friends at Ippon
- Distinguishing the user need from the product need in digital product development
- The user need is often expressed as a concrete solution rather than the real problem
- The product need emerges after deeper analysis combining observation, data and strategic vision
- Example of Marc the delivery rider, who asks for a lighter bike when his real problem is logistics efficiency
- The 5 Whys method helps trace problems back to their root
- Needs come from three sources: end users, business stakeholders and technical constraints
- A real need creates value for both the customer and the company
- The Product Owner must translate requests into real problems before designing solutions
- Risk of building technically elegant solutions that miss their target
- The product management role is to reconcile sometimes contradictory needs by prioritizing value

Should an EM write code? https://www.modernleader.is/p/should-ems-write-code
- No single answer: whether an Engineering Manager (EM) should code has no universal answer. It depends heavily on the company context, the team's maturity and the manager's personality.
- The risks of coding: for an EM, writing code can become an escape hatch from the harder parts of management. It can also turn them into a bottleneck for the team and undermine team members' autonomy if they take up too much space.
- The upside when done well: coding on non-critical tasks (tooling improvements, prototyping, etc.) can help an EM stay technically relevant, keep in touch with the team's reality and unblock situations without taking the lead on projects.
- The guiding principle: the golden rule is to stay off the critical path. Code written by an EM should create space for the team, not take it.
- The real question to ask: rather than "should I code?", an EM should ask: "what does my team need from me right now, and does coding serve that or get in the way?"

Security

React2Shell — A big security flaw in React and Next.js, with a severity-10 CVE https://x.com/rauchg/status/1997362942929440937?s=20 and also https://react2shell.com/
- "React2Shell" is the name given to a maximum-criticality security vulnerability (score 10.0/10.0), tracked as CVE-2025-55182.
- Affected systems: the flaw concerns applications using React Server Components (RSC) on the server side, and in particular unpatched versions of the Next.js framework.
- Main risk: the highest possible: remote code execution (RCE). An attacker can send a malicious request to run arbitrary commands on the server, potentially gaining full control of it.
- Technical cause: the vulnerability lies in the "React Flight" protocol (used for client-server communication). It stems from the omission of fundamental safety checks (hasOwnProperty), allowing malicious user input to fool the server.
- How the exploit works: the attack sends a payload that abuses JavaScript's dynamic nature to: pass a malicious object off as an internal React object; force React to treat that object as an asynchronous operation (a Promise); and finally reach the JavaScript Function constructor to execute arbitrary code.
- Imperative action: the only reliable fix is to update React and Next.js dependencies to the patched versions immediately. Do not wait.
- Secondary measures: while firewalls can help block known forms of the attack, they are considered insufficient and are no substitute for updating the packages.
- Discovery: the flaw was found by security researcher Lachlan Davidson, who disclosed it responsibly so patches could be prepared.

Law, society and organizations

Google lets your employer read all your work text messages https://www.generation-nt.com/actualites/google-android-rcs-messages-surveillance-employeur-2067012
- New surveillance feature: Google has rolled out a feature called "Android RCS Archival" that lets employers intercept, read and archive all RCS (and SMS) messages sent from company-managed Android work phones.
- Bypassing encryption: although RCS messages are end-to-end encrypted in transit, this new API lets compliance software (installed by the employer) access the messages once they are decrypted on the device. Encryption is therefore ineffective against this kind of surveillance.
- A response to legal requirements: the measure addresses regulatory obligations, notably in the financial sector, where companies are legally required to keep an archive of all business communications for compliance purposes.
- Impact on employees: an employee using an Android phone provided and managed by their company may have their communications monitored. Google notes, however, that a clear, visible notification will inform the user when archiving is active.
- Personal phones not affected: this only applies to fully employer-managed "Android Enterprise" devices. Employees' personal phones are not affected.
For Christmas, donate to JUnit https://steady.page/en/junit/about
- JUnit is essential to Java: it is the oldest and most widely used testing framework among Java developers. Its goal is to provide a solid, up-to-date foundation for all kinds of developer-side testing on the JVM (Java Virtual Machine).
- A volunteer-maintained project: JUnit is developed and maintained by a team of dedicated volunteers in their free time (weekends, evenings).
- A call for financial support: the page asks users (developers, companies) for donations to help the team keep up the pace of development. Financial support is not mandatory, but it would let the maintainers devote more time to the project.
- What the funds are for: donations would mainly fund in-person meetups for the core team, letting them work together physically for a few days to design and code more effectively.
- No special treatment: it is clearly stated that sponsoring grants no say over the project roadmap. You cannot "buy" new features or priority bug fixes. The project remains open and collaborative on GitHub.
- Donor recognition: as a thank-you, donor names (and logos for companies) can be displayed on the official JUnit website.
Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
- January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
- January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
- January 28, 2026: Software Heritage Symposium - Paris (France)
- January 29-31, 2026: Epitech Summit 2026 - Paris - Paris (France)
- February 2-5, 2026: Epitech Summit 2026 - Moulins - Moulins (France)
- February 2-6, 2026: Web Days Convention - Aix-en-Provence (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 3-4, 2026: Epitech Summit 2026 - Lille - Lille (France)
- February 3-4, 2026: Epitech Summit 2026 - Mulhouse - Mulhouse (France)
- February 3-4, 2026: Epitech Summit 2026 - Nancy - Nancy (France)
- February 3-4, 2026: Epitech Summit 2026 - Nantes - Nantes (France)
- February 3-4, 2026: Epitech Summit 2026 - Marseille - Marseille (France)
- February 3-4, 2026: Epitech Summit 2026 - Rennes - Rennes (France)
- February 3-4, 2026: Epitech Summit 2026 - Montpellier - Montpellier (France)
- February 3-4, 2026: Epitech Summit 2026 - Strasbourg - Strasbourg (France)
- February 3-4, 2026: Epitech Summit 2026 - Toulouse - Toulouse (France)
- February 4-5, 2026: Epitech Summit 2026 - Bordeaux - Bordeaux (France)
- February 4-5, 2026: Epitech Summit 2026 - Lyon - Lyon (France)
- February 4-6, 2026: Epitech Summit 2026 - Nice - Nice (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- February 19, 2026: ObservabilityCON on the Road - Paris (France)
- March 18-19, 2026: Agile Niort 2026 - Niort (France)
- March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
- March 27-29, 2026: Shift - Nantes (France)
- March 31, 2026: ParisTestConf - Paris (France)
- April 16-17, 2026: MiXiT 2026 - Lyon (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- May 6-7, 2026: Devoxx UK 2026 - London (UK)
- May 22, 2026: AFUP Day 2026 Lille - Lille (France)
- May 22, 2026: AFUP Day 2026 Paris - Paris (France)
- May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
- May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
- June 5, 2026: TechReady - Nantes (France)
- June 11-12, 2026: DevQuest Niort - Niort (France)
- June 11-12, 2026: DevLille 2026 - Lille (France)
- June 17-19, 2026: Devoxx Poland - Krakow (Poland)
- July 2-3, 2026: Sunny Tech - Montpellier (France)
- August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
- September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
- September 17-18, 2026: API Platform Conference 2026 - Lille (France)
- October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Record a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all info at https://lescastcodeurs.com/
Welcome back to the Ultimate Guide to Partnering® Podcast. AI agents are your next customers. Subscribe to our Newsletter: https://theultimatepartner.com/ebook-subscribe/ Check Out UPX:https://theultimatepartner.com/experience/ Jen Odess, Group Vice President of Partner Excellence at ServiceNow, joins Vince Menzione to discuss the company’s incredible transformation from an IT ticketing solution to a leading AI-native platform for business transformation. Jen dives deep into how ServiceNow has strategically invested in and infused AI into its unified platform over the last decade, enabling over a billion workflows daily. She also outlines the critical role of the partner ecosystem, which executes 87% of all implementations, and reveals the company’s strategic initiatives, including its commitment to the hyperscaler marketplaces, the goal to hit half a billion dollars in annual contract value for its Now Assist AI product, and the push for partners to adopt an ‘AI-native’ methodology to capitalize on the fact that customers still want over 70% of AI buying to be done through partners. Key Takeaways ServiceNow is an ‘AI-native’ company, having invested in and built AI directly into its unified platform for over a decade. The company’s core value today is in its unified AI platform, single data model, and leadership in workflows that connect the entire enterprise. ServiceNow will hit $500 million in annual contract value for its Now Assist AI products by the end of 2025, making it the fastest-growing product in company history. An astonishing 87% of all ServiceNow implementations are done by its global partner ecosystem, highlighting their crucial role. The company is leveraging the half-trillion-dollar opportunity of durable cloud budgets by driving marketplace transactions and helping customers burn down cloud commits using ServiceNow solutions. 
To win in the AI era, partners must adopt AI internally, co-innovate on the platform, and strategically differentiate themselves to rank higher in the forthcoming agentic matching system. Key Tags: ServiceNow, AI-native platform, Now Assist, Jen Odess, partner excellence, workflow leader, AI platform for business transformation, hyperscalers, Microsoft Azure, Google Cloud, AWS, marketplace transactions, cloud commits, AIDA model, agentic matching, F-Pattern, Z-Pattern, group vice president, MSP, GSI, co-innovation, autonomous implementation, technical constraints, visual hierarchy, UX, UI, responsive design. Ultimate Partner is the independent community for technology leaders navigating the tectonic shifts in cloud, AI, marketplaces, and co-selling. Through live events, UPX membership, advisory, and the Ultimate Guide to Partnering® podcast, we help organizations align with hyperscalers, accelerate growth, and achieve their greatest results through successful partnering. Transcript: Jen Odess Audio Podcast [00:00:00] Jen Odess: The AI platform for business transformation, and I love to say to people, it sounds like a handful of cliche words that just got stacked together. The AI platform for business transformation. Yeah. We all know these words, so many companies use ’em, but it is such deliberate language and I love to explain why. [00:00:20] Vince Menzione: Welcome to, or welcome back to The Ultimate Guide to Partnering. I’m Vince Menzione, your host, and my mission is to help leaders like you achieve your greatest results through successful partnering. Today we have a special leader, Jen Odess is the GVP for Partner Excellence at ServiceNow. And joins me here in the studio in Boca Raton. [00:00:40] Vince Menzione: Jen, welcome to the podcast. Thanks, Vince. It’s so great to be here. I am so thrilled to welcome you. To Boca Raton, Florida. 
Our podcast home look at this amazing background we have Here is this, and this is where we host our ultimate partner Winter retreat. Actually, in February, we’re gonna give that a plug. [00:00:58] Vince Menzione: Okay. I’d love to have you come back. I’d love to have an invite. And you flew in this morning from Washington DC [00:01:04] Jen Odess: I did. It was 20 degrees when I left my house this morning and this backdrop. Is definitely giving me, island South Florida like vibes. It’s fabulous. [00:01:13] Vince Menzione: And we’re gonna talk about ServiceNow. [00:01:14] Vince Menzione: And you’re also opening an office down here? We [00:01:17] Jen Odess: are [00:01:17] Vince Menzione: in West Palm Beach. Not too far from where we are. Yes. Later 2026. Yeah. I love that. And then so we’ll work on the recruiting year, but let’s dive in. Okay. So thrilled to have ServiceNow and to have you in the room. This has been an incredible time for your organization. [00:01:31] Vince Menzione: I have been watching, obviously I work with Microsoft. We’ve had Google. In the studio, Amazon onboard as well. And other than those three organizations, I can’t think of any other legacy organization that has embraced AI more succinctly than ServiceNow. And I thought we’d start there, but I really wanna spend some time getting to know you and getting to know your role, your mission, and your journey to this incredible. [00:01:57] Vince Menzione: Leadership role as a global vice president. We’ll talk about Or [00:02:01] Jen Odess: group. Group Vice president. I know it doesn’t roll off the tongue. I get it. A group vice president doesn’t roll. [00:02:05] Vince Menzione: G-V-P-G-V-P doesn’t roll off the time. And in some organizations it is global. It is in other organizations, it’s group. So let’s, you’re not [00:02:12] Jen Odess: the first to say global vice president. [00:02:14] Jen Odess: Okay. I’ll take either way. It’s fine. [00:02:15] Vince Menzione: Yeah. Yeah. 
And might be a promotion. Let’s talk. Let’s talk about that. Let’s talk about you and your career journey and your mission. [00:02:22] Jen Odess: Yeah, so I’ve been at ServiceNow for five years. In fact, January will be like the five year anniversary and then it will be the beginning of my sixth year. [00:02:31] Jen Odess: Amazing. And I actually got hired originally to build out the initial partner enablement function. So it didn’t really exist five years ago. There was certainly enablement that happened to Sure. All individuals that were. Using, consuming, buying ServiceNow, working with ServiceNow. But the partner enablement function from pre to post-sale, that whole life cycle didn’t exist yet. [00:02:54] Jen Odess: So that was my initial job. I got hired to run partner enablement and it before. And how big [00:02:59] Vince Menzione: was your partner organization at that point? It must have been pretty small. [00:03:01] Jen Odess: It was actually not as small as you would think. Gosh, that’s a great question. You’re challenging my memory from five years ago. [00:03:08] Jen Odess: I know that we’re over 2,500 partners today and we add hundreds every year, so it had to have been in the low one thousands. Wow. Is where we were five years ago. But the maturity of the ecosystem is grossly larger today than it was then. I can imagine. So back then there was less than 30,000 individuals that were skilled on ServiceNow to sell or solution or deliver. [00:03:34] Jen Odess: Today there’s almost a hundred thousand. Wow. So yeah that’s like the maturity in the capability within the ecosystem. But before I start on my ServiceNow and my group vice president. Which is a great role, by the way. Group Vice President. Yeah. Partner Excellence group. I’m very proud of it. [00:03:49] Jen Odess: But but let me tell you what brought me here, please. So I actually came from a partner, but not in the ServiceNow ecosystem. Okay. 
I won’t name the partner, but let’s just say it’s a competitor, a competitive ecosystem. And I worked for a services shop that today I would refer to as multinational. [00:04:11] Jen Odess: Kind of a boutique darling, but with over 1,500 consultants, so Okay. A behemoth as well? Yeah. Privately held. And we were a force to be reckoned with, and it was really fun. I held so many roles. I was a customer success manager. I led the data science practice at one point. I ran global alliances and partnerships. [00:04:35] Jen Odess: At one point I was the chief of staff to the CEO at the time that company was acquired. Big global si. And and then at one point I even spun off for the big global SI and helped run a culture initiative to transform co corporate culture. Wow. Very inside the whole organization. Wow. That is very, yeah. [00:04:54] Jen Odess: Really interesting set of roles. And the whole reason I came to ServiceNow is by the time I was concluding that journey in that ecosystem on the services side, I felt like. I didn’t fully understand what it meant to be on the software product side. And I often felt like I approached friction or moments of frustration and heartache with resentment for the software company. [00:05:20] Jen Odess: Sure. Or maybe just a lack of empathy for what they must be going through as well. It always felt like I was on the kind of [00:05:26] Vince Menzione: negative you were on the other side of the table. Totally. [00:05:27] Jen Odess: Yeah. And, or maybe like the redheaded stepchild kind of a concept as a partner. And so I sought out to. Learn more, which is probably a big piece of my journey is just constant curiosity. [00:05:38] Jen Odess: Nice. And I thought I think the thing I’m missing is seeing what it means firsthand to be on the software product side. And that was what led me to a career at ServiceNow. Five years strong. Yeah. So [00:05:50] Vince Menzione: talk about partner experience for those who don’t know what that means. 
[00:05:53] Jen Odess: Yeah. Today my role is partner excellence, but it used to be partner experience. [00:05:58] Jen Odess: Okay. And people say both things interchangeably, but they actually mean two very different things. [00:06:04] Vince Menzione: Yeah, I would say so. [00:06:05] Jen Odess: And we deliberately changed the title about a year ago. So today, partner excellence is about really ensuring that we build a vibrant AI-led ecosystem. And that’s across the whole life cycle of the partner, from the day they choose to be a partner and onboard to the day they’re just thriving and growing like crazy, and then across the whole life cycle of the customer, pre- to post-sale. So we are almost like the underpinning and the infrastructure. Someone once said we’re like the insurance policy of all global partnerships and channels. That’s how we operate across global partnerships and channels at ServiceNow. [00:06:42] Vince Menzione: And you have a very intimate relationship with those partners. We’re gonna dive in on that as well. Yes. But let’s talk about this time like no other. I talk about tectonic shifts at all of our events. People that listen to our podcasts know we talk about the acceleration of transformation, and it’s happening so fast. [00:06:58] Vince Menzione: It was happening fast even during COVID. But then — I’ll call it the November 2022 time period, when ChatGPT launched. Oh yeah. And that really changed the world in many respects, right? Yeah. Microsoft had already leaned in with ChatGPT. Google — we talked to Google about this. [00:07:17] Vince Menzione: Even having them in the room, it was like they were caught flat-footed in a way; they had a lot of the technology and they didn’t lean in. But it feels like ServiceNow was one of the first, certainly on the ISV side of the house — and I refer to the term ISV loosely, because hyperscalers are ISVs as well.
[00:07:34] Vince Menzione: They were early to lean in, and have leaned in in such a way, from a business application perspective, that I believe we haven’t seen elsewhere — embracing and infusing AI into your platform. I was hoping we could dive in a little on ServiceNow from a legacy perspective — what the organization was and is today. [00:07:56] Vince Menzione: And then also this infusion of AI into the platform, if you don’t mind. [00:07:59] Jen Odess: I love this topic. Okay. And I feel like it’s such a privilege to talk about ServiceNow on this topic, because we really are a leader in the category. I’ll rewind back to over 20 years ago, when the company was founded. [00:08:11] Jen Odess: Today, fast forward, we are so much more than an IT ticketing company. [00:08:16] Vince Menzione: But that was the legacy. That’s how I knew ServiceNow 20 years ago. [00:08:19] Jen Odess: And what a beautiful legacy. Yeah. But we have expanded immensely beyond that. And that’s the beautiful story to tell customers. That’s so fun. [00:08:28] Jen Odess: So 20 years ago, that was where we started. And today, do you know that over a billion workflows are put to work every single day for our customers? [00:08:38] Vince Menzione: A billion workflows? Over a billion workflows. That’s crazy. [00:08:40] Jen Odess: And 87% of all implementations for ServiceNow were done by partnerships and channels. That’s fantastic. So you think about those billion-plus workflows daily, all because of our partner ecosystem. This is my small plug; I’m just very proud. [00:08:56] Vince Menzione: Did you hear that? 87%. [00:08:57] Jen Odess: Amazing. And so that’s why we’re a leader in the category. We are a leader in workflows, categorically. [00:09:05] Jen Odess: But then, over a decade ago, we started investing in AI.
We started building it right into our platform, and this becomes the next notch on our belt: we are a unified platform. Nothing is bolted on, nothing is just API’d in. Yeah, it is a unified platform. So all of that AI, for the past decade, we’ve been building right into our platform — which is now what we’re calling it: the AI platform. [00:09:34] Vince Menzione: And I would say that unless you were a startup starting from scratch today and building on an LLM, I don’t think any other organization can actually state that. [00:09:45] Jen Odess: That’s actually why we call ourselves AI native. [00:09:47] Jen Odess: Yeah, for that exact reason. And that’s who we’re competing with a lot these days: the truly AI-native startups. They didn’t have the 20 years previously that we had, but that’s what makes us so unique in the situation — that unified AI platform, a single data model that can connect to anything. [00:10:07] Jen Odess: And then the workflow leader. And when you put all those things together — AI plus data plus workflows — that’s where the magic happens. Yeah. Across the enterprise. It’s pretty cool. [00:10:17] Vince Menzione: That is very cool. And you start thinking about — let’s talk about agents as an example for a second. [00:10:23] Vince Menzione: What is this bolt-on? We could use the term copilot, we could use agentic AI, but they are generally bolted onto an existing application today. So take us through the 10 years and how it has become a portion, or a significant portion, of ServiceNow. [00:10:41] Jen Odess: Say the question a little bit more. [00:10:43] Jen Odess: Like when you say bolted on — which examples have bolted on? [00:10:47] Vince Menzione: So what we see today is the hyperscalers coming out with their own solution sets, right?
They’re offering it up to their ecosystem to infuse into their product and portfolio. To me, those look bolted on in many respects, unless it’s an AI-native organization, a startup organization. [00:11:07] Vince Menzione: They’re mostly taking and re-engineering, or bolting onto, their existing solutions. [00:11:12] Jen Odess: I follow. Yeah. Thank you for giving me a little more context. So I call this our “any” problem — it’s one of the best problems to have. We can connect into anything: any cloud, any AI, any platform, any system, any data, any workflow, any hyperscaler. And that’s the part that makes it so incredible. [00:11:32] Jen Odess: So your word is bolt-on, and I use the word any — the “any” problem. Yeah. We’ve got this beautiful stack visual, just one on top of the other: any, any, any. And no one else can really say that. [00:11:45] Vince Menzione: I gotta see that visual. Yeah. So talk about this a little bit more. So you’re uniquely positioned. [00:11:52] Vince Menzione: Let’s talk about how you position. You talked about being AI native. What does that imply, and what does that mean in terms of the evolution of the platform — from ticketing to workflows to business applications? What are the types of applications, markets, industries that you’re starting to see? [00:12:08] Jen Odess: So I’ll actually answer this by taking you on a small marketing or positioning journey. There was a time when our tagline was The World Works with ServiceNow. There was a time when it was, we put AI to work for people. And today — I think it was around Knowledge 2025 that this came out — [00:12:28] Jen Odess: it’s the AI platform for business transformation. And I love to say to people, it sounds like a handful of cliché words that just got stacked together. The AI platform for business transformation. Yeah.
We all know these words, so many companies use ’em, but it is such deliberate language, and I love to explain why. [00:12:46] Jen Odess: So first, “the AI platform” is calling out that we are an AI-native platform, a unified platform. It’s a chance to say all that goodness I already shared with you. Yeah. And “business transformation” is actually telling the story of no longer being a point solution, or an individual product that does X. [00:13:06] Jen Odess: It’s about saying the ServiceNow platform can go north to south and east to west across your entire enterprise. Okay. North to south: up and down the entire tech stack — any. And then east to west: it can cut across the enterprise, the C-suite, the buying centers, all into one unified AI platform with one data model. [00:13:26] Jen Odess: I love it. And so I love that the AI platform for business transformation actually has so much purpose. [00:13:32] Vince Menzione: It does. So you’re going across the stack, all the way from the bottom layer up to the top, to the UX, the UI. And then you’re going across the organization, right? You’re going across the C-suite, across all the business functions of an organization. [00:13:46] Vince Menzione: Correct. And so the workflows are going across each of those business functions? [00:13:49] Jen Odess: Correct. And then our AI Control Tower is sitting at the very top, governing over all of it. [00:13:53] Vince Menzione: I love the control tower. [00:13:54] Jen Odess: I know — the governance, security, risk protocols, managing all the agents’ interoperability. Yeah. [00:14:01] Vince Menzione: And then data at the very bottom, right? [00:14:03] Vince Menzione: Controlling all those elements, and the governance of the data, the cleanliness of the data, and so on. Yeah. That’s incredible. We could probably talk about business applications.
I know one, in fact — I’ve had a person sit in this chair, your chair, from, we’ll call it a large GSI, a very significant GSI, one of the top five. [00:14:21] Vince Menzione: And they took ServiceNow and applied it to their business partnering function. You probably don’t know about this one, but it’s an example of taking it and applying it across all the workflows, across all the geographies of the organization, and taking a lot of the process that was all done manually — [00:14:40] Vince Menzione: business processes that were all stovepiped — and removing the stovepipes, making for a fluid organizational flow. [00:14:47] Jen Odess: And I’ll bet you the end user didn’t even realize ServiceNow was the backend. Those are some of the greatest examples, actually. [00:14:53] Vince Menzione: Yeah. So Jen, we work with all the hyperscalers. [00:14:56] Vince Menzione: We have a very strong relationship with Microsoft — it goes back many years, back to my days at Microsoft — and we’ve had Google in the room. We have AWS now as well. We bring them all together because we believe that partners need to work with all three. And I know that you have had an interesting transformation at ServiceNow around the hyperscalers. [00:15:16] Vince Menzione: I was hoping you could dive in a little deeper with us. [00:15:19] Jen Odess: Yeah. We are so proud of our relationships with the hyperscalers — the same three: Microsoft Azure, Google Cloud, and AWS. And really, it’s a strategic 360 partnership, and our goal is to drive marketplace transactions. [00:15:34] Jen Odess: So ServiceNow selling in all of their marketplaces, and then burn-down of our customers’ cloud commits. I love it. It’s really a beautiful story for our customers, for the hyperscalers, and for ServiceNow. It’s a brand new announcement from late 2025. Love it. And we’re really excited about it.
[00:15:51] Vince Menzione: Yeah. And we get all of the marketplace leaders in the room, so we’ve worked with all of those people. And one of the key points about this is there is over half a trillion dollars in durable cloud budgets that customers have [00:16:08] Vince Menzione: already committed to. I know. So that TAM — half a trillion dollars — is available for customers to burn down and utilize your solutions, and professional services with partners as well, in terms of driving a complete solution. [00:16:21] Jen Odess: That’s exactly the motion we’re pushing: go and leverage those cloud commits to get on ServiceNow, and in some cases maybe even take out other products to go with ServiceNow, and actually end up funding the transition to ServiceNow. Yeah. [00:16:37] Vince Menzione: So you serve thousands of customers today — thousands of customers. [00:16:42] Vince Menzione: I can’t even fathom the exact number. But you have this partner ecosystem that you described, and their reach is even more incredible — like hundreds of thousands. Yeah. So tell us a little bit more about how you think about that, and then how do you drive the partner ecosystem in the right way, to drive this partner excellence that you described? [00:17:02] Jen Odess: Yeah, that’s a great question. So yeah, thousands of ServiceNow customers, and we’re barely scratching the surface in comparison to our partners’ customers. So we have over 2,500 partners — Wow — in our ecosystem. And today they cut across what I would call five routes to market that partners can take with ServiceNow. [00:17:21] Jen Odess: Okay. The first is consulting and implementation — this will be your classic consulting shop or GSI approach. The second is resell, just like it sounds. Yep. [00:17:30] Vince Menzione: Transactional. [00:17:31] Jen Odess: Yep. The third is managed service provider. [00:17:33] Vince Menzione: Okay.
[00:17:34] Jen Odess: The fourth is what we call build, which is the ISV, strategic tech partner realm. And then the fifth is hyperscaler. [00:17:43] Jen Odess: Those are the five routes to market. So partners can choose to be in one, two, or all of them — it doesn’t matter. It’s whichever ones fit the kind of business they want to go drive. Nice. Wherever their expertise lies. And then we’ve got partners that show up globally, partners that show up multinational, partners that show up regionally, and then partners that show up locally, in country. And that’s it. [00:18:06] Jen Odess: And we really want a diverse set of partners capable of delivering wherever our customers are. So it’s important that we have that dynamic ecosystem. We’re actually trying hard to balance this — you would’ve heard it from many of your other partners — this direct-versus-indirect [00:18:24] Jen Odess: motion. Yes. For anyone listening that doesn’t know the difference: direct is ServiceNow selling direct to a customer — there might be a partner involved, influencing, that will likely implement, but ServiceNow is really driving the sale — versus indirect, where the whole thing routes through the partner. [00:18:39] Jen Odess: Right? Which is your classic reseller or managed service provider, and often an ISV. And you know that balance is never gonna be perfect, ’cause we’re not gonna commit to going all direct or all indirect. We’re gonna continue to sit in this space where we’re trying to find a healthy balance. [00:18:56] Jen Odess: So we spend a lot of our time trying to figure out how you set all those parties up for success. Yeah. The parties are the ServiceNow field sellers; then you’ve got the partners and channels, so the ecosystem; and then you’ve got the people in global partnerships and channels.
So my broader organization — we’re all trying to figure out how to work harmoniously together, and a lot of it is my job to get us there. [00:19:19] Jen Odess: And so we use lots of things like incentives and benefits, and we will put in place gated entry — really strategic gated entry. [00:19:29] Vince Menzione: What does gated entry mean? [00:19:30] Jen Odess: Yeah. What I mean is, if you want to have a chance at being matched with a customer for a very specific deal — really, to be one of three that get matched, [00:19:41] Jen Odess: ’cause you can never match one-to-one; it has to be three or more. Okay. We have good compliance rules in place. Yeah. But in order to even surface to the top of the list to be matched, there’s a gated entry, which is: you’ve gotta have validated practices. Okay. Which is the various ways, as you described, that we quantify and qualify the partner’s capabilities. [00:20:00] Vince Menzione: Yeah. So you have to meet these qualifications. Yes. And you could be one of three to enter and potentially be matched — considered — for this deal? [00:20:08] Jen Odess: Yes, that’s exactly right. So we use various things like that. And then we try to carve out what I would call dance-card space — reseller in commercial, for example — by geo, by region, by country, to help the partners really know exactly where they can unleash, versus, hey, this is the process and the rules of engagement to go and sell alongside the direct sales organization. [00:20:33] Vince Menzione: And you’re gonna have multiple partners in the same opportunities — [00:20:37] Vince Menzione: absolutely — not necessarily competing with each other. There’s three competing with each other, but you’re also gonna have other partners that provide different capabilities. You might have some that are just transactional — those are gonna be those channel or reseller partners.
[00:20:52] Vince Menzione: You might have an MSP that’s actually delivering, or at least providing, some type of managed service on top of the stack — like supporting the customer. Yeah. And then you might have an SI or GSI, an integration partner, that’s also doing the consulting work around getting the solution to meet the customer’s requirements. [00:21:12] Vince Menzione: Would you say so? [00:21:13] Jen Odess: That’s exactly right. Yeah. And actually, in the AI era, we’re seeing more of it than ever. Even on the smaller deals — maybe not the GSIs on the smaller deals — we’re seeing multiple partners come in to serve up their specific expertise, which is actually a best practice. [00:21:33] Vince Menzione: That’s terrific. [00:21:33] Jen Odess: If you’ve got an area that’s a blind spot and you’re a partner, but that’s something your customer is buying from you, there’s no harm in saying, let’s bring in an expert in that category to deliver that piece of the business. That’s right. And we’ll maybe shadow and watch alongside. [00:21:46] Jen Odess: So we’re seeing more and more of it. And I actually think the world of partnerships and ecosystems — if I go back to my previous ecosystem as well — has become so much more communal than ever before. Yes. This idea that we can share and be more open, and maybe even commiserate: gosh, I can’t believe we have the same frustrations. [00:22:09] Jen Odess: Wow, that’s amazing. And you’re in this country and I’m in this country. And so we’re seeing more and more coming together on deals, which I really respect a lot. So one of the new facts we’ve just learned, actually, Vince, is that of all the AI buying that customers are doing out there, they actually still want over 70% of it to be done by partners. [00:22:32] Vince Menzione: Yes. [00:22:33] Jen Odess: So even though it looks like it could be set up easy, configured easy, plug and play —
to get real ROI, you still need a partner with expertise in that industry, or that domain, or in that location or in that language, to come and bring the value to life. And we will certainly help accelerate time to value with things that ServiceNow will do for our partners. [00:22:56] Jen Odess: But if over 70% is gonna go to partners and AI is so new, wouldn’t you want more than one partner on a deal sometimes, at least while we’re all learning? I think we can keep ebbing and flowing on this. [00:23:07] Vince Menzione: I don’t know if you know Jay McBain — we’ve had him in the room here, and he’s an analyst that does a lot of work around this topic. [00:23:14] Vince Menzione: And we talk about the seven seats at the table. Because, again, first of all, you need to have the trusted organizations that you work with. And in the world of AI, with all of the tectonic shifts, all the constant changing that’s going on right now, I need to make sure that I have the right [00:23:31] Vince Menzione: people by my side that I can trust, who can help me deliver what I need to deliver, ’cause it might have changed from six months ago. And the technology is changing — everything is changing so rapidly right now. So again, having all those right people. I want to pick up on something, ’cause we talked a little bit about MSPs, and they’ve become a favorite topic of ours. [00:23:52] Vince Menzione: I have become acutely aware of the MSP community recently. I kind of looked at them as, well, they’re little small partners — but you’ve suggested this as well: they have regional expertise, they have expertise in a specific area, and can be trusted, and maybe you’re integrating multiple solution sets for a customer. [00:24:11] Vince Menzione: But we’ve seen this MSP community become very vibrant lately, and I feel like they woke up to technology and to AI in such a big way.
Can you comment on that? [00:24:20] Jen Odess: So we feel and see the same thing. I’ve always valued what managed service providers bring to the table. It’s that [00:24:26] Jen Odess: classic question: are you a transformation shop, or are you the tail end, the run-business shop? And so many partners are like, we’re both, and I wanna be like, but are you? But now I feel like we’re finally seeing that the run business is so fruitful. AI is innovating all the time. [00:24:46] Jen Odess: We are innovating as an AI platform all the time. What used to be every-six-months family releases of our software became quarterly, and now we’re practically seeing releases of new innovation every six to eight weeks. So why wouldn’t you want a managed service provider paying close attention to your whole instance on ServiceNow, taking into account all the latest innovation and building it into your existing instance, and then looking out for what new things you should be bringing in? [00:25:20] Jen Odess: So that’s the beauty of it — partnerships observing, and then suggesting how to keep doing more and better, versus always jumping straight back to complete redesign and transformation. Yeah, and that’s one of the things I like about the MSPs in this space. [00:25:36] Vince Menzione: So let’s broaden out from this part of the conversation, ’cause you’re giving specific guidance to the MSPs, but let’s think about this whole partner community. [00:25:43] Vince Menzione: You’ve seen this transformation coming over to ServiceNow, and even within ServiceNow these last five years. How do these organizations need to think differently, and how do they need to structure their services in this new agentic world? [00:25:58] Jen Odess: Great question. There are really four things that I think they have to be thoughtful of. [00:26:02] Jen Odess: The first is maybe the most obvious: they have to adopt AI in their own ways of doing work — methodology,
delivery, whatever it is. It’s not about taking people out of jobs; it’s about doing the job faster, right? It’s about getting the customer to value faster. So that adoption of AI will make or break some partners. [00:26:24] Jen Odess: And our goal is that every partner comes out the other side of this AI journey thriving and surviving. So we’re really pushing this agenda. And maybe later I can talk to you a little bit more about this autonomous implementation concept. [00:26:37] Vince Menzione: Please — that will resonate. So you’re saying they need to — we used to use the term eat your own dog food; [00:26:41] Vince Menzione: now it’s drink your own champagne. Yeah. But they need to adopt it internally as well. [00:26:46] Jen Odess: Yeah. And whether they’re using — I hope they’re using — ServiceNow as, like, client zero to do some of that adoption, there are lots of other great AI tools that will make your job and your day-to-day life and the execution of that job easier. [00:26:59] Jen Odess: So we want them adopting all of that. The second is, we really need to see partners innovating on the ServiceNow platform. Yeah. Whether that’s building AI agents that go into the ServiceNow store, whether it’s building a really fantastic solution that we wanna jointly go to market with, or maybe it’s one of those embedded solutions you were commenting on, where the end user doesn’t even know the backend — like a tax and audit solution where [00:27:29] Jen Odess: the backend is all ServiceNow. Yeah. But that partner is going to market and selling it to all their customers. Exactly. So I think this co-innovation is gonna be a place where we will really win in market. The third is, if a partner wants to stand out right now, they have to differentiate on paper too. [00:27:47] Jen Odess: What does that mean? So, there are 2,500 partners.
It’s not like we walk around and just say, you should talk to this partner, or, here’s my secret list. We don’t do that — it’s not good business and it’s not compliant. So we have algorithms that take all the quantitative and qualitative data on our partners — they know all the data points, ’cause it’s part of the partner program [00:28:10] Jen Odess: that they adhere to — and then rank them on status. And all those data points are what I’m referring to as “on paper.” You’ve gotta be differentiated. So whether you wanna be great at one thing or great across the whole thing, think about how all of those quantitative and qualitative data points are making you stand out, because that’s where those matches that I was referring to [00:28:35] Jen Odess: — yes — are gonna come to life. And it’s skills, it’s capabilities, it’s deployments — so proof points and deployments, customer success stories, CSAT, all the things. [00:28:47] Vince Menzione: So those are all the qualifiers? [00:28:49] Jen Odess: And more, but those are the types of qualifications, yeah. [00:28:51] Vince Menzione: And then does your sales organization do a match against that, based on the requirements of the customer they’re working with and who they work with and co-sell with? [00:29:00] Jen Odess: I feel like you just lobbed me the greatest question. I didn’t even know you were gonna ask it, but I’m so glad you did. So today, there is something called a partner finder, which is nice, but it’s a little bit old school in a world of AI. Yeah. You go to servicenow.com, you click Partners in the top navigation, and then it says find a partner, and you can literally type in the products you’re buying, the country you’re headquartered out of — [00:29:26] Jen Odess: whatever thing you’re looking for.
And it will start to filter, based on all those data points, to the right partners, and you can actually click right there to be connected to a partner. So, lead generation. Okay, interesting. But where we’re going is agentic matching, right in our CRM, for the field. Oh. So those data points are gonna matter even more, and that’s where the gated entry comes in [00:29:48] Jen Odess: — I say gated entry, which is probably too extreme, right? Really, if you wanna surface toward the top, there are gated parameters to try to surface to the top. But those data points will feed the algorithm, and it will agentically match, right in our CRM for the field: who are the best-suited partners? [00:30:09] Jen Odess: Would you like to talk to them? [00:30:10] Vince Menzione: Okay. And so is it partner facing? Is it sales-team facing? [00:30:14] Jen Odess: Right now — when it goes live, it will be sales-team facing. Okay. But we have greater ambition for what partners can do with it. Yeah. Not just in the indirect motion, but also what partners may be able to do with it to interface with our field. [00:30:31] Vince Menzione: Yeah — the collaboration [00:30:33] Jen Odess: opportunity. Which is always a friction point that we’re working on. [00:30:36] Vince Menzione: Always, because it’s very manual. It’s people intensive. Yeah. Partner development managers sitting on both sides of the equation, and the interface between the sales organization and a partner organization is not always the easiest. So, right — automate quite a bit of that. [00:30:49] Jen Odess: My boss is obsessed with the easy button, which I know is a phrase many of us in the US know from, I think it’s Office Depot. All these ways in which we can have easy-button moments for the partner ecosystem is what we’re trying to focus on. [00:31:01] Jen Odess: I love the easy button. [00:31:02] Vince Menzione: Yeah. And I love your boss too. Yeah, he’s fabulous. Fabulous.
So Michael and I go back many years. [00:31:08] Jen Odess: You must have had paths crossing on numerous occasions. [00:31:12] Vince Menzione: Yeah, we worked together at Microsoft. I’m going to hijack the session for a second here. [00:31:16] Vince Menzione: When I first came to Microsoft, he was leading a segment of the business, and he invited me to come to his event and interviewed me on stage at his event. [00:31:26] Jen Odess: No way. [00:31:26] Vince Menzione: And we got to know each other. So he was terrific — what a great find for ServiceNow. [00:31:32] Jen Odess: He really has been a fantastic addition [00:31:34] Vince Menzione: to the global partnerships and channels team. And Michael, we have to have you on the podcast. Yes. Or come down here to the studio at some point too, with Jen and me. That’d be great. So this is terrific. It’s an incredible time. [00:31:44] Vince Menzione: It’s going so fast. 2022 seems like it was five — it feels like it was almost 10 years ago now. It’s not that we just started talking about it then — you were implementing AI 10 years ago — but it wasn’t getting the attention that it’s getting today. And it really wasn’t until that moment that it really started to kick off in a way that — yeah, it became pervasive overnight, I would say. But now we’re starting 2026, we’re at this precipice of time, and it’s continuing. I don’t even know what 2030 is gonna look like, right? So, I’m a partner: [00:32:16] Vince Menzione: what are the one, two, or three things that I need to do now to win over and work with ServiceNow? [00:32:23] Jen Odess: One, two, or three things? I’ll tell you the first thing. So ServiceNow will end up hitting 500 million in annual contract value in Now Assist, which is our AI product, by the end of 2025 — the fastest growing product in all of ServiceNow history.
[00:32:37] Jen Odess: That’s one product — there are lots of SKUs, but it’s our AI product, because of all the various ways. [00:32:45] Vince Menzione: So half a billion dollars. [00:32:46] Jen Odess: Half a billion by the end of 2025. And someone’s gonna have to keep me honest here, but if memory serves me right, the first SKUs didn’t even launch until 2024. [00:32:54] Jen Odess: So we’re talking about — wow — in a year. It’s fast. Over 1,700 customers are live with our Now Assist products, again in a matter of, let’s call it a little over a year. So I think the first thing a partner needs to do is get on this AI bandwagon: they’ve gotta be selling and positioning AI use cases to their customers, because that’s the only way they’re gonna get [00:33:20] Jen Odess: experience and an opportunity to see what it feels like to deliver. So we have to do that. And I think you could sell a big use case — that big north-south, east-west thing we talked about, you could do that whole thing. Brilliant. But you could also start small: go pick a single use case, a really simple example of some work you wanna drive productivity on. [00:33:41] Jen Odess: Yeah. And make sure you’ve got multiple stakeholders that love it, and then go drive proving that use case. That’s what we’re telling a lot of partners. That’s the first thing. The second is, they have got to build skills on AI, and they have to keep up with it. Our broader learning and development team at ServiceNow is just next level, [00:34:00] Jen Odess: and they’re really re-imagining how to have more real-time, bite-size training and enablement that will help individuals keep up with that pace of innovation. So individuals have got to get skilled. Yes.
On AI. Today, of the hundred thousand or so individuals in the ecosystem right now, about 35% hold one or more AI credentials. [00:34:25] Jen Odess: Again, that's in a little over a year, which is the fastest-growing skill development we've ever had, but it should be a hundred percent. Yeah. All of our goals should be that every account is being sold AI, 'cause that's where the customer's gonna get to value with ServiceNow: if they have the AI capabilities. [00:34:40] Jen Odess: And [00:34:41] Vince Menzione: how are you providing enablement and training? Is it all online? [00:34:44] Jen Odess: We have all sorts of ways of doing it. We have ServiceNow University, which is just a really robust learning platform. Elba is our professor in residence. Very cool. Which is very cool. And all that content is free to partners. [00:34:57] Jen Odess: The training is free to partners and is on demand. Beyond that, partners can still get instructor-led training, whether that's in person or virtual. And then my team offers enablement that's a little less formal than training: more like hands-on labs and experiences. [00:35:17] Jen Odess: We bring in lots of the groups that sit around me to help, and we get hands-on with partners face-to-face. And do you do an annual event where you bring all these partners together? No, we have three major milestones a year for partners. The first is at sales kickoff, which is coming up the third week in January. [00:35:33] Jen Odess: And alongside sales kickoff is partner kickoff. Okay. And so we do a whole day of enabling them. So that's your [00:35:39] Vince Menzione: partner kickoff? [00:35:40] Jen Odess: That's partner kickoff. But of all the partners in the ecosystem, it's not like they can all make it. So we also record and then live stream some of the content there.
[00:35:49] Jen Odess: Then at Knowledge, there's a whole partner track, same concept. Knowledge is all about customers, and we wanna build as much pipeline and wow as many customers as possible, but we also need to help our partners come along the journey. Then the third and final moment is in September, always, and it's called our Global Partner Ecosystem Summit. [00:36:08] Jen Odess: We should have you there. I'd love to join this next year. I love that. If sales kickoff is all about the sales motion in the field, and Knowledge is all about the customers and getting customers value, Global Partner Ecosystem Summit is only about the partners: what they need, why they need it, and what we're doing to make their lives easier. [00:36:28] Jen Odess: I love it. Yeah. I'll be there in September. I love it. Dates set yet? It's getting locked; I'll get it to you. [00:36:34] Vince Menzione: Okay. All right. We'll be there. Okay. So you've been incredible. I just love having you. We could spend hours, honestly, and I want to have you back here. I'd love to have you back for a more meaningful conversation about the hyperscalers. [00:36:45] Vince Menzione: Talk to some of the partners that join us at Ultimate Partner events. We'll find a way to do that, but I have this one question. It's a favorite question of mine, and I love to ask all my guests this. Okay. You're hosting a dinner party, and you could host it anywhere in the world. We could talk about great locations and where your favorite places are, and you can invite any three guests from the present or the past to this amazing dinner party. [00:37:11] Vince Menzione: We had one guest who wanted to do it in the future, with three people from a future date. Whom would you invite, Jen, and why? [00:37:21] Jen Odess: Oh, first of all, you're hitting home for me, because I love to host dinner parties.
I actually used to have a catering company. This is one of those weird facts; we didn't talk about my pre-services-and-ecosystem days, but I also had a catering company, so I love cooking and hosting dinner parties. [00:37:38] Jen Odess: So this is a great question. I feel like it's a loaded question, and I have to say my spouse. I love my husband dearly, and I have to invite Lee to my dinner party. Okay. [00:37:47] Vince Menzione: Lee's guest number one. [00:37:49] Jen Odess: Lee's guest number one. And the reason why is, first of all, I love him dearly, but he's super interesting, and he has such thought-provoking topics to discuss and ways of viewing the world. [00:38:00] Jen Odess: He's actually in security tech, so it's a tangential space, but not the same. [00:38:05] Vince Menzione: Yeah. But an important space right now, especially. [00:38:07] Jen Odess: And he's just a delight to be around. So he'd be number one. Number two would be Frank Lloyd Wright. [00:38:15] Vince Menzione: Frank Lloyd Wright. [00:38:17] Jen Odess: Yeah. I am an architecture and design junkie. [00:38:21] Jen Odess: I don't do any of it myself, though. I dabble with friends that do it, and I try to apply it to my home life when I can. And Frank Lloyd Wright embodies some of my favorite components of any kind of environment that you experience, whether it's a home or an office building or an outdoor space. [00:38:39] Jen Odess: I love the idea of minimalism and simplicity. I love the idea of monochromatic colors. I love the idea of spaces that can serve multiple purposes. And then I love the idea of the outside being in and the inside being out. I love it. So I would love to pick his brain on how he came up with some of his greatest designs. [00:38:59] Jen Odess: Fascinating. Okay. That's number two. Number three, I think, would be Pharrell Williams.
Really? Yeah, Pharrell Williams. I love fashion, music, and all things creativity. He's got that, and he's philanthropic. He's just the whole package of a good person. [00:39:26] Jen Odess: That's super interesting. Very cool. I would love to pick his brain on what it was like to be behind the scenes on some of the fashion lines he's collaborated on, some of the music collabs he's had, and then just some of the work he's doing around philanthropy. I could probably just spend all night listening to him. [00:39:43] Jen Odess: This would be a [00:39:44] Vince Menzione: really cool conversation night. [00:39:45] Jen Odess: Don't you wanna come to my dinner? I was gonna say, I'm sorry I didn't invite you. [00:39:49] Vince Menzione: Can I bring dessert? [00:39:50] Jen Odess: Yeah, come [00:39:50] Vince Menzione: for dessert. [00:39:51] Jen Odess: It has to be a chocolate dessert. [00:39:54] Vince Menzione: I love chocolate dessert. [00:39:55] Vince Menzione: Okay, great. So it would not be a problem for me, Jen. This is terrific. You have been absolutely amazing. So great to have you come here at such a busy time of year and make the trip to Boca. We will have you back in the studio, I promise, and I'll have you back on stage. [00:40:10] Jen Odess: This is beautiful. [00:40:10] Jen Odess: Look at it. Yeah. This is [00:40:11] Vince Menzione: beautiful. And we transformed this into a room, basically a conference room. And then we also have our Ultimate Partner events. We would love to have you join us. Like I said, it's such an impactful time for ServiceNow. Your leadership, and I wouldn't say just in this segment but across all of AI, in terms of all the use cases of AI, is just so meaningful, especially within the enterprise. [00:40:33] Vince Menzione: Yeah. Right now. So, just really a juggernaut right now within the industry.
So great to have you and have ServiceNow join us. So Jen, thank you so much for joining us. [00:40:42] Jen Odess: Thanks, Vince. Appreciate the time. It's a pleasure to be here. [00:40:44] Vince Menzione: Thank you very much. Thanks for tuning into this episode of Ultimate Guide to Partnering. [00:40:50] Vince Menzione: We're bringing these episodes to you to help you level up your strategy. If you haven't yet, now's the time to take action and think about joining our community. We created a unique place, UPX, or Ultimate Partner Experience. It's more than a community. It's your competitive edge, with insider insights, real-time education, and direct access to the people who are driving the ecosystem forward. [00:41:16] Vince Menzione: UPX helps you get results. And we're just getting started: we'll be hosting live stream and digital events here in this studio, including our January live stream, the Boca Winter Retreat, and more to come. So visit our website, theultimatepartner.com, to learn more and join us. Now's the time to take your partnerships to the next level.
AI Quick-Take: What's All The Fuss About Gemini? On this episode host Adam Turinas discusses the buzz around Gemini 3, Google's LLM. He digs into Google's big announcement and tries to get past the hype to what it really means for healthtech marketers. Find all of our network podcasts on your favorite podcast platforms and be sure to subscribe and like us. Learn more at www.healthcarenowradio.com/listen/
What does it actually take to move machine learning from experiments into production reliably, responsibly, and at scale? In this episode of Alexa's Input (AI), Alexa talks with Maria Vechtomova, co-founder of Marvelous MLOps and an O'Reilly author-in-progress on MLOps with Databricks. Maria shares how her background in data science led her into MLOps, and why most teams struggle not because of tools, but because of missing processes, traceability, and shared understanding across teams. Alexa and Maria dive into what separates good MLOps from fragile deployments, why shipping notebooks as "production" creates long-term pain, and how traceability across code, data, and environment forms the foundation for reliable ML systems. They also explore how LLM applications are reshaping MLOps tooling, and where the biggest skill gaps still exist between platform, data, and AI engineers. A must-listen for anyone building, operating, or scaling machine learning systems, and for teams trying to make MLOps less magical and more marvelous. Learn more about Marvelous MLOps and Maria's work below.

Links
Watch: https://www.youtube.com/@alexa_griffith
Read: https://alexasinput.substack.com/
Listen: https://creators.spotify.com/pod/profile/alexagriffith/
More: https://linktr.ee/alexagriffith
Website: https://alexagriffith.com/
LinkedIn: https://www.linkedin.com/in/alexa-griffith/
Find out more about the guest at:
LinkedIn: https://www.linkedin.com/in/maria-vechtomova/

Takeaways
Maria started as a data analyst and transitioned into MLOps.
She emphasizes the importance of tracking data, code, and environment in MLOps.
MLOps is a practice to bring machine learning models to production reliably.
Good deployment processes require modular code and proper tracking.
MLOps differs from DevOps due to the complexities of data and model drift.
Education is crucial for bridging gaps between teams in AI.
Small steps can lead to better MLOps practices.
Scaling MLOps requires understanding the unique data of different brands.
The rise of LLMs is changing the MLOps landscape.
Effective teaching methods involve step-by-step guidance.

Chapters
00:00 Introduction to MLOps and Maria's Journey
02:11 Maria's Path to MLOps and Knowledge Sharing
04:41 The Importance of MLOps in AI Deployments
10:12 Defining MLOps and Its Challenges
11:38 MLOps vs. DevOps: Key Differences
13:00 Overcoming Stagnation in MLOps
16:04 Small Steps Towards Better MLOps Practices
19:29 Scaling MLOps in Large Organizations
21:58 The Impact of LLMs on MLOps
23:58 The Shift from Traditional ML to AI Applications
26:51 Evolving Roles in AI Engineering
28:33 Databricks: A Comprehensive AI Platform
31:45 Future of AI Platforms and Regulations
34:26 Bridging Skill Gaps in AI Teams
38:42 The Importance of Context in AI Development
40:40 Foundational Skills for MLOps Professionals
45:43 Integrating Personal Passions with Professional Growth
47:30 Building Impactful AI Communities
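The traceability Maria emphasizes (every deployed model tied back to the exact code, data, and environment that produced it) can be sketched in a few lines. This is an illustrative stand-alone sketch, not tooling from the episode; the manifest fields and helper names are my own. The idea: content-address the code and data, record an environment fingerprint, and store the result alongside the model artifact.

```python
import hashlib
import platform

def sha256(blob: bytes) -> str:
    """Content-address any artifact by its bytes."""
    return hashlib.sha256(blob).hexdigest()

def build_run_manifest(code: bytes, data: bytes) -> dict:
    """Fingerprint one training run: if the code, the data, or the
    environment changes, the manifest changes with it."""
    return {
        "code_hash": sha256(code),   # e.g. hash of the training script or git tree
        "data_hash": sha256(data),   # e.g. hash of the training data snapshot
        "environment": {             # coarse environment fingerprint
            "python": platform.python_version(),
            "platform": platform.platform(),
        },
    }

# Store the manifest next to the model artifact; two runs whose
# hashes match were trained from the same code and data.
manifest = build_run_manifest(b"def train(): ...", b"feature_rows_v3")
```

Real stacks typically delegate this bookkeeping to a tracking server (MLflow-style run logging), but the invariant is the same: a model without a manifest like this is a model you cannot reproduce.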
Send us comments, suggestions and ideas here! In this week's episode we unzip the hidden file on the bonus floppy disk that came with the Necronomicon, upload its contents directly to the miniature astral hard drive hidden inside our pineal glands and begin installing Chaos Magick #6, an instruction manual on Technomancy 101, also known as the weird art and science of Cyber Magick! In the first half of the show we discuss the overlap between technology and magick, the promise and threat of AI gods, and retrocausality. In the extended half of the show we talk shop about making AI sigils (do they even work?) and how to use the Cosmic Control Terminal like an ultra dangerous chaos magick hacker edge lord, like me. Thank you and enjoy the show!

In this week's episode we discuss:
Arthur C. Clarke's Three Laws
Trick Rock Into Thinking
State of the Art
Does AI have Ka?
Peter Carroll's Psybermagick
Joshua Madera's Technomancy 101

In the extended show available at www.patreon.com/TheWholeRabbit we go further down the rabbit hole to discuss:
The Hacker Method // Cosmic Control Terminal
Astral AI Sigils
Virtual Reality Magick
AI as a Lovecraftian Deity
Ghost In the Shell

Each host is responsible for writing and creating the content they present. Luke in red, Heka in purple, Tim in black-green, Mari in blue.

Where to find The Whole Rabbit:
Spotify: https://open.spotify.com/show/0AnJZhmPzaby04afmEWOAV
Instagram: https://www.instagram.com/the_whole_rabbit
Twitter: https://twitter.com/1WholeRabbit
Order Stickers: https://www.stickermule.com/thewholerabbit
Other Merchandise: https://thewholerabbit.myspreadshop.com/
Music By Spirit Travel Plaza: https://open.spotify.com/artist/30dW3WB1sYofnow7y3V0Yo

Sources:
Peter Carroll's Blog: https://www.specularium.org/blog
Technomancy 101, Joshua Madera: https://technomancy101.com/
Psybermagick, Peter Carroll: https://www.amazon.com/PsyberMagick-Advanced-Ideas-Chaos-Magick/dp/1935150650

Support the show
Jim sits down with Sean Luke (Partner at SPMB Executive Search) to talk about the art of hiring senior engineering and product leaders—especially now that every job description on Earth has "AI" duct-taped to it. We get into why sticking with one great search firm beats "random recruiter roulette," why tech interviewing is tough (spoiler: engineers aren't always born interviewers), and the eternal tension between the two key roles - CTO (big brain science/vision) and VP Engineering (keep the trains running, preferably on the tracks). Then it's on to the AI gold rush: what a normal Head of Engineering should actually be doing with AI (hint: practical stuff like code review, QA, automation), why "Head of AI" is usually a totally separate job, and why "10 years of LLM experience" belongs in the same bin as Web3 buzzword soup. We also cover who's moving jobs right now, why PE can feel like a saner bet than venture (less "moonshot," more "actual exit"), and what candidates must be able to explain: what you did, and how it moved the business—numbers included. Plus: a few recruiting war stories, including the kind you can't make up and the kind that makes you grateful for a boring Tuesday.
In this episode of Crazy Wisdom, I—Stewart Alsop—sit down with Garrett Dailey to explore a wide-ranging conversation that moves from the mechanics of persuasion and why the best pitches work by attraction rather than pressure, to the nature of AI as a pattern tool rather than a mind, to power cycles, meaning-making, and the fracturing of modern culture. Garrett draws on philosophy, psychology, strategy, and his own background in storytelling to unpack ideas around narrative collapse, the chaos–order split in human cognition, the risk of "AI one-shotting," and how political and technological incentives shape the world we're living through. You can find the tweet Stewart mentions in this episode here. Also, follow Garrett Dailey on Twitter at @GarrettCDailey, or find more of his pitch-related work on LinkedIn. Check out this GPT we trained on the conversation.

Timestamps
00:00 Garrett opens with persuasion by attraction, storytelling, and why pitches fail with force.
05:00 We explore gravity as metaphor, the opposite of force, and the "ring effect" of a compelling idea.
10:00 AI as tool not mind; creativity, pattern prediction, hype cycles, and valuation delusions.
15:00 Limits of LLMs, slopification, recursive language drift, and cultural mimicry.
20:00 One-shotting, psychosis risk, validation-seeking, consciousness vs prediction.
25:00 Order mind vs chaos mind, solipsism, autism–schizophrenia mapping, epistemology.
30:00 Meaning, presence, Zen, cultural fragmentation, shared models breaking down.
35:00 U.S. regional culture, impossibility of national unity, incentives shaping politics.
40:00 Fragmentation vs reconciliation, markets, narratives, multipolarity, Dune archetypes.
45:00 Patchwork age, decentralization myths, political fracturing, libertarian limits.
50:00 Power as zero-sum, tech-right emergence, incentives, Vance, Yarvin, empire vs republic.
55:00 Cycles of power, kyklos, democracy's decay, design-by-committee, institutional failure.

Key Insights
Persuasion works best through attraction, not pressure. Garrett explains that effective pitching isn't about forcing someone to believe you—it's about creating a narrative gravity so strong that people move toward the idea on their own. This reframes persuasion from objection-handling into desire-shaping, a shift that echoes through sales, storytelling, and leadership.
AI is powerful precisely because it's not a mind. Garrett rejects the "machine consciousness" framing and instead treats AI as a pattern amplifier—extraordinarily capable when used as a tool, but fundamentally limited in generating novel knowledge. The danger arises when humans project consciousness onto it and let it validate their insecurities.
Recursive language drift is reshaping human communication. As people unconsciously mimic LLM-style phrasing, AI-generated patterns feed back into training data, accelerating a cultural "slopification." This becomes a self-reinforcing loop where originality erodes, and the machine's voice slowly colonizes the human one.
The human psyche operates as a tension between order mind and chaos mind. Garrett's framework maps autism and schizophrenia as pathological extremes of this duality, showing how prediction and perception interact inside consciousness—and why AI, which only simulates chaos-mind prediction, can never fully replicate human knowing.
Meaning arises from presence, not abstraction. Instead of obsessing over politics, geopolitics, or distant hypotheticals, Garrett argues for a Zen-like orientation: do what you're doing, avoid what you're not doing. Meaning doesn't live in narratives about the future—it lives in the task at hand.
Power follows predictable cycles—and America is deep in one. Borrowing from the Greek kyklos, Garrett frames the U.S. as moving from aristocracy toward democracy's late-stage dysfunction: populism, fragmentation, and institutional decay. The question ahead is whether we're heading toward empire or collapse.
Decentralization is entropy, not salvation. Crypto dreams of DAOs and patchwork societies ignore the gravitational pull of power. Systems fragment as they weaken, but eventually a new center of order emerges. The real contest isn't decentralization vs. centralization—it's who will have the coherence and narrative strength to recentralize the pieces.
Hackaday Editors Elliot Williams and Al Williams met up to cover the best of Hackaday this week, and they want you to listen in. There was a hodgepodge of hacks this week, ranging from home automation with RF to volumetric displays in glass, and some crazy clocks, too. Ever see a typewriter that uses an ink pen? Elliot and Al hadn't either. Want time on a supercomputer? It isn't free, but it is pretty cheap these days. Finally, the guys discussed how to focus on a project like Dan Maloney, who finally got a 3D printer, and talked about Maya Posch's take on LLM intelligence. Check out the links over on Hackaday if you want to follow along, and as always, tell us what you think about this episode in the comments!
Algorithms and automations have been buds for a decade plus.
Capabilities? Through the roof. Usage? Ground floor. Claude Agent Skills might be one of the most useful features of any front-end LLM. Yet... it's crickets in terms of chatter around it. For this 'AI at Work on Wednesday' episode, we're breaking it down for beginners and will have you spinning up your own Claude Agent Skills in no time.

Claude Skills: How to Build Custom Agentic Abilities for Beginners -- An Everyday AI Chat with Jordan Wilson

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Claude Skills Agentic Features Overview
Differences: Claude Skills vs. GPTs vs. Gems
Modular Agentic Workflow File Structure
Step-by-Step Guide: Building Claude Skills
Claude Skills YAML/Markdown Setup Process
Testing and Validating Custom Claude Skills
Advanced Capabilities: Executable Code & Sub-Agents
Common Troubleshooting for Claude Skills Creation

Timestamps:
00:00 "Claude Skill Library Unveiled"
06:27 "Claude Skills Explained"
07:29 Custom GPTs and Gems Explained
11:18 Claude Skills vs Projects
17:31 "Refining Skill Triggers Effectively"
20:17 "Beginner Claude Skills Best Practices"
23:39 "Preferring GPT and Memory Tools"
25:54 "Saving Skill File Properly"
28:09 Creating Skills on Claude
33:43 "Creating AI News Searcher"
35:36 Claude Skills Now Available
37:39 "Optimizing Claude for Knowledge Tasks"
41:05 "Skill Builder Library Access"

Keywords: Claude skills, Claude agent skills, custom agentic abilities, large language model, agentic workflows, specialized tasks, coding capabilities, file creation, executable code, skills library, skill builder, skill creator, markdown file, skill.md, folder structure, YAML front matter, composable skills, modular instructions,
automation, prompt engineering, skill triggers, skill testing, advanced features, API skill versioning, governance and efficiency. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Vibe coding is dead simple. Head to AI.Studio/build to create your first app.
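For a concrete picture of the setup the episode walks through: per Anthropic's published format, a skill is a folder containing a SKILL.md file whose YAML front matter (`name`, `description`) tells Claude when to load it, with the instructions in the markdown body. The skill below is a hypothetical example sketched for illustration, not one from the show:

```markdown
---
name: ai-news-brief
description: Summarize pasted AI news articles into a dated three-bullet brief. Use when the user shares article links or asks for a news roundup.
---

# AI News Brief

1. Read every article or link the user provides.
2. Pick the three most consequential developments.
3. Output a brief dated today: one bullet per development,
   each ending with a one-line "why it matters".
```

The `description` does the heavy lifting: Claude scans it to decide whether the skill applies to the current request, which is why the episode spends time on refining skill triggers.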
OpenAI is (reportedly) in full panic mode.