Every weekday, Certified Scrum Master, Agile Coach, and business consultant Vasco Duarte interviews Scrum Masters and Agile Coaches from all over the world to get you actionable advice and new tips and tricks, and to improve your craft as a Scrum Master with daily doses of inspiring conversations with Scrum M…
Vasco Duarte, Agile Coach, Certified Scrum Master, Certified Product Owner, Business Consultant
The Scrum Master Toolbox Podcast is an incredibly valuable resource for anyone working in the Agile space. The direct and detailed content provides a daily knowledge boost, making it a must-listen for Agile practitioners. The podcast covers a wide range of topics related to Agile leadership, team challenges, and communication, making it relevant and informative for both new and experienced professionals. The interviews with guests from all over the world provide unique perspectives and insights into real-world experiences. Overall, this podcast is an amazing tool for continuous learning and motivation in the Agile community.
One of the best aspects of The Scrum Master Toolbox Podcast is its ability to provide practical advice and techniques that can be applied to real-life situations. The guests share their wisdom and experiences, offering new ideas and strategies for improving team performance. The brevity of the episodes allows for easy consumption, making it accessible for those with limited free time. Additionally, the production quality of the podcast is top-notch, with clear audio and engaging host moderation.
While there are many positive aspects of this podcast, one potential drawback is that some listeners may prefer longer episodes with more in-depth discussions. However, the short format can also be seen as a positive aspect, as it allows for quick and focused learning on specific topics. Additionally, some listeners may wish to hear more diverse perspectives or voices on the show.
In conclusion, The Scrum Master Toolbox Podcast is an invaluable resource for anyone working in Agile or Scrum roles. It provides daily knowledge boosts and offers insights from experienced professionals around the world. With its practical advice and concise format, this podcast is a must-listen for anyone looking to improve their Agile leadership skills or gain new ideas to enhance team performance.

Scott Smith: Empathy and Availability Define Excellent Product Ownership Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Always Present, Always Available, Always Curious "They are always present. They always make themselves available for team members that need them." - Scott Smith Scott is currently working with a Product Owner who exemplifies what great PO collaboration looks like. This person is always present—not just physically but mentally engaged with the team's work and challenges. They make themselves available for team members who need them, responding actively on the team chat and interacting consistently. What makes this PO stand out is their empathy and curiosity. Instead of being defensive when questions arise or challenges emerge, they lean into helping the team understand and solve problems. They show genuine curiosity about what the team is experiencing, asking questions and exploring solutions together rather than dictating answers. This PO understands that their role isn't to be the smartest person in the room but to be the most available, most collaborative, and most curious. The result is a team that feels supported and empowered, with clear direction and someone who genuinely helps them answer the hard questions. Scott's experience with this PO demonstrates that presence, availability, empathy, and curiosity are the foundations of great Product Owner work. Self-reflection Question: How present and available are you to your team, and do you approach their questions with curiosity or defensiveness? The Bad Product Owner: Never There When the Team Needs Direction "The PO was never present. The team had lack of clarity, and vision, and had no direction or someone who would help answer those questions." - Scott Smith Scott has also experienced the opposite extreme—a Product Owner who was never present. This absence created a cascade of problems for the team. Without regular access to the PO, the team lacked clarity about priorities, vision, and direction. They had questions that went unanswered and decisions that couldn't be made. The result was frustration and a team that couldn't move forward effectively. An absent PO creates a vacuum where uncertainty thrives. Teams end up making assumptions, second-guessing decisions, and feeling disconnected from the purpose of their work. The lack of someone who can help answer strategic questions or provide guidance means the team operates in the dark, building things without confidence that they're building the right things. Scott's experience highlights a fundamental truth about Product Ownership: presence isn't optional. Teams need a PO who shows up, engages, and stays connected to the work. Without that presence, even the most skilled team will struggle to deliver value because they can't align their efforts with the product vision and customer needs. Self-reflection Question: If your team were asked whether you're present and available as a Product Owner or Scrum Master, what would they say? [The Scrum Master Toolbox Podcast Recommends]

Scott Smith: Using MIRO to Build a Living Archive of Learning Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "We're in a servant leadership role. So, ask: is the team thriving? That's a huge indication of success." - Scott Smith For Scott, success as a Scrum Master isn't measured by velocity charts or burn-down graphs—it's measured by whether the people are thriving. This includes everyone: the development team and the Product Owner. As a servant leader, Scott's focus is on creating conditions where teams can flourish, and he has practical ways to gauge that health. Scott does a light touch check on a regular basis and a deeper assessment quarterly. Mid-sprint, he conducts what he calls a "vibe" check—a quick pulse to understand how people are feeling and what they need. During quarterly planning, the team retrospects and celebrates achievements from the past quarter, keeping and tracking actions to ensure continuous improvement isn't just talked about but lived. Scott's approach recognizes that success is both about the work being done and the people doing it. When teams feel supported, heard, and valued, the work naturally flows better. This people-first perspective defines what great servant leadership looks like in practice. Self-reflection Question: How often do you check in on whether your team is truly thriving, and what specific indicators tell you they are? Featured Retrospective Format for the Week: MIRO as a Living History Museum "Use the multiple retros in the MIRO board as a shared history museum for the team." - Scott Smith Scott leverages MIRO not just as a tool for running retrospectives but as a living archive of team learning and growth. He uses MIROVERSE templates to bring diversity to retrospective conversations, exploring the vast library of pre-built formats that offer themed and structured approaches to reflection. The magic happens when Scott treats each retrospective board not as a disposable artifact but as part of the team's shared history museum. Over time, the accumulation of retrospective boards tells the story of the team's journey—what they struggled with, what they celebrated, what actions they took, and how they evolved. This approach transforms retrospectives from isolated events into a continuous narrative of improvement. Teams can look back at previous retros to see patterns, track whether actions were completed, and recognize how far they've come. MIRO becomes both the canvas for current reflection and the archive of collective learning, making improvement visible and tangible across time. [The Scrum Master Toolbox Podcast Recommends]

Scott Smith: Building a Coaching Service Where Survey Scores Become Living Improvement Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Success is about feedback from coaching clients." - Scott Smith Scott is tackling one of the most challenging aspects of organizational transformation: turning annual survey results into continuous improvement. Working with a domain of about 30 people, Scott is exploring how to create a coaching service that doesn't just react to once-a-year data but actively drives ongoing growth. The typical pattern in many organizations is familiar—conduct an annual survey, review the scores, maybe have a few discussions, and then wait another year. Scott is experimenting with a different approach. He's setting up a coaching service that focuses on real-time feedback from the people being coached, making improvement a living practice rather than an annual event. The strategy starts with a pilot, testing the concept before scaling across the entire domain. Scott's measure of success is pragmatic and human-centered: feedback from coaching clients. Not abstract metrics or theoretical frameworks, but whether the people receiving coaching find value in what's being offered. This approach reflects a fundamental principle of Agile coaching—start small, experiment, gather feedback, and iterate based on what actually works for the people involved. Scott is building improvement infrastructure that puts continuous learning at the center, transforming how organizations think about growth from an annual checkbox into an ongoing conversation. Self-reflection Question: If you were to implement a coaching service in your organization, how would you measure its success beyond traditional survey scores? [The Scrum Master Toolbox Podcast Recommends]

Scott Smith: Why Great Scrum Masters Create Space for Breaks Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Think of the people involved. Put yourself in the shoes of the other." - Scott Smith Scott found himself in the middle of rising tension as voices escalated between the Product Owner and the development team. The PO was harsh, emotions were running high, and the conflict was intensifying with each exchange. In that moment, Scott knew he had to act. He stepped in with a simple but powerful reminder: "We're on the same team." That pause—that momentary break—allowed everyone to step back and reset. Both the PO and the team members later thanked Scott for his intervention, acknowledging they needed that space to cool down and refocus on their shared outcome. Scott's approach centers on empathy and perspective-taking. He emphasizes thinking about the people involved and putting yourself in their shoes. When tensions rise, sometimes the most valuable contribution a Scrum Master can make is creating space for a break, reminding everyone of the shared goal, and helping teams focus on the outcome rather than the conflict. It's not about taking sides—it's about serving the team by being the calm presence that brings everyone back to what matters most. Self-reflection Question: When you witness conflict between team members or between the team and Product Owner, do you tend to jump in immediately or create space for the parties to find common ground themselves? Featured Book of the Week: An Ex-Manager Who Believed "It was about having someone who believed in me." - Scott Smith Scott's most influential "book" isn't printed on pages—it's a person. After spending 10 years as a Business Analyst, Scott decided to take the Professional Scrum Master I (PSM I) course and look for a Scrum Master position. That transition wasn't just about skills or certification; it was about having an ex-manager who inspired him to chase his goals and truly believed in him. This person gave Scott the confidence to make a significant career pivot, demonstrating that sometimes the most powerful catalyst for growth is someone who sees your potential before you fully recognize it yourself. Scott's story reminds us that great leadership isn't just about managing tasks—it's about inspiring people to reach for goals they might not have pursued alone. The belief and encouragement of a single person can change the trajectory of someone's entire career. [The Scrum Master Toolbox Podcast Recommends]

Scott Smith: The Spotlight Failure That Taught a Silent Lesson About Recognition Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Not everybody enjoys the limelight and being called out, even for great work." - Scott Smith Scott was facilitating a multi-squad showcase with over 100 participants, and everything seemed to be going perfectly. Each squad had their five-minute slot to share achievements from the sprint, and Scott was coordinating the entire event. When one particular team member delivered what Scott considered fantastic work, he couldn't help but publicly recognize them during the introduction. It seemed like the perfect moment to celebrate excellence in front of the entire organization. But then his phone rang. The individual he had praised was unhappy—really unhappy. What Scott learned in that moment transformed his approach to recognition forever. The person was quiet, introverted, and conservative by nature. Being called out without prior notice or permission in front of 100+ people wasn't a reward—it was uncomfortable and unwelcome. Scott discovered that even positive recognition requires consent and awareness of individual preferences. Some people thrive in the spotlight, while others prefer their contributions to be acknowledged privately. The relationship continued well afterward, but the lesson stuck: check in with individuals before publicly recognizing them, understanding that great coaching means respecting how people want to be celebrated, not just that they should be celebrated. Self-reflection Question: How do you currently recognize team members' achievements, and have you asked each person how they prefer to be acknowledged for their contributions? [The Scrum Master Toolbox Podcast Recommends]

BONUS: When AI Knows Your Emotional Triggers Better Than You Do — Navigating Mindfulness in the AI Age In this thought-provoking conversation, former computer engineer and mindfulness leader Mo Edjlali explores how AI is reshaping human meaning, attention, and decision-making. We examine the critical question: what happens when AI knows your emotional triggers better than you know yourself? Mo shares insights on remaining sovereign over our attention, avoiding dependency in both mindfulness and technology, and preparing for a world where AI may outperform us in nearly every domain. From Technology Pioneer to Mindfulness Leader "I've been very heavily influenced by technology, computer engineering, software development. I introduced DevOps to the federal government. But I have never seen anything change the way in which human beings work together like Agile." — Mo Edjlali Mo's journey began in the tech world — he graduated in 1998 and was on the front line of the internet explosion. He remembers the days before the internet, watched online multiplayer games emerge in 1994, and worked on some of the most complicated tech projects in the federal government. Technology felt almost like magic, advancing at an exponential rate, faster than anything else. But when Mo discovered mindfulness practices 12-15 years ago, he found something equally transformative: actual exercises to develop emotional intelligence and soft skills that the tech world talked about but never taught. Mindfulness provided logical, practical methods that didn't require "woo-woo" beliefs — just practice that fundamentally changed his relationship with his mind. This dual perspective — tech innovator and mindfulness teacher — gives Mo a unique lens for understanding where we're headed. The Shift from Liberation to Dependency "I was fortunate enough, the teachers I was exposed to, the mentality was very much: you're gonna learn how to meditate on your own, in silence. There is no guru. There is no cult of personality." — Mo Edjlali Mo identifies a dangerous drift in the mindfulness movement: from teaching independence to creating dependency. His early training, particularly a Vipassana retreat led by S.N. Goenka, modeled true liberation — you show up for 10 days, pay nothing, receive food and lodging, learn to meditate, then donate what you can at the end. Critically, you leave being able to meditate on your own without worshiping a teacher or subscribing to guided meditations. But today's commercialized mindfulness often creates the opposite: powerful figures leading fiefdoms, consumers taught to listen to guided meditations rather than meditate independently. This dependency model mirrors exactly what's happening with AI — systems designed to make us rely on them rather than empower our own capabilities. Recognizing this parallel is essential for navigating both fields wisely. AI as a New Human Age, Not Just Another Tool "With AI, this is different. This isn't like mobile computing, this isn't like the internet. We're entering a new age. We had the Bronze Age, the Iron Age, the Industrial Age. When you enter a new age, it's almost like knocking the chess board over, flipping the pieces upside down. We're playing a new game." — Mo Edjlali Mo frames AI not as another technology upgrade but as the beginning of an entirely new human age. In a new age, everything shifts: currency, economies, government, technology, even religions. 
The documentary about the Bronze Age collapse taught him that when ages turn over, the old rules no longer apply. This perspective explains why AI feels fundamentally different from previous innovations. ChatGPT 2.0 was interesting; ChatGPT 3 blew Mo's mind and made him realize we're witnessing something unprecedented. While he's optimistic about the potential for sustainable abundance and extraordinary breakthroughs, he's also aware we're entering both the most exciting and most frightening time to be alive. Everything we learned in high school might be proven wrong as AI rewrites human knowledge, translates animal languages, extends longevity, and achieves things we can't even imagine. The Mental Health Tsunami and Loss of Purpose "If we do enter the age of abundance, where AI could do anything that human beings could do and do it better, suddenly the system we have set up — where our purpose is often tied to our income and our job — suddenly, we don't need to work. So what is our purpose?" — Mo Edjlali Mo offers a provocative vision of the future: a world where people might pay for jobs rather than get paid to work. It sounds crazy until you realize it's already happening — people pay $100,000-$200,000 for college just to get a job, politicians spend millions to get elected. If AI handles most work and we enter an age of abundance, jobs won't be about survival or income — they'll be about meaning, identity, and social connection. This creates three major crises Mo sees accelerating: attacks on our focus and attention (technology hijacking our awareness), polarization (forcing black-and-white thinking), and isolation (pushing us toward solo experiences). The mental health tsunami is coming as people struggle to find purpose in a world where AI outperforms them in domain after domain. The jobs will change, the value systems will shift, and those without tools for navigating this transformation will suffer most. When AI Reads Your Mind "Researchers at Duke University had hooked up fMRI brain scanning technology and took that data and fed it into GPT 2. They were able to translate brain signals into written narrative. So the implications are that we could read people's minds using AI." — Mo Edjlali The future Mo describes isn't science fiction — it's already beginning. Three years ago, researchers used early GPT to translate brain signals into written text by scanning people's minds with fMRI and training AI on the patterns. Today, AI knows a lot about heavy users like Mo through chat conversations. Tomorrow, AI will have video input of everything we see, sensory input from our biometrics (pulse, heart rate, health indicators), and potentially direct connection to our minds. This symbiotic relationship is coming whether we're ready or not. Mo demonstrates this with a personal experiment: he asked his AI to tell him about himself, describe his personality, identify his strengths, and most powerfully — reveal his blind spots. The AI's response was outstanding, better than what any human (even his therapist or himself) could have articulated. This is the reality we're moving toward: AI that knows our emotional triggers, blind spots, and patterns better than we do ourselves. Using AI as a Mirror for Self-Discovery "I asked my AI, 'What are my blind spots?' Human beings usually won't always tell you what your blind spots are, they might not see them. A therapist might not exactly see them. But the AI has... I've had the most intimate kind of conversations about everything. 
And the response was outstanding." — Mo Edjlali Mo's approach to AI is both pragmatic and experimental. He uses it extensively — as heavily as the teenagers and early college students who are on it all the time. But rather than just using AI as a tool, he treats it as a mirror for understanding himself. Asking AI to identify your blind spots is a powerful exercise because AI has observed all your conversations, patterns, and tendencies without the human limitations of forgetfulness or social politeness. Vasco shares a similar experience using AI as a therapy companion — not replacing his human therapist, but preparing for sessions and processing afterward. This reveals an essential truth: most of us don't understand ourselves that well. We're blind navigators using an increasingly powerful tool. The question isn't whether AI will know us better than we know ourselves — that's already happening. The question is how we use that knowledge wisely. The Danger of AI Hijacking Our Agency "There's this real danger. I saw that South Park episode about ChatGPT where his wife is like, 'Come on, put the AI down, talk to me,' and he's got this crazy business idea, and the AI keeps encouraging him along. It's a point where he's relying way too heavily on the AI and making really poor decisions." — Mo Edjlali Not all AI use is beneficial. Mo candidly admits his own mistakes — sometimes leaning into AI feedback over his actual users' feedback for his Meditate Together app because "I like what the AI is saying." This mirrors the South Park episode's warning about AI dependency, where the character's AI encourages increasingly poor decisions while his relationships suffer. Social media demonstrates this danger at scale: AI algorithms tuned to steal our attention and hijack our agency, preventing us from thinking about what truly matters — relationships and human connection. Mo shares a disturbing story about Zoom bombers disrupting Meditate Together sessions, filming it, posting it on YouTube where it got 90,000 views, with comments thanking the disruptors for "making my day better." Technology created a cannibalistic dynamic where teenagers watched videos of their mothers, aunts, and grandmothers being harassed during meditation. When Mo tried to contact Google, the company's incentive structure prioritized views and revenue over human decency. Technology combined with capitalism creates a dangerous momentum toward monetizing attention at any cost. Remaining Sovereign Over Your Attention "Traditionally, mindfulness does an extraordinary job, if you practice right, to help you regain your agency of your focus and concentration. It takes practice. But reading is now becoming a concentration practice. It's an actual practice." — Mo Edjlali Mo identifies three major symptoms affecting us: attacks on focus/attention, polarization into black-and-white thinking, and isolation. Mindfulness practices directly counter all three — but only if practiced correctly. Training attention, focus, and concentration requires actual practice, not just listening to guided meditations. Mo offers practical strategies: reading as concentration practice (asking "does anyone read anymore?" and recognizing that sustained reading now requires deliberate effort), turning off AirPods while jogging or driving to find silence, spending time alone with your thoughts, and recognizing that we were given extraordinary power (smartphones) with zero training on how to be aware of it. 
Older generations remember having to rewind VHS tapes — forced moments of patience and stillness that no longer exist. We need to deliberately recreate those spaces where we're not constantly consuming entertainment and input. Dialectic Thinking: Beyond Polarization "I saw someone the other day wear a shirt that said, 'I'm perfect the way I am.' That's one-dimensional thinking. Two-dimensional thinking is: you're perfect the way that you are, and you could be a little better." — Mo Edjlali Mo's book OpenMBSR specifically addresses polarization by introducing dialectic thinking — the ability to hold paradoxes and seeming contradictions simultaneously. Social media and algorithms push us toward one-dimensional, black-and-white thinking: good/bad, right/wrong, with me/against me. But reality is far more nuanced. The ability to think "I'm perfect as I am AND I can improve" or "AI is extraordinary AND dangerous" is essential for navigating complexity. This mirrors the tech world's embrace of continuous improvement in Agile — accepting where you are while always pushing for better. Chess players learned this years ago when AI defeated humans — they didn't freak out, they accepted it and adapted. Now AI in chess doesn't just give answers; it helps humans understand how it arrived at those answers. This partnership model, where AI coaches us through complexity rather than simply replacing us, represents the healthiest path forward. Building Community, Not Dependency "When people think to meditate, unfortunately, they think, I have to do this by myself and listen to guided meditation. I'm saying no. Do it in silence. If you listen to guided meditation, listen to guided meditation that teaches you how to meditate in silence. And do it with other people, with intentional community." — Mo Edjlali Mo's OpenMBSR initiative explicitly borrows from the Agile movement's success: grassroots, community-centric, open source, transparent. Rather than creating fiefdoms around cult personalities, he wants mindfulness to spread organically through communities helping communities. This directly counters the isolation trend that technology accelerates. Meditate Together exists specifically to create spaces where people meditate with other human beings around the world, with volunteer hosts holding sessions. The model isn't about dependency on a teacher or platform — it's about building connection and shared practice. This aligns perfectly with how the tech world revolutionized collaborative work through Agile and Scrum: transparent, iterative, valuing individuals and interactions. The question for both mindfulness and AI adoption is whether we'll create systems that empower independence and community, or ones that foster dependency and isolation. Preparing for a World Where AI Outperforms Humans "AI is going to need to kind of coach us and ease us into it, right? There's some really dark, ugly things about ourselves that could be jarring without it being properly shared, exposed, and explained." — Mo Edjlali Looking at his children, Mo wonders what tools they'll need in a world where AI may outperform humans in nearly every domain. The answer isn't trying to compete with AI in calculation, memory, or analysis — that battle is already lost. Instead, the essential human skills become self-awareness, emotional intelligence, dialectic thinking, community building, and maintaining agency over attention and decision-making. 
AI will need to become a coach, helping humans understand not just answers but how it arrived at those answers. This requires AI development that prioritizes human growth over profit maximization. It also requires humans willing to do the hard work of understanding themselves — confronting blind spots, managing emotional triggers, practicing concentration, and building genuine relationships. The mental health tsunami Mo predicts isn't inevitable if we prepare now by teaching these skills widely, building community-centric systems, and designing AI that empowers rather than replaces human wisdom and connection. About Mo Edjlali Mo Edjlali is a former computer engineer, and also the founder and CEO of Mindful Leader, the world's largest provider of Mindfulness-Based Stress Reduction training. Mo's new book Open MBSR: Reimagining the Future of Mindfulness explores how ancient practices can help us navigate the AI revolution with awareness and resilience. You can learn more about Mo and his work at MindfulLeader.org, check out Meditate Together, and read his articles on AI's Mind-Reading Breakthrough and AI: Not Another Tool, but a New Human Age.

AI Assisted Coding: Building Reliable Software with Unreliable AI Tools In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering. From Skeptic to Pioneer: Lada's AI Coding Journey "I got a new skill for free!" Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through WindSurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI. Understanding Vibecoding vs. AI-Assisted Development "AI assisted coding requires judgment, and it's never been as important to exercise judgment as now." Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices. The Answer Injection Anti-Pattern When Working With AI "You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect." One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability. Answer injection is when we—sometimes unknowingly—ask a leading question that also injects a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"; the LLM might have a possible answer for you. Never Trust a Single LLM: Multi-Agent Collaboration "Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements." Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work (see the short sketch after this episode's resources below). 
She created specialized agents (architect, developer, tester) and even built systems using AppleScript and Tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review. Code Quality Matters MORE with AI "This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!" Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively. Managing Complexity: The Open Question "If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it." One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever. Context is Everything: Highway vs. Parking Lot "You need to be attuned to the environment. You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt." Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, don't worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down. The Era of Discovery: We've Only Just Begun "We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of." Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. 
Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real-time.
Resources For Further Study
Video of Lada's talk: https://www.youtube.com/watch?v=_LSK2bVf0Lc&t=8654s
Lada's Patterns and Anti-patterns website: https://lexler.github.io/augmented-coding-patterns/
Lada's Substack: https://lexler.substack.com/
AI Assisted Coding episode with Dawid Dahl
AI Assisted Coding episode with Llewellyn Falco
Claude Flow - orchestration platform
About Lada Kesseler
Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering. Currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools. You can link with Lada Kesseler on LinkedIn.
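As a footnote to Lada's "never trust a single LLM" advice above, here is a minimal sketch of the cross-review pattern, assuming a generic call_llm helper; the agent roles and prompt wording are illustrative assumptions, not Lada's actual AppleScript and Tmux setup.

```python
# Sketch: one "developer" LLM instance generates code, a second "reviewer"
# instance critiques it. call_llm is a placeholder for whatever LLM API or
# tool you use; the roles and prompts here are illustrative only.
def call_llm(system_role: str, prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM primed with `system_role` and return its reply."""
    raise NotImplementedError("wire this to your LLM provider of choice")

def generate_then_review(task: str) -> tuple[str, str]:
    """Produce code with one instance, then have a separate instance look for issues."""
    code = call_llm(
        "You are the developer agent.",
        f"Implement the following feature:\n{task}",
    )
    review = call_llm(
        "You are the reviewer agent. You did not write this code.",
        f"Understand this code and find the issues with it:\n{code}",
    )
    return code, review  # a human still decides what, if anything, gets committed
```

The point is not the plumbing but the habit: treat a second instance's review of the first instance's output as part of the normal feedback loop, just as you would a human code review.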

AI Assisted Coding: Treating AI Like a Junior Engineer - Onboarding Practices for AI Collaboration In this special episode, Sergey Sergyenko, CEO of Cybergizer, shares his practical framework for AI-assisted development built on transactional models, Git workflows, and architectural conventions. He explains why treating AI like a junior engineer, keeping commits atomic, and maintaining rollback strategies creates production-ready code rather than just prototypes. Vibecoding: An Automation Design Instrument "I would define Vibecoding as an automation design instrument. It's not a tool that can deliver end-to-end solution, but it's like a perfect set of helping hands for a person who knows what they need to do." Sergey positions vibecoding clearly: it's not magic, it's an automation design tool. The person using it must know what they need to accomplish—AI provides the helping hands to execute that vision faster. This framing sets expectations appropriately: AI speeds up development significantly, but it's not a silver bullet that works without guidance. The more you practice vibecoding, the better you understand its boundaries. Sergey's definition places vibecoding in the evolution of development tools: from scaffolding to co-pilots to agentic coding to vibecoding. Each step increases automation, but the human architect remains essential for providing direction, context, and validation. Pair Programming with the Machine "If you treat AI as a junior engineer, it's very easy to adopt it. Ah, okay, maybe we just use the old traditions, how we onboard juniors to the team, and let AI follow this step." One of Sergey's most practical insights is treating AI like a junior engineer joining your team. This mental model immediately clarifies roles and expectations. You wouldn't let a junior architect your system or write all your tests—so why let AI? Instead, apply existing onboarding practices: pair programming, code reviews, test-driven development, architectural guidance. This approach leverages Extreme Programming practices that have worked for decades. The junior engineer analogy helps teams understand that AI needs mentorship, clear requirements, and frequent validation. Just as you'd provide a junior with frameworks and conventions to follow, you constrain AI with established architectural patterns and framework conventions like Ruby on Rails. The Transactional Model: Atomic Commits and Rollback "When you're working with AI, the more atomic commits it delivers, more easy for you to kind of guide and navigate it through the process of development." Sergey's transactional approach transforms how developers work with AI. Instead of iterating endlessly when something goes wrong, commit frequently with atomic changes, then rollback and restart if validation fails. Each commit should be small, independent, and complete—like a feature flag you can toggle. The commit message includes the prompt sequence used to generate the code and rollback instructions. This approach makes the Git repository the context manager, not just the AI's memory. When you need to guide AI, you can reference specific commits and their context. This mirrors trunk-based development practices where teams commit directly to master with small, verified changes. The cost of rollback stays minimal because changes are atomic, making this strategy far more efficient than trying to fix broken implementations through iteration. 
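To make the transactional model concrete, here is a minimal sketch of one atomic step, assuming a placeholder ask_ai function standing in for whatever coding agent is in use; the pytest command and the commit-message wording are illustrative, not Sergey's exact setup.

```python
# Sketch of one "transaction": the AI applies a change, human-owned tests verify it,
# and the change is either committed (with its prompt recorded) or rolled back.
import subprocess

def run(cmd: list[str]) -> bool:
    """Run a command and report whether it exited cleanly."""
    return subprocess.run(cmd).returncode == 0

def ask_ai(prompt: str) -> None:
    """Placeholder: have your coding agent apply the change described by `prompt`."""
    raise NotImplementedError("wire this to your AI tool of choice")

def transactional_step(prompt: str) -> bool:
    ask_ai(prompt)                       # AI writes the implementation
    if run(["pytest", "-q"]):            # verification step owned by humans
        run(["git", "add", "-A"])
        run(["git", "commit",
             "-m", "AI-assisted atomic change",
             "-m", f"Prompt: {prompt}",  # keep the prompt in the commit body
             "-m", "Rollback: git revert this commit"])
        return True
    run(["git", "reset", "--hard"])      # validation failed: cheap rollback, refine the prompt
    return False
```

Because each step is atomic and carries its own prompt and rollback note, the Git history itself becomes the context trail described in the next section.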
Context Management: The Weak Point and the Solution "Managing context and keeping context is one of the weak points of today's coding agents, therefore we need to be very mindful in how we manage that context for the agent." Context management challenges current AI coding tools—they forget, lose thread, or misinterpret requirements over long sessions. Sergey's solution is embedding context within the commit history itself. Each commit links back to the specific reasoning behind that code: why it was accepted, what iterations it took, and how to undo it if needed. This creates a persistent context trail that survives beyond individual AI sessions. When starting new features, developers can reference previous commits and their context to guide the AI. The transactional model doesn't just provide rollback capability—it creates institutional memory that makes AI progressively more effective as the codebase grows. TDD 2.0: Humans Write Tests, AI Writes Code "I would never allow AI to write the test. I would do it by myself. Still, it can write the code." Sergey is adamant about roles: humans write tests, AI writes implementation code. This inverts traditional TDD slightly—instead of developers writing tests then code, they write tests and AI writes the code to pass them. Tests become executable requirements and prompts. This provides essential guardrails: AI can iterate on implementation until tests pass, but it can't redefine what "passing" means. The tests represent domain knowledge, business requirements, and validation criteria that only humans should control. Sergey envisions multi-agent systems where one agent writes code while another validates with tests, but critically, humans author the original test suite. This TDD 2.0 framework (a talk Sergey gave at the Global Agile Summit) creates a verification mechanism that prevents the biggest anti-pattern: coding without proper validation. The Two Cardinal Rules: Architecture and Verification "I would never allow AI to invent architecture. Writing AI agentic coding, Vibecoding, whatever coding—without proper verification and properly setting expectations of what you want to get as a result—that's the main mistake." Sergey identifies two non-negotiables. First, never let AI invent architecture. Use framework conventions (Rails, etc.) to constrain AI's choices. Leverage existing code generators and scaffolding. Provide explicit architectural guidelines in planning steps. Store iteration-specific instructions where AI can reference them. The framework becomes the guardrails that prevent AI from making structural decisions it's not equipped to make. Second, always verify AI output. Even if you don't want to look at code, you must validate that it meets requirements. This might be through tests, manual review, or automated checks—but skipping verification is the fundamental mistake. These two rules—human-defined architecture and mandatory verification—separate successful AI-assisted development from technical debt generation. Prototype vs. Production: Two Different Workflows "When you pair as an architect or a really senior engineer who can implement it by himself, but just wants to save time, you do the pair programming with AI, and the AI kind of ships a draft, and rapid prototype." Sergey distinguishes clearly between prototype and production development. For MVPs and rapid prototypes, a senior architect pairs with AI to create drafts quickly—this is where speed matters most. 
For production code, teams add more iterative testing and polishing after AI generates the initial implementation. The key is being explicit about which mode you're in. The biggest anti-pattern is treating prototype code as production-ready without the necessary validation and hardening steps. When building production systems, Sergey applies the full transactional model: atomic commits, comprehensive tests, architectural constraints, and rollback strategies. For prototypes, speed takes priority, but the architectural knowledge still comes from humans, not AI. The Future: AI Literacy as Mandatory "Being a software engineer and trying to get a new job, it's gonna be a mandatory requirement for you to understand how to use AI for coding. So it's not enough to just be a good engineer." Sergey sees AI-assisted coding literacy becoming as fundamental as Git proficiency. Future engineering jobs will require demonstrating effective AI collaboration, not just traditional coding skills. We're reaching good performance levels with AI models—now the challenge is learning to use them efficiently. This means frameworks and standardized patterns for AI-assisted development will emerge and consolidate. Approaches like AAID, SpecKit, and others represent early attempts to create these patterns. Sergey expects architectural patterns for AI-assisted development to standardize, similar to how design patterns emerged in object-oriented programming. The human remains the bottleneck—for domain knowledge, business requirements, and architectural guidance—but the implementation mechanics shift heavily toward AI collaboration. Resources for Practitioners "We are reaching a good performance level of AI models, and now we need to guide it to make it impactful. It's a great tool, now we need to understand how to make it impactful." Sergey recommends Obie Fernandez's work on "Patterns of Application Development Using AI," particularly valuable for Ruby and Rails developers but applicable broadly. He references Andrej Karpathy's original vibecoding post and emphasizes Extreme Programming practices as foundational. The tools he uses—Cursor and Claude Code—support custom planning steps and context management. But more important than tools is the mindset: we have powerful AI capabilities now, and the focus must shift to efficient usage patterns. This means experimenting with workflows, documenting what works, and sharing patterns with the community. Sergey himself shares case studies on LinkedIn and travels extensively speaking about these approaches, contributing to the collective learning happening in real-time. About Sergey Sergyenko Sergey is the CEO of Cybergizer, a dynamic software development agency with offices in Vilnius, Lithuania. Specializing in MVPs with zero cash requirements, Cybergizer offers top-tier CTO services and startup teams. Their tech stack includes Ruby, Rails, Elixir, and ReactJS. Sergey was also a featured speaker at the Global Agile Summit, and you can find his talk available in your membership area. If you are not a member, don't worry: you can get the 1-month trial and watch the whole conference. You can cancel at any time. You can link with Sergey Sergyenko on LinkedIn.

BONUS: Augmented AI Development - Software Engineering First, AI Second In this special episode, Dawid Dahl introduces Augmented AI Development (AAID)—a disciplined approach where professional developers augment their capabilities with AI while maintaining full architectural control. He explains why starting with software engineering fundamentals and adding AI where appropriate is the opposite of most frameworks, and why this approach produces production-grade software rather than technical debt. The AAID Philosophy: Don't Abandon Your Brain "Two of the fundamental developer principles for AAID are: first, don't abandon your brain. And the second is incremental steps." Dawid's Augmented AI Development framework stands in stark contrast to "vibecoding"—which he defines strictly as not caring about code at all, only results on screen. AAID is explicitly designed for professional developers who maintain full understanding and control of their systems. The framework is positioned on the furthest end of the spectrum from vibe coding, requiring developers to know their craft deeply. The two core principles—don't abandon your brain, work incrementally—reflect a philosophy that AI is a powerful collaborator, not a replacement for thinking. This approach recognizes that while 96% of Dawid's code is now written by AI, he remains the architect, constantly steering and verifying every step. In this segment we refer to Marcus Hammarberg's work and his book The Bungsu Story. Software Engineering First, AI Second: A Hill to Die On "You should start with software engineering wisdom, and then only add AI where it's actually appropriate. I think this is super, super important, and the entire foundation of this framework. This is a hill I will personally die on." What makes AAID fundamentally different from other AI-assisted development frameworks is its starting point. Most frameworks start with AI capabilities and try to add structure and best practices afterward. Dawid argues this is completely backwards. AAID begins with 50-60 years of proven software engineering wisdom—test-driven development, behavior-driven development, continuous delivery—and only then adds AI where it enhances the process. This isn't a minor philosophical difference; it's the foundation of producing maintainable, production-grade software. Dawid admits he's sometimes "manipulating developers to start using good, normal software engineering practices, but in this shiny AI box that feels very exciting and new." If the AI wrapper helps developers finally adopt TDD and BDD, he's fine with that. Why TDD is Non-Negotiable with AI "Every time I prompt an AI and it writes code for me, there is often at least one or two or three mistakes that will cause catastrophic mistakes down the line and make the software impossible to change." Test-driven development isn't just a nice-to-have in AAID—it's essential. Dawid has observed that AI consistently makes 2-3 mistakes per prompt that could have catastrophic consequences later. Without TDD's red-green-refactor cycle, these errors accumulate, making code increasingly difficult to change. TDD answers the question "Is my code technically correct?" while acceptance tests answer "Is the system releasable?" Both are needed for production-grade software. The refactor step is where 50-60 years of software engineering wisdom gets applied to make code maintainable. This matters because AAID isn't vibe coding—developers care deeply about code quality, not just visible results. 
Good software, as Dave Farley says, is software that's easy to change. Without TDD, AI-generated code becomes a maintenance nightmare. The Problem with "Prompt and Pray" Autonomous Agents "When I hear 'our AI can now code for over 30 hours straight without stopping,' I get very afraid. You fall asleep, and the next morning, the code is done. Maybe the tests are green. But what has it done in there? Imagine everything it does for 30 hours. This system will not work." Dawid sees two diverging paths for AI-assisted development's future. The first—autonomous agents working for hours or days without supervision—terrifies him. The marketing pitch sounds appealing: prompt the AI, go to sleep, wake up to completed features. But the reality is technical debt accumulation at scale. Imagine all the decisions, all the architectural choices, all the mistakes an AI makes over 30 hours of autonomous work. Dawid advocates for the stark contrast: working in extremely small increments with constant human steering, always aligned to specifications. His vision of the future isn't AI working alone—it's voice-controlled confirmations where he says "Yes, yes, no, yes" as AI proposes each tiny change. This aligns with DORA metrics showing that high-performing teams work in small batches with fast feedback loops. Prerequisites: Product Discovery Must Come First "Without Dave Farley, this framework would be totally different. I think he does everything right, basically. With this framework, I want to stand on the shoulders of giants and work on top of what has already been done." AAID explicitly requires product discovery and specification phases before AI-assisted coding begins. This is based on Dave Farley's product journey model, which shows how products move from idea to production. AAID starts at the "executable specifications" stage—it requires input specifications from prior discovery work. This separates specification creation (which Dawid is addressing in a separate "Dream Encoder" framework) from code execution. The prerequisite isn't arbitrary; it acknowledges that AI-assisted implementation works best when the problem is well-defined. This "standing on shoulders of giants" approach means AAID doesn't try to reinvent software engineering—it leverages decades of proven practices from TDD pioneers, BDD creators, and continuous delivery experts. What's Wrong with Other AI Frameworks "When the AI decides to check the box [in task lists], that means this is the definition of done. But how is the AI taking that decision? It's totally ad hoc. It's like going back to the 1980s: 'I wrote the code, I'm done.' But what does that mean? Nobody has any idea." Dawid is critical of current AI frameworks like SpecKit, pointing out fundamental flaws. They start with AI first and try to add structure later (backwards approach). They use task lists with checkboxes where AI decides when something is "done"—but without clear criteria, this becomes ad hoc decision-making reminiscent of 1980s development practices. These frameworks "vibecode the specs," not realizing there's a structured taxonomy to specifications that BDD already solved. Most concerning, some have removed testing as a "feature," treating it as optional. Dawid sees these frameworks as over-engineered, process-centric rather than developer-centric, often created by people who may not develop software themselves. AAID, in contrast, is built by a practicing developer solving real problems daily. 
Getting Started: Learn Fundamentals First "The first thing developers should do is learn the fundamentals. They should skip AI altogether and learn about BDD and TDD, just best practices. But when you know that, then you can look into a framework, maybe like mine." Dawid's advice for developers interested in AI-assisted coding might seem counterintuitive: start by learning fundamentals without AI. Master behavior-driven development, test-driven development, and software engineering best practices first. Only after understanding these foundations should developers explore frameworks like AAID. This isn't gatekeeping—it's recognizing that AI amplifies whatever approach developers bring. If they start with poor practices, AI will help them build unmaintainable systems faster. But if they start with solid fundamentals, AI becomes a powerful multiplier that lets them work at unprecedented speed while maintaining quality. AAID offers both a dense technical article on dev.to and a gentler game-like onboarding in the GitHub repo, meeting developers wherever they are in their journey. About Dawid Dahl Dawid is the creator of Augmented AI Development (AAID), a disciplined approach where developers augment their capabilities by integrating with AI, while maintaining full architectural control. Dawid is a software engineer at Umain, a product development agency. You can link with Dawid Dahl on LinkedIn and find the AAID framework on GitHub.

AI Assisted Coding: Swimming in AI - Managing Tech Debt in the Age of AI-Assisted Coding In this special episode, Lou Franco, veteran software engineer and author of "Swimming in Tech Debt," shares his practical approach to AI-assisted coding that produces the same amount of tech debt as traditional development—by reading every line of code. He explains the critical difference between vibecoding and AI-assisted coding, why commit-by-commit thinking matters, and how to reinvest productivity gains into code quality. Vibecoding vs. AI-Assisted Coding: Reading Code Matters "I read all the code that it outputs, so I need smaller steps of changes." Lou draws a clear distinction between vibecoding and his approach to AI-assisted coding. Vibecoding, in his definition, means not reading the code at all—just prompting, checking outputs, and prompting again. His method is fundamentally different: he reads every line of generated code before committing it. This isn't just about catching bugs; it's about maintaining architectural control and accountability. As Lou emphasizes, "A computer can't be held accountable, so a computer can never make decisions. A human always has to make decisions." This philosophy shapes his entire workflow—AI generates code quickly, but humans make the final call on what enters the repository. The distinction matters because it determines whether you're managing tech debt proactively or discovering it later when changes become difficult. The Moment of Shift: Staying in the Zone "It kept me in the zone. It saved so much time! Never having to look up what a function's arguments were... it just saved so much time." Lou's AI coding journey began in late 2022 with GitHub Copilot's free trial. He bought a subscription immediately after the trial ended because of one transformative benefit: staying in the flow state. The autocomplete functionality eliminated constant context switching to documentation, Stack Overflow searches, and function signature lookups. This wasn't about replacing thinking—it was about removing friction from implementation. Lou could maintain focus on the problem he was solving rather than getting derailed by syntax details. This experience shaped his understanding that AI's value lies in removing obstacles to productivity, not in replacing the developer's judgment about architecture and design. Thinking in Commits: The Right Size for AI Work "I think of prompts commit-by-commit. That's the size of the work I'm trying to do in a prompt." Lou's workflow centers on a simple principle: size your prompts to match what should be a single commit. This constraint provides multiple benefits. First, it keeps changes small enough to review thoroughly—if a commit is too big to review properly, the prompt was too ambitious. Second, it creates a clear commit history that tells a story about how the code evolved. Third, it enables easy rollback if something goes wrong. This commit-sized thinking mirrors good development practices that existed long before AI—small, focused changes that each accomplish one clear purpose. Lou uses inline prompting in Cursor (Command-K) for these localized changes because it keeps context tight: "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast." The Tech Debt Question: Same Code, Same Debt "Based on the way I've defined how I did it, it's exactly the same amount of tech debt that I would have done on my own... 
I'm faster and can make more code, but I invest some of that savings back into cleaning things up." As the author of "Swimming in Tech Debt," Lou brings a unique perspective to whether AI coding creates more technical debt. His answer: not if you're reading and reviewing everything. When you maintain the same quality standards—code review, architectural oversight, refactoring—you generate the same amount of debt as manual coding. The difference is speed. Lou gets productivity gains from AI, and he consciously reinvests a portion of those gains back into code quality through refactoring. This creates a virtuous cycle: faster development enables more time for cleanup, which maintains a codebase that's easier for both humans and AI to work with. The key insight is that tech debt isn't caused by AI—it's caused by skipping quality practices regardless of how code is generated. When Vibecoding Creates Debt: AI Resistance as a Symptom "When you start asking the AI to do things, and it can't do them, or it undoes other things while it's doing them... you're experiencing the tech debt a different way. You're trying to make changes that are on your roadmap, and you're getting resistance from making those changes." Lou identifies a fascinating pattern: tech debt from vibecoding (without code review) manifests as "AI resistance"—difficulty getting AI to make the changes you want. Instead of compile errors or brittle tests signaling problems, you experience AI struggling to understand your codebase, undoing changes while making new ones, or producing code with repetition and tight coupling. These are classic tech debt symptoms, just detected differently. The debt accumulates through architecture violations, lack of separation of concerns, and code that's hard to modify. Lou's point is profound: whether you notice debt through test failures or through AI confusion, the underlying problem is the same—code that's difficult to change. The solution remains consistent: maintain quality practices including code review, even when AI makes generation fast. Can AI Fix Tech Debt? Yes, With Guidance "You should have some acceptance criteria on the code... guide the LLM as to the level of code quality you want." Lou is optimistic but realistic about AI's ability to address existing tech debt. AI can definitely help with refactoring and adding tests—but only with human guidance on quality standards. You must specify what "good code" looks like: acceptance criteria, architectural patterns, quality thresholds. Sometimes copy/paste is faster than having AI regenerate code. Very convoluted codebases challenge both humans and AI, so some remediation should happen before bringing AI into the picture. The key is recognizing that AI amplifies your approach—if you have strong quality standards and communicate them clearly, AI accelerates improvement. If you lack quality standards, AI will generate code just as problematic as what already exists. Reinvesting Productivity Gains in Quality "I'm getting so much productivity out of it, that investing a little bit of that productivity back into refactoring is extremely good for another kind of productivity." Lou describes a critical strategy: don't consume all productivity gains as increased feature velocity. Reinvest some acceleration back into code quality through refactoring. This mirrors the refactor step in test-driven development—after getting code working, clean it up before moving on. AI makes this more attractive because the productivity gains are substantial.
If AI makes you 30% faster at implementation, reinvesting 10 percentage points of that gain in refactoring still leaves you 20% ahead while maintaining quality. Lou explicitly budgets this reinvestment, treating quality maintenance as a first-class activity rather than something that happens "when there's time." This discipline prevents the debt accumulation that makes future work progressively harder. The 100x Code Concern: Accountability Remains Human "Directionally, I think you're probably right... this thing is moving fast, we don't know. But I'm gonna always want to read it and approve it." When discussing concerns about AI generating 100x more code (and potentially 100x more tech debt), Lou acknowledges the risk while maintaining his position: he'll always read and approve code before it enters the repository. This isn't about slowing down unnecessarily—it's about maintaining accountability. Humans must make the decisions because only humans can be held accountable for those decisions. Lou sees potential for AI to improve by training on repository evolution rather than just end-state code, learning from commit history how codebases develop. But regardless of AI improvements, the human review step remains essential. The goal isn't to eliminate human involvement; it's to shift human focus from typing to thinking, reviewing, and making architectural decisions. Practical Workflow: Inline Prompting and Small Changes "Right here, don't go look at the rest of my files... Everything you need is right here. The context is right here... And it's fast." Lou's preferred tool is Cursor with inline prompting (Command-K), which allows him to work on specific code sections with tight context. This approach is fast because it limits what AI considers, reducing both latency and irrelevant changes. The workflow resembles pair programming: Lou knows what he wants, points AI at the specific location, AI generates the implementation, and Lou reviews before accepting. He also uses Claude Code for full codebase awareness when needed, but the inline approach dominates his daily work. The key principle is matching tool choice to context needs—use inline prompting for localized changes, full codebase tools when you need broader understanding. This thoughtful tool selection keeps development efficient while maintaining control. Resources and Community Lou recommends Steve Yegge's upcoming book on vibecoding. His website, LouFranco.com, provides additional resources. About Lou Franco Lou Franco is a veteran software engineer and author of Swimming in Tech Debt. With decades of experience at startups as well as at Trello and Atlassian, he's seen both sides of debt—as coder and leader. Today, he advises teams on engineering practices, helping them turn messy codebases into momentum. You can link with Lou Franco on LinkedIn and visit his website at LouFranco.com.
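To picture the kind of reinvestment Lou describes, here is a minimal, hypothetical sketch (invented for these notes, not taken from Lou's code): a commit-sized cleanup that removes duplicated formatting logic, with a small test serving as the acceptance criterion he suggests giving the LLM.

```python
# Hypothetical sketch of a commit-sized cleanup: duplicated formatting
# logic (a classic repetition smell) is extracted into one helper, and a
# small test acts as the acceptance criterion that behaviour is unchanged.
# None of this comes from Lou's real projects; names are invented.

def _format_money(amount: float) -> str:
    """Single source of truth after the refactor."""
    return f"${amount:,.2f}"


def invoice_line(description: str, amount: float) -> str:
    return f"{description}: {_format_money(amount)}"


def receipt_total(amount: float) -> str:
    return f"TOTAL {_format_money(amount)}"


def test_formatting_is_consistent_everywhere() -> None:
    # Acceptance criterion for the refactor commit: output is unchanged.
    assert invoice_line("Consulting", 1234.5) == "Consulting: $1,234.50"
    assert receipt_total(1234.5) == "TOTAL $1,234.50"


if __name__ == "__main__":
    test_formatting_is_consistent_everywhere()
    print("Refactor verified; small enough to review and commit.")
```

The point is the size: the change is small enough to read in full before committing, which is exactly the review discipline Lou keeps no matter who, or what, wrote the code.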

AI Assisted Coding: From Designer to Solo Developer - Building Production Apps with AI In this special episode, Elina Patjas shares her remarkable journey from designer to solo developer, building LexieLearn—an AI-powered study tool with 1,500+ users and paying customers—entirely through AI-assisted coding. She reveals the practical workflow, anti-patterns to avoid, and why the future of software might not need permanent apps at all. The Two-Week Transformation: From Idea to App Store "I did that, and I launched it to App Store, and I was like, okay, so… If I can do THIS! So, what else can I do? And this all happened within 2 weeks." Elina's transformation happened fast. As a designer frustrated with traditional software development where maybe 10% of your original vision gets executed, she discovered Cursor and everything changed. Within two weeks, she went from her first AI-assisted experiment to launching a complete app in the App Store. The moment that shifted everything was realizing that AI had fundamentally changed the paradigm from "writing code" to "building the product." This wasn't about learning to code—it was about finally being able to execute her vision 100% the way she wanted it, with immediate feedback through testing. Building LexieLearn: Solving Real Problems for Real Users "I got this request from a girl who was studying, and she said she would really appreciate to be able to iterate the study set... and I thought: "That's a brilliant idea! And I can execute that!" And the next morning, it was 9.15, I sent her a screen capture." Lexie emerged from Elina's frustration with ineffective study routines and gamified edtech that didn't actually help kids learn. She built an AI-powered study tool for kids aged 10-15 that turns handwritten notes into adaptive quizzes revealing knowledge gaps—private, ad-free, and subscription-based. What makes Lexie remarkable isn't just the technology, but the speed of iteration. When a user requested a feature, Elina designed and implemented it overnight, sending a screen capture by 9:15 AM the next morning. This kind of responsiveness—from customer feedback to working feature in hours—represents a fundamental shift in how software can be built. Today, Lexie has over 1,500 users with paying customers, proving that AI-assisted development isn't just for prototypes anymore. The Workflow: It's Not Just "Vibing" "I spend 30 minutes designing the whole workflow inside my head... all the UX interactions, the data flow, and the overall architectural decisions... so I spent a lot of time writing a really, really good spec. And then I gave that to Claude Code." Elina has mixed feelings about the term "vibecoding" because it suggests carelessness. Her actual workflow is highly disciplined. She spends significant time designing the complete workflow mentally—all UX interactions, data flow, and architectural decisions—then writes detailed specifications. She often collaborates with Claude to write these specs, treating the AI as a thinking partner. Once the spec is clear, she gives it to Claude Code and enters a dialogue mode: splitting work into smaller tasks, maintaining constant checkpoints, and validating every suggestion. She reads all the code Claude generates (32,000 lines client-side, 8,000 server-side) but doesn't write code herself anymore. This isn't lazy—it's a new kind of discipline focused on design, architecture, and clear communication rather than syntax. Reading Code vs. 
Writing Code: A New Skill Set "AI is able to write really good code, if you just know how to read it... But I do not write any code. I haven't written a single line of code in a long time." Elina's approach reveals an important insight: the skill shifts from writing code to reading and validating it. She treats Claude Code as a highly skilled companion that she needs to communicate with extremely well. This requires knowing "what good looks like"—her 15 years of experience as a designer gives her the judgment to evaluate what the AI produces. She maintains dialogue throughout development, using checkpoints to verify direction and clarify requirements. The fast feedback loop means when she fails to explain something clearly, she gets immediate feedback and can course-correct instantly. This is fundamentally different from traditional development where miscommunication might not surface until weeks later. The Anti-Pattern: Letting AI Run Rampant "You need to be really specific about what you want to do, and how you want to do it, and treat the AI as this highly skilled companion that you need to be able to communicate with." The biggest mistake Elina sees is treating AI like magic—giving vague instructions and expecting it to "just figure it out." This leads to chaos. Instead, developers need to be incredibly specific about requirements and approach, treating AI as a skilled partner who needs clear communication. The advantage is that the iteration loop is so fast that when you fail to explain something properly, you get feedback immediately and can clarify. This makes the learning curve steep but short. The key is understanding that AI amplifies your skills—if you don't know what good architecture looks like, AI won't magically create it for you. Breaking the Gatekeeping: One Person, Ten Jobs "I think that I can say that I am a walking example of what you can do, if you have the proper background, and you know what good looks like. You can do several things at a time. What used to require 10 people, at least, to build before." Elina sees herself as living proof that the gatekeeping around software development is breaking down. Someone with the right background and judgment can now do what previously required a team of ten people. She's passionate about others experiencing this same freedom—the ability to execute their vision without compromise, to respond to user feedback overnight, to build production-quality software solo. This isn't about replacing developers; it's about expanding who can build software and what's possible for small teams. For Elina, working with a traditional team would actually slow her down now—she'd spend more time explaining her vision than the team would save through parallel work. The Future: Intent-Based Software That Emerges and Disappears "The software gets built in an instant... it's going to this intent-based mode when we actually don't even need apps or software as we know them." Elina's vision for the future is radical: software that emerges when you need it and disappears when you don't. Instead of permanent apps, you'd have intent-based systems that generate solutions in the moment. This shifts software from a product you download and learn to a service that materializes around your needs. We're not there yet, but Elina sees the trajectory clearly. The speed at which she can now build and modify Lexie—overnight feature implementations, instant bug fixes, continuous evolution—hints at a future where software becomes fluid rather than fixed.
Getting Started: Just Do It "I think that the best resource is just your own frustration with some existing tools... Just open whatever tool you're using, is it Claude or ChatGPT and start interacting and discussing, getting into this mindset that you're exploring what you can do, and then just start doing." When asked about resources, Elina's advice is refreshingly direct: don't look for tutorials, just start. Let your frustration with existing tools drive you. Open Claude or ChatGPT and start exploring, treating it as a dialogue partner. Start building something you actually need. The learning happens through doing, not through courses. Her own journey proves this—she went from experimenting with Cursor to shipping Lexie to the App Store in two weeks, not because she found the perfect tutorial, but because she just started building. The tools are good enough now that the biggest barrier isn't technical knowledge—it's having the courage to start and the judgment to evaluate what you're building. About Elina Patjas Elina is building Lexie, an AI-powered study tool for kids aged 10–15. Frustrated by ineffective "read for exams" routines and gamified edtech fluff, she designed Lexie to turn handwritten notes into adaptive quizzes that reveal knowledge gaps—private, ad-free, and subscription-based. Lexie is learning, simplified. You can link with Elina Patjas on LinkedIn.
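As a rough, invented sketch of the "small tasks plus constant checkpoints" rhythm Elina describes (nothing here comes from Lexie's actual codebase, and every name is made up), each task carries its own acceptance check and a human reviews the generated code at every stop:

```python
# Invented sketch: each small task has an explicit acceptance check, and a
# human reviews the AI-generated code at every checkpoint before moving on.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    description: str
    done_when: Callable[[], bool]  # automated acceptance check for the task


def quiz_has_a_title() -> bool:
    # Placeholder check; in a real project this would exercise real code.
    return True


def missed_questions_are_flagged() -> bool:
    return True


TASKS = [
    Task("Generate a quiz title from the uploaded notes", quiz_has_a_title),
    Task("Flag questions the student keeps getting wrong", missed_questions_are_flagged),
]


def run_with_checkpoints(tasks: list[Task]) -> None:
    for task in tasks:
        # The AI assistant implements the task; the check runs; then the
        # human reads the generated code before the next task starts.
        assert task.done_when(), f"Checkpoint failed: {task.description}"
        print(f"Checkpoint passed, awaiting human review: {task.description}")


if __name__ == "__main__":
    run_with_checkpoints(TASKS)
```

The structure matters more than the code: the loop never runs long enough for the assistant to drift far from the spec before a human looks at what it produced.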

Sara Di Gregorio: Coaching Product Owners from Isolation to Collaboration Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Using User Story Mapping to Break Down PO Isolation "One of the key strengths is the ability to build a strong collaborative relationship with the Scrum team. We constantly exchange feedback, with the shared goal of improving both our collaborating and the way of working." - Sara Di Gregorio Sara considers herself fortunate—she currently works with Product Owners who exemplify what great collaboration looks like. One of their key strengths is the ability to build strong collaborative relationships with the Scrum team. They don't wait for sprint reviews to exchange feedback; instead, they constantly communicate with the shared goal of improving both collaboration and ways of working. These Product Owners involve the team early, using techniques like user story mapping after analysis phases to create open discussions around upcoming topics and help the team understand potential dependencies. They make themselves truly available—they observe daily stand-ups not as passive attendees but as engaged contributors. If the team needs five minutes to discuss something afterward, the Product Owner is ready. They attend Scrum events with genuine interest in working with the team, not just fulfilling an attendance requirement. They encourage open dialogue, even participating in retrospectives to understand how the team is working and where they can improve collaboration. What sets these Product Owners apart is their communication approach. They don't come in thinking they know everything or that they need to do everything alone. Their mindset is collaborative: "We're doing this together." They recognize that developers aren't just executors—they're users of the product, experts who can provide valuable perspectives. When Product Owners ask "Why do you want this?" and developers respond with "If we do it this way, we can be faster, and you can try your product sooner," that's when magic happens. Great Product Owners understand that strong communication skills and collaborative relationships create better products, better teams, and better outcomes for everyone involved. Self-reflection Question: How are your Product Owners involving the team early in discovery and analysis, and are they building collaborative relationships or just attending required events? The Bad Product Owner: The Isolated Expert Who Thinks Teams Just Execute "Sometimes they feel very comfortable in their subject, so they assume they know everything, and the team has only to execute what they asked for." - Sara Di Gregorio Sara has encountered Product Owners who embody the worst anti-pattern: they believe they don't need to interact with the development team because they're confident in their subject matter expertise. They assume they know everything, and the team's job is simply to execute what they ask for. These Product Owners work isolated from the development team, writing detailed user stories alone and skipping the interesting discussions with developers. They only involve the team when they think it's necessary, treating developers as order-takers rather than collaborators who could contribute valuable insights. 
The impact is significant—teams lose the opportunity to understand the "why" behind features, Product Owners miss perspectives that could improve the product, and collaboration becomes transactional instead of transformational. Sara's approach to addressing this anti-pattern is patient but deliberate. She creates space for dialogue and provides training for the Product Owner to help them understand how important it is to collaborate and cooperate with the team. She shows them the impact of including the team from the beginning of studying a feature. One powerful technique she uses is user story mapping workshops, bringing both the team and Product Owner together. The Product Owner explains what they want to deliver from their point of view, but then something crucial happens: the team asks lots of questions to understand "Why do you want this?"—not just "I will do it." Through this exercise, Sara watched Product Owners have profound realizations. They understood they could change their mindset by talking with developers, who often are users of the product and can offer perspectives like "If we do it this way, we can be faster, and you can try your product sooner." The workshop helps teams understand the big picture of what the Product Owner is asking for while helping the Product Owner reflect on what they're actually asking. It transforms the relationship from isolation to collaboration, from directive to dialogue, from assumption to shared understanding. In this segment, we refer to the User Story Mapping blog post by Jeff Patton. Self-reflection Question: Are your Product Owners writing user stories in isolation, or are they involving the team in discovery to create shared understanding and better solutions? [The Scrum Master Toolbox Podcast Recommends]

Sara Di Gregorio: How to Know Your Team Has Internalized Agile Values Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Scrum isn't just a process to follow, it's a way of working." - Sara Di Gregorio For Sara, success as a Scrum Master isn't measured by what the team delivers—it's measured by how they grow. She knows that if you facilitate team growth in communication and collaboration, delivery will naturally improve. The indicators she watches for are subtle but powerful. When teams come to her with specific requests outside the regular schedule—"Can we have 30 minutes to talk and reflect mid-sprint?"—she knows something has shifted. When teams want to reflect outside the retrospective cycle, it means they've internalized the value of continuous improvement, not just going through the motions. She listens for the word "goal" during sprint planning. When team members start their planning by talking about goals, she feels a surge of recognition: "Okay, for me, this is very, very, very important." Success shows up in unexpected places. One of her colleague's teams pushed back during a cross-team meeting, saying "We're going out of the timebox" and suggesting they move the discussion to a different time. That kind of proactive leadership and accountability signals maturity. It means the team isn't just attending Scrum events because they have to—they truly understand why each event matters and actively participate to make them valuable. When Sara first met a team, they asked if she wanted to change things. She said no. What she focuses on is how people improve and understand the process better. For her, it starts with the people—when people change and understand the value, that's when real changes happen in the company. It's about helping people feel good and be guided well, because when they're working well, that's when transformation becomes possible. As Sara reminds us, Scrum isn't just a process to follow—it's a way of working that teams must embrace, understand, and make their own. Self-reflection Question: Are your teams coming to you asking for reflection time outside scheduled events, and what does that tell you about how deeply they've internalized continuous improvement? Featured Retrospective Format for the Week: Unstructured Retrospective After facilitating many structured retrospectives, Sara started experimenting with an unstructured format that brought new energy to team reflection. Instead of using predefined frameworks, she brings white paper, sticky notes, and sharpies of different colors. She opens with a simple question: "Guys, what impacted you mostly during the last week? How do you feel today?" Sometimes she starts with data and metrics; other times, she begins with how the team is feeling. The key is creating open space for conversation rather than forcing it into a predetermined structure. What Sara discovered is remarkable: "They are more engaged, more open, and more present in the conversation, maybe because it was something new." Instead of the same structured format every time, the unstructured approach breaks the routine and creates space for true reflections that bring out something deeper and more meaningful. It allows people to express what's genuinely going on for them, not just what fits into a predefined template. 
Sara doesn't abandon structured formats entirely—she alternates between structured and unstructured to keep retrospectives fresh and engaging. She also recommends, if you work hybrid, trying to schedule unstructured retrospectives for days when the team is in the office together. The physical presence combined with the open format creates an environment where teams can be more vulnerable, more creative, and more honest about what's really happening. The unstructured retrospective isn't about chaos—it's about trusting the team to surface what matters most to them, with the Scrum Master providing light facilitation and space for authentic reflection. [The Scrum Master Toolbox Podcast Recommends]

Sara Di Gregorio: Facilitating Deeper Retrospectives—When to Step In and When to Step Back Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "When they start connecting and having an interesting discussion, I go to the corner, and I'm only trying to listen." - Sara Di Gregorio Sara faces a challenge that many Scrum Masters encounter: teams that want to discuss too many topics during retrospectives without going deep on any of them. The team had plenty to talk about, but conversations stayed surface-level, never reaching the insights that drive real improvement. Sara recognized that the aim of the retrospective isn't to talk about everything—it's to go deeper on topics the team genuinely cares about. So she started coaching teams to select just three main topics they wanted to discuss, helping them understand why prioritization matters and making explicit which topics are most important. But her real skill emerged in how she facilitated the discussions. When she saw communication starting to flow and team members becoming deeply connected to the topic, she moved to the corner and listened. She didn't abandon the team—she remained present, ready to help shy or quiet members speak up, watching the clock to respect timeboxes. But she understood that when teams connect authentically, the Scrum Master's job is to create space, not fill it. Sara learned to ask better questions too. Instead of repeatedly asking "Why? Why? Why?"—which can feel accusatory—she reformulated: "How did you approach it? What happens?" When teams started blaming other teams, she redirected: "What can we influence? What can we do from our side?" She used visual tools like white paper, sharpies, and sticky notes to help teams visualize their discussion steps and create structured moments for questions. Sometimes, when teams discussed complex technical topics beyond her understanding, she empowered them: "You are the main expert of this topic. Please, when someone sees that we're going out of topic or getting too detailed, raise your hand and help me bring the communication back to what we've chosen to talk about." This balance—knowing when to step in with structure and when to step back and listen—is what transforms retrospectives from checkbox events into genuine opportunities for team growth. Self-reflection Question: In your facilitation, are you creating space for deep team connection, or are you inadvertently filling the space that teams need to discover insights for themselves? [The Scrum Master Toolbox Podcast Recommends]

Sara Di Gregorio: Rebuilding Agile Team Connection in the Remote Work Era Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "The book helped me to shift from reacting to connecting, which completely changed the quality of conversation." - Sara Di Gregorio When COVID forced Sara's team into full remote work, she noticed something troubling—the team was losing real connection. Replicating in-office meetings online simply didn't work. People attended meetings but weren't truly present. The spontaneous coffee machine conversations that built relationships and surfaced important information had vanished. So Sara started experimenting. She introduced 5-minute chit-chat sessions at the start of every meeting: "Guys, how are you today? What happened yesterday?" She created "coffee all together" moments—10-minute virtual breaks where the team could drink coffee or have aperitivos together, sometimes three times per week. She established weekly feedback sessions every Friday morning—30 minutes to recap the week and understand what could improve. These weren't just social niceties; they were deliberate efforts to recreate the human connections that remote work had stripped away. Sara recognized that mechanized interactions—"here are the things I need you to do, let's talk next steps"—kill team dynamics. Teams need moments where they relate to each other as people, not just as functions. The experiments worked because they created space for genuine connection, allowing the team to maintain the trust and collaboration that makes effective teamwork possible, even when working remotely. In this episode, we refer to Nonviolent Communication concepts and practices. Self-reflection Question: How are you creating moments for your remote or hybrid team to connect as people, not just as colleagues executing tasks? Featured Book of the Week: Nonviolent Communication by Marshall Rosenberg Sara credits Nonviolent Communication by Marshall Rosenberg (translated into Italian as "Words are Windows, or They are Walls") as having a deep impact on her career. The book explores how to listen without judging, how to ask the right questions, and how to observe people to understand their real needs. But above all, it teaches how to communicate in a way that builds connection rather than creating barriers. For Sara, the book was remarkably practical—she didn't just read it, she experimented with the techniques afterward. She explains: "I think that without this mindset, it's easy to fall into reactive communication, trying to defend, justify, or give quick answers. But that often blocks real understanding." The book helped her shift from reacting to connecting, which completely changed the quality of her conversations. As a Scrum Master working with people every day—facilitating meetings, mediating conflicts, supporting teams—the way we communicate determines whether we open dialogue or close it. Sara found that taking time to reflect instead of giving quick answers transformed her ability to help teams discover dependencies, improve dialogue, and address communication issues. For anyone in the Scrum Master role, this book provides essential skills for building the kind of connection that makes true collaboration possible. In this segment, we also refer to the NVC episodes we have on the podcast.
Check those out to learn more about Nonviolent Communication. [The Scrum Master Toolbox Podcast Recommends]

Sara Di Gregorio: When Teams Lose Trust—How Scrum Masters Rebuild It One Small Change at a Time Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I continue to approach this situation with openness, positivity, and trust, because I truly believe that even the smallest changes can make a difference over time." - Sara Di Gregorio Sara faced one of the most challenging situations a Scrum Master can encounter—a team member who had lost all trust in change, creating a negative atmosphere that weighed heavily on the entire team. She remembers the heaviness on her shoulders, feeling personally responsible for the team's wellbeing. The negativity was palpable during every meeting, and it threatened to undermine the team's progress. But Sara refused to give up. She started experimenting with different approaches: one-to-one conversations to understand what was happening, bringing intentional energy to meetings, and trying new facilitation techniques in retrospectives. She added personal check-ins, asking "How are you today?" at the start of stand-ups, consciously bringing positive energy even on days when she didn't feel it herself. She discovered that listening—truly listening, not just hearing—means understanding how people feel, not just what they're saying. Sara learned that the energy you bring to interactions matters deeply. Starting the day with genuine interest, asking about the team's wellbeing, and even making small comments about the weather could create tiny shifts—a small smile that signaled something had changed. Her approach was rooted in persistence and belief: she continued approaching the situation with openness, positivity, and trust, knowing that even the smallest changes can make a difference over time. For Sara, reestablishing a good environment wasn't about quick fixes—it was about showing up every day with the right energy and never giving up on her team. Self-reflection Question: What energy are you bringing to your interactions with the team today, and how might that be shaping the team's atmosphere? [The Scrum Master Toolbox Podcast Recommends]

BONUS: Flawless Execution — Translating Fighter Pilot Precision to Business Results In this powerful conversation, former fighter pilot Christian "Boo" Boucousis reveals how military precision translates into agile business leadership. We explore the FLEX model (Plan-Brief-Execute-Debrief), the critical difference between control-based and awareness-based leadership, and why most organizations fail to truly embrace iterative thinking. From Cockpit to Boardroom: An Unexpected Journey "I learned over time that it doesn't matter what you do if you're always curious, and you're always intentional, and you're always asking questions." — Christian "Boo" Boucousis Christian's path from fighter pilot to leadership consultant wasn't planned—it was driven by necessity and curiosity. After 11 years as a fighter pilot (7 in Australia, 4 in the UK), an autoimmune condition ended his flying career at age 30. Rather than accepting a comfy job flying politicians around, he chose entrepreneurship. He moved to Afghanistan with a friend and built a reconstruction company that grew to a quarter billion dollars in four years. The secret? The debrief skills he learned as a fighter pilot. By constantly asking "What are you trying to achieve? How's it going? Why is there a gap?" he approached business with an agile mindset before he even knew what agile was. This curiosity-driven, question-focused approach became the foundation for everything that followed. The FLEX Model: Plan-Brief-Execute-Debrief "Agile and scrum were co-created by Jeff Sutherland, who was a fighter pilot, and its origins sit in the OODA loop and iteration. Which is why it's a circle." — Christian "Boo" Boucousis The FLEX model isn't new—fighter pilots have used this Plan-Brief-Execute-Debrief cycle for 60 years. It's the ultimate simple agile model, designed to help teams accelerate toward goals using the same accelerated learning curve the Air Force uses to train fighter pilots. The key insight: everything in this model is iterative, not linear. Every mission has a start, middle, and end, and every stage involves constant adaptation. Afterburner (the company Christian now leads as CEO) has worked with nearly 3,800 companies and 2.8 million people over 30 years, teaching this model. What's fascinating is that the DNA of agile is baked into fighter pilot thinking—Jeff Sutherland, co-creator of Scrum, wrote the foreword for Christian's book "The Afterburner Advantage" because they share the same roots in the OODA loop and iterative thinking. Why Iterative Thinking Doesn't Come Naturally "Iterative thinking is not a natural human model. Most of the time we learn from mistakes. We don't learn as a habit." — Christian "Boo" Boucousis Here's the hard truth: agile as a way of working is very different from the way human beings naturally think. Business leadership models still hark back to Frederick Winslow Taylor's 1911 book on scientific management—industrial-era leadership designed for building buildings, not creating software. Time is always linear (foundation, then structure, then finishing), and this shapes how we think about planning. Humans also tend to organize like villages with chiefs, warriors, and gatherers—hierarchical and political. Fighter pilots created a parallel system where politics exist outside missions, but during execution, personality clashes can't interfere. The challenge for business isn't the method—it's getting human minds to embrace iteration as a habit, not just a process they follow when forced.
Planning: Building Collective Consciousness, Not Task Lists "Planning isn't all about sequencing actions—that's not planning. That's the byproduct of planning, which is collectively agreeing what good looks like at the end." — Christian "Boo" Boucousis Most people plan in their head or in front of a spreadsheet by themselves. That's not planning—that's collecting thoughts. Real planning means bringing everyone on the team together to build collective consciousness about what's possible. The plan is always "the best idea based on what we know now." Once airborne, everything changes because the enemy doesn't cooperate with your plan. Planning is about the destination, not the work to get there. Think about airline pilots: they don't tell you about traffic delays on their commute or maintenance issues. They say "Welcome aboard, our destination is Amsterdam, there's weather on the way, we'll land 5 minutes early." That's a brief—just the effect on you based on all their work. Most business meetings waste 55 minutes on backstory and 5 minutes deciding to have another meeting. Fighter pilots focus entirely on: What are we trying to achieve? What might get in the way? Let's go. Briefing: The 25-Minute Focus Window "You need 25 minutes of focus before your brain really focuses on the task. You program your brain for the mission at hand." — Christian "Boo" Boucousis The brief is the moment between planning and execution when the plan is as accurate as it'll ever get. It's called "brief" for a reason—it's really short. The team checks that everyone understands the plan in today's context, accounting for last-minute changes (broken equipment, weather, personnel changes). Then comes the critical part: creating the mission bubble. From the brief until mission end, there are no distractions, no notifications. If someone tries to interrupt a fighter pilot walking to the jet, the response is clear: "I'm in my mission bubble. No distractions." This isn't optional—research shows it takes 25 minutes of uninterrupted focus before your brain truly locks onto a task. Yet most business leaders expect constant availability, with notifications pinging every few minutes. If you need everyone to have notifications on to run your business, you're doing a really bad job at planning. Execution: Awareness-Based Leadership vs. Control-Based Leadership "The reason we have so many meetings is because the leader is trying to control the situation and own all the awareness. It's not humanly possible to do that." — Christian "Boo" Boucousis During execution, fighter pilots fly the plan until it doesn't work anymore—then they adapt. A mission commander might lead 70 airplanes, but can't possibly track all 69 others. Instead, they create "gates"—checkpoints where everyone confirms they're in the right place within 10 seconds. They plan for chaos, creating awareness points where the team is generally on track or not. The key shift: from control-based leadership (the leader tries to control everything) to awareness-based leadership (the leader facilitates and listens for divergences). This includes "subordinated leadership"—any of the four pilots in a formation can take the lead if they have better awareness. If a wingman calls out a threat the leader doesn't see, the immediate response is "Press! You take the lead." This works because they planned for it and have criteria. 
Business teams profess to want this kind of agile collaboration, but struggle because they haven't invested in the planning and shared understanding that makes fluid leadership transitions possible. Abort Criteria: Knowing When to Stop "We have this concept called abort criteria. If certain criteria are hit, we abort the mission. I think that's a massive opportunity for business." — Christian "Boo" Boucousis There are degrees of things going wrong: a little bit, a medium amount, and everything going wrong. When everything's going wrong, fighter pilots stop and turn around—they don't keep pressing a bad situation. This "abort criteria" concept is massively underutilized in business. Too often, teams press bad situations, transparency disappears, people stop talking, and everyone goes into survival mode (protect myself, blame others). This never happens with fighter pilots. If something goes wrong, they take accountability and make the best decision. The most potent team size is four people: a leader, deputy leader, and two wingmen. This small team size with clear roles and shared abort criteria creates psychological safety to call out problems and adapt quickly. The Retrospective Mindset: Not Just a Ritual "A retrospective isn't a ritual. It's actually a way of thinking. It's a cognitive model. If you approached everything as a retrospective—what are we trying to achieve? How's it going? Why is it not going where we want? What's the one action to get back on track?" — Christian "Boo" Boucousis The debrief—the retrospective—is the most important part of fighter pilot culture translated into agile. It's not just a meeting you have at the end of a sprint. It's a mindset you apply to everything: projects, relationships, personal development. Christian introduces "Flawless Leadership" built on three M's: Method (agile practices), Mindset (growth mindset developed through acting iteratively), and Moments (understanding when to show up as a people leader vs. an impact leader). The biggest mistake in technology: teams do retrospectives internally but don't include the business. They get a brief from the business, build for two months, come back, and the business says "What is this? This isn't what I expected." If they'd had the business in every scrum, every iteration, trust would build naturally. Everyone involved in the mission must be part of the planning, briefing, executing, and debriefing. Leading in the Moment: Three Layers of Leadership "Your job as a scrum master, as a leader—it doesn't matter if you're leading a division of people—is to be aware. And you're only going to be aware by listening." — Christian "Boo" Boucousis Christian breaks leadership into three layers: People Leadership (political, emotional, dealing with personalities and overwhelm), Impact Leadership (the agile layer, results-driven, scientific), and Leading Now (the reactive, amygdala-driven panic response when things go wrong). The mistake: mixing these layers. Don't try to be a people leader during execution—that's not the time. But if you're really good at impact leadership (planning, breaking epics into stories, getting work done), you become high trust and high credibility. People leadership becomes easier because success eliminates excuses. During execution, watch for individual traits and blind spots. Use one-on-ones with a retrospective mindset: "What does good look like for you? How do we get to where you're not frustrated?" 
When leaders aren't present—checking phones and watches during meetings—they lose people. Your job as a leader is to turn your ears on, facilitate (not direct), and listen for divergences others don't see. The Technology-Business Disconnect "Every time you're having a scrum, every time you're coming together to talk about the product, just have the business there with you. It's easy." — Christian "Boo" Boucousis One of the biggest packages of work Afterburner does: technology teams ask them to help build trust with the business. The solution is shockingly simple—include the business in every scrum, every planning session, every retrospective. Agile is a tech-driven approach, creating a disconnect. Technology brings overwhelming information about how hard they're working and problems they've solved, but business doesn't care about the past. They care about the future: what are you delivering and when? During the Gulf War, the military scaled this fighter pilot model to large-scale planning. Fighter pilots work with marines, special forces, navy, CIA agents—everyone is part of the plan. If one person is missing from planning, execution falls apart. If someone on the ground doesn't know how an F-18 works, the jet is just expensive decoration. Planning is about learning what everyone else does and how to support them best—not announcing what you'll do and how you'll do it. High-Definition Destinations: Beyond Goals "Planning is all about the destination, not the work to get there. Think about when you hop on an airplane—the pilot doesn't tell you the whole backstory. They say 'Welcome aboard, our destination is Amsterdam, there's weather on the way, we'll land 5 minutes early.' All you want is the effect on you." — Christian "Boo" Boucousis Christian uses the term "High-Definition Destinations" rather than goals. The difference is clarity and vividness. When you board a plane, you don't get the pilot's commute story or maintenance details—you get the destination, obstacles, and estimated arrival. That's communication focused on effect, not process. Most business communication does the opposite: overwhelming context, backstory, and detail, with the destination buried somewhere in the middle. The brief should always be: Here's where we're going. Here's what might get in the way. Let's go. This communication style—focused on outcomes and effects rather than processes and problems—transforms how teams align and execute. It eliminates the noise and centers everyone on what actually matters: the destination. About Christian "Boo" Boucousis Christian "Boo" Boucousis is a former fighter pilot who now helps leaders navigate today's fast-moving world. As CEO of Afterburner and author of The Afterburner Advantage, he shares practical, people-centered tools for turning chaos into clarity, building trust, and delivering results without burning out. You can link with Christian "Boo" Boucousis on LinkedIn, visit Afterburner.com, check out his personal site at CallMeBoo.com, or interact with his AI tool at AIBoo.com.

Alidad Hamidi: When Product Owners Facilitate Vision Instead of Owning It Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Co-Creating Vision Through Discovery "The best product owner I worked with was not a product owner, but a project manager. And she didn't realize that she's acting as a product owner." - Alidad Hamidi The irony wasn't lost on Alidad. The best Product Owner he ever worked with didn't have "Product Owner" in her title—she was a project manager who didn't even realize she was acting in that capacity. The team was working on a strategic project worth millions, but confusion reigned about what value they were creating. Alidad planned an inception workshop to create alignment among stakeholders, marketing, operations, advisors, and the team. Twenty minutes into the session, Alidad asked a simple question: "How do we know the customer has this problem, and they're gonna pay for it?" Silence. No one knew. To her immense credit, the project manager didn't retreat or deflect. Instead, she jumped in: "What do we need to do?" Alidad suggested assumptions mapping, and two days later, the entire team and stakeholders gathered for the workshop. What happened next was magic. "She didn't become a proxy," Alidad emphasizes. She didn't say, "I'll go find out and come back to you." Instead, she brought everyone together—team, stakeholders, and customers—into the same room. The results were dramatic. The team was about to invest millions integrating with an external vendor. Through the assumption mapping workshop, they uncovered huge risks and realized customers didn't actually want that solution. "We need to pivot," she declared. Instead of the expensive integration, they developed educational modules and scripts for customer support and advisors. The team sat with advisors, listening to actual customer calls, creating solutions based on real needs rather than assumptions. The insight transformed not just the project but the project manager herself. She took these discovery practices across the entire organization, teaching everyone how to conduct proper discovery and fundamentally shifting the product development paradigm. One person, willing to facilitate rather than dictate, made this impact. "Product owner can facilitate creation of that [vision]," Alidad explains. "It's not just product owner or a team. It's the broader stakeholder and customer community that need to co-create that." Self-reflection Question: Are you facilitating the creation of vision with your stakeholders and customers, or are you becoming a proxy between the team and the real sources of insight? The Bad Product Owner: Creating Barriers Instead of Connections "He did the opposite, just creating barriers between the team and the environment." - Alidad Hamidi The Product Owner was new to the organization, technically skilled, and genuinely well-intentioned. The team was developing solutions for clinicians—complex healthcare work requiring deep domain understanding. Being new, the PO naturally leaned into his strength: technical expertise. He spent enormous amounts of time with the team, drilling into details, specifying exactly how everything should look, and giving the team ready-made solutions instead of problems to solve. Alidad kept telling him: "Mate, you need to spend more time with our stakeholder, you need to understand their perspective." 
But the PO didn't engage with users or stakeholders. He stayed comfortable in his technical wheelhouse, designing solutions in isolation. The results were predictable and painful. Halfway through work, the PO would realize, "Oh, we really don't need that." Or worse, the team would complete something and deliver it to crickets—no one used it because no one wanted it. "Great person, but it created a really bad dynamic," Alidad reflects. What should have been the PO's job—understanding the environment, stakeholder needs, and market trends—never happened. Instead of putting people in front of the environment to learn and adapt, he created barriers between the team and reality. Years later, Alidad's perspective has matured. He initially resented this PO but came to realize: "He was just being human, and he didn't have the right support and the environment for him." Sometimes people learn only after making mistakes. The coaching opportunity isn't to shame or blame but to focus on reflection from failures and supporting learning. Alidad encouraged forums with stakeholders where the PO and team could interact directly, seeing each other's work and constraints. The goal isn't perfection—it's creating conditions where Product Owners can connect teams to customers rather than standing between them. Self-reflection Question: What barriers might you be unintentionally creating between your team and the customers or stakeholders they need to serve, and what would it take to remove yourself from the middle? [The Scrum Master Toolbox Podcast Recommends]

Alidad Hamidi: Maximizing Human Potential as the Measure of Success Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Does my work lead into maximizing human potential? Maximizing the ability of the human to use their potential and freedom." - Alidad Hamidi Alidad calls himself a "recovering agility coach," and for good reason. For years, he struggled to define success in his work. As an enterprise coach, he plants seeds but never sees the trees grow. By the time transformation takes root, he's moved on to the next challenge. This distance from outcomes forced him to develop a more philosophical definition of success—one rooted not in deliverables or velocity charts, but in human potential and freedom. His measure of success centers on three interconnected questions. First, are customers happy with what the teams create? Notice he says "create," not "deliver"—a deliberate choice. "I really hate the term product delivery, because delivery means you have a feature factory," he explains. Creating value requires genuine interaction between people who solve problems and people who have problems, with zero distance between them. Second, what's the team's wellbeing? Do they have psychological safety, trust, and space for innovation? And third, is the team growing—and by "team," Alidad means the entire organization, not just the squad level. There's a fourth element he acknowledges: business sustainability. A bank could make customers ecstatic by giving away free money, but that's not viable long-term. The art lies in balance. "There's always a balance, sometimes one grows more than the other, and that's okay," Alidad notes. "As long as you have the awareness of why, and is that the right thing at the right time." This definition of success requires patience with the messy reality of organizations and faith that when humans have the freedom to use their full potential, both people and businesses thrive. Self-reflection Question: If you measured your success solely by whether you're maximizing human potential and freedom in your organization, what would you start doing differently tomorrow? Featured Retrospective Format for the Week: Six Intrinsic Motivators Alidad's favorite retrospective format comes from Open Systems Theory—the Six Intrinsic Motivators. This approach uses the OODA Loop philosophy: understanding reality and reflecting on actions. "Let's see what actually happened in reality, rather than our perception," Alidad explains. The format assesses six elements. Three are personal and can have too much or too little (rated -10 to +10): autonomy in decision making, continuous learning and feedback, and variety in work. Three are team environment factors that you can't have too much of (rated 0 to 10): mutual support and respect, meaningfulness (both socially useful work and seeing the whole product), and desirable futures (seeing development opportunities ahead). The process is elegantly simple. Bring the team together and ask each person to assess themselves on each criterion. When individuals share their numbers, fascinating conversations emerge. One person's 8 on autonomy might surprise a teammate who rated themselves a 3. These differences spark natural dialogue, and teams begin to balance and adjust organically. "If these six elements don't exist in the team, you can never have productive human teams," Alidad states.
He recommends running this at least every six months, or every three months for teams experiencing significant change. The beauty? No intervention from outside is needed—the team naturally self-organizes around what they discover together. [The Scrum Master Toolbox Podcast Recommends]

Alidad Hamidi: The Tax Agile Teams Pay for Organizational Standards Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "If you set targets for people, they will achieve the target, even if that means destroying the system around them." - W. Edwards Deming (quoted by Alidad) The tension is familiar to every Scrum Master working in large organizations: leadership demands standard operating models, flow time metrics below specific numbers, and reporting structures that fit neat boxes. Meanwhile, teams struggle under the weight of context-insensitive measurements that ignore the nuanced reality of their work. Alidad faces this challenge daily—creating balance between organizational demands and what teams actually need to transform and thrive. His approach starts with a simple but powerful question to leaders: "What is it that you want to achieve with these metrics?" Going beyond corporate-speak to have real conversations reveals that most leaders want outcomes, not just numbers. Alidad then involves teams in defining strategies to achieve those outcomes, framing metrics as "the tax we pay" or "the license to play." When teams understand the intent and participate in the strategy, something surprising happens—most metrics naturally improve because teams are delivering genuine value, customers are happy, and team dynamics are healthy. But context sensitivity remains critical. Alidad uses a vivid analogy: "If you apply lean metrics to Pixar Studio, you're gonna kill Pixar Studio. If you apply approaches of Pixar Studio to production line, they will go bankrupt in less than a month." Toyota's production line and Pixar's creative studio both need different approaches based on their context, team evolution, organizational maturity, and market environment. He advocates aligning teams to value delivery with end-to-end metrics rather than individual team measurements, recognizing that organizations operate in ecosystem models beyond simple product paradigms. Perhaps most important is patience. "Try to not drink coffee for a week," Alidad challenges. "Even for a single person, one practice, it's very hard to change your behavior. Imagine for organization of hundreds of thousands of people." Organizations move through learning cycles at their own rhythm. Our job isn't to force change at the speed we prefer—it's to take responsibility for our freedom and find ways to move the system, accepting that systems have their own speed. Self-reflection Question: Which metrics are you applying to your teams without considering their specific context, and what conversation do you need to have with leadership about the outcomes those metrics are meant to achieve? [The Scrum Master Toolbox Podcast Recommends]

Alidad Hamidi: When a Billion-Dollar Team Becomes Invisible Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Most of the times, it's not teams that are self-destructive or anything... Simple analogy is when a flower is not blooming, you don't fix the flower, you fix the soil." - Alidad Hamidi The team sat on the sidelines, maintaining a large portfolio of systems while the organization buzzed with excitement about replatforming initiatives. Nobody seemed to care about them. Morale was low. Whenever technical challenges arose, everyone pointed to the same person for help. Alidad tried the standard playbook—team-building activities, bonding exercises—but the impact was minimal. Something deeper was broken, and it wasn't the team. Then Alidad shifted his lens to systems thinking. Instead of fixing the flower, he examined the soil. Using the Viable Systems Model, he started with System 5—identity. Who were they? What value did they create? He worked with stakeholders to map the revenue impact of the systems this "forgotten" team maintained. The number shocked everyone: one billion dollars. These weren't legacy systems gathering dust—they were revenue-generating engines critical to the business. Alidad asked the team to run training series for each other, teaching colleagues about the ten different systems they managed. They created self-assessments of skill sets, making visible what had been invisible for too long. When Alidad made their value explicit to the organization, everything shifted. The team's perspective transformed. Later, when asked what made the difference, their answer was unanimous: "You made us visible. That's it." People have agency to change their environment, but sometimes they need someone to help the system see what it's been missing. Ninety percent of the time, when teams struggle, it's not the team that needs fixing—it's the soil they're planted in. Self-reflection Question: What teams in your organization are maintaining critical systems but remain invisible to leadership, and what would happen if you made their value explicit? Featured Book of the Week: More Time to Think by Nancy Kline Alidad describes Nancy Kline's More Time to Think as transformative for his facilitation practice. While many Scrum Masters focus on filling space and driving conversations forward, this book teaches the opposite—how to create space and listen deeply. "It teaches you to create a space, not to fill it," Alidad explains. The book explores how to design containers—meetings, workshops, retrospectives—that allow deeper thinking to emerge naturally among team members. For Alidad, the book answered a fundamental question: "How do you help people to find the solution among themselves?" It transformed his approach from facilitation to liberation, helping teams slow down so they can think more clearly. He first encountered the audiobook and was so impacted that he explored both "Time to Think" and this follow-up. While both are valuable, "More Time to Think" resonated more deeply with his coaching philosophy. The book pairs beautifully with systems thinking, helping Scrum Masters understand that creating the right conditions for thinking is often more powerful than providing the right answers. In this segment, we also refer to the book Confronting our freedom, by Peter Block et al. [The Scrum Master Toolbox Podcast Recommends]

Alidad Hamidi: When Silence Becomes Your Most Powerful Coaching Tool Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I purposefully designed a moment of silence. Staying in the anxiety of being silenced. Do not interrupt the team. Put the question there, let them come up with a solution. It is very hard. But very effective." - Alidad Hamidi Alidad walked into what seemed like a straightforward iteration manager role—a title some organizations use instead of Scrum Master. The organization was moving servers to the cloud, a transformation with massive implications. When leadership briefed him on the team's situation, they painted a clear picture of challenges ahead. Yet when Alidad asked the team directly about the transformation's impact, the response was uniform: "Nothing." But Alidad knew better. After networking with other teams, he discovered the truth—this team maintained software generating over half a billion dollars in revenue, and the transformation would fundamentally change their work. When he asked again, silence filled the room. Not the comfortable silence of reflection, but the heavy silence of fear and mistrust. Most facilitators would have filled that void with words, reassurance, or suggestions. Alidad did something different—he waited. And waited. For what felt like an eternity, probably a full minute, he stood in that uncomfortable silence, about to leave the room. Then something shifted. One team member picked up a pen. Then another joined in. Suddenly, the floodgates opened. Debates erupted, ideas flew, and the entire board filled with impacts and concerns. What made the difference? Before that pivotal moment, Alidad had invested in building relationships—taking the team to lunch, standing up for them when managers blamed them for support failures, showing through his actions that he genuinely cared. The team saw that he wasn't there to tell them how to do their jobs. They started to trust that this silence wasn't manipulation—it was genuine space for their voices. This moment taught Alidad a profound lesson about Open Systems Theory and Socio-Technical systems—sometimes the most powerful intervention is creating space and having the courage to hold it. Self-reflection Question: When was the last time you designed a moment of silence for your team, and what held you back from making it longer? [The Scrum Master Toolbox Podcast Recommends]

Karim Harbott: From Requirements Documents to Customer Obsession—Redefining the PO Role Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Strategic, Customer-Obsessed, and Vision-Driven "The PO role in the team is strategic. These POs focus on the customer, outcomes, and strategy. They're customer-obsessed and focus on the purpose and the why of the product." - Karim Harbott Karim believes the industry fundamentally misunderstands what a Product Owner should be. The great Product Owners he's seen are strategic thinkers who are obsessed with the customer. They don't just manage a backlog—they paint a vision for the product and help the entire team become customer-obsessed alongside them. These POs focus relentlessly on outcomes rather than outputs, asking "why are we building this?" before diving into "what should we build?" They understand the purpose of the product and communicate it compellingly. Karim references Amazon's "working backwards" approach, where Product Owners start with the customer experience they want to create and work backwards to figure out what needs to be built. Great POs also embrace the framework of Desirability (what customers want), Viability (what makes business sense), Feasibility (what's technically possible), and Usability (what's easy to use). While the PO owns desirability and viability, they collaborate closely with designers on usability and technical teams on feasibility. This is critical: software is a team sport, and great POs recognize that multiple roles share responsibility for delivery. Like David Marquet teaches, they empower the team to own decisions rather than dictating every detail. The result? Teams that understand the "why" and can innovate toward it autonomously. Self-reflection Question: Does your Product Owner paint a compelling vision that inspires the team, or do they primarily manage a list of tasks? The Bad Product Owner: The User Story Writer "The user story writer PO thinks it's their job to write full, long requirements documents, put it in JIRA, and assign it to the team. This is far away from what the PO role should be." - Karim Harbott The anti-pattern Karim sees most often is the "User Story Writer" Product Owner. These POs believe their job is to write detailed requirements documents, load them into JIRA, and assign them to the team. It's essentially waterfall disguised as Agile—treating user stories like mini-specifications rather than conversation starters. This approach completely misses the collaborative nature of product development. Instead of engaging the team in understanding customer needs and co-creating solutions, these POs hand down fully-formed requirements and expect the team to execute without question. The problem is that this removes the team's ownership and creativity. When POs act as the sole source of product knowledge, they become bottlenecks. The team can't make smart tradeoffs or innovate because they don't understand the underlying customer problems or business context. Using the Desirability-Viability-Feasibility-Usability framework, bad POs try to own all four dimensions themselves instead of recognizing that designers, developers, and other roles bring essential perspectives. The result is disengaged teams, slow delivery, and products that miss the mark because they were built to specifications rather than shaped by collaborative discovery. 
Software is a team sport—but the User Story Writer PO forgets to put the team on the field. Self-reflection Question: Is your Product Owner engaging the team in collaborative discovery, or just handing down requirements to be implemented? [The Scrum Master Toolbox Podcast Recommends]

Karim Harbott: Don't Scale Dysfunction—Fix the Team First Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "How do you define the success of a football manager? Football managers are successful when the team is successful. For Scrum Masters it is also like that. Is the team better than it was before?" - Karim Harbott Karim uses a powerful analogy to define success for Scrum Masters: think of yourself as a football manager. A football manager isn't successful because they personally score goals—they're successful when the team wins. The same principle applies to Scrum Masters. Success isn't measured by how many problems you solve or how busy you are. It's measured by whether the team is better than they were before. Are they more self-organizing? More effective? More aligned with organizational outcomes? This requires a mindset shift. Unlike sprinters competing individually, Scrum Masters succeed by enabling others to be better. Karim recommends involving the team when defining success—what does "better" mean to them? He also emphasizes linking the work of the team to organizational objectives. When teams understand how their efforts contribute to broader goals, they become more engaged and purposeful. But there's a critical warning: don't scale dysfunction! If a team isn't healthy, improving it is far more important than expanding your coaching to more teams. A successful Scrum Master creates teams that don't need constant intervention—teams that can manage themselves, make decisions, and deliver value consistently. Just like a great football manager builds a team that plays brilliantly even when the manager isn't on the field. Self-reflection Question: Is your team more capable and self-sufficient than they were six months ago, or have they become more dependent on you? Featured Retrospective Format for the Week: Systems Modeling with Causal Loop Diagrams "It shows how many aspects of the system there are and how things are interconnected. This helps us see something that we would not come up with in normal conversations." - Karim Harbott Karim recommends using systems modeling—specifically causal loop diagrams—as a retrospective format. This approach helps teams visualize the complex interconnections between different aspects of their work. Instead of just listing what went wrong or right, causal loop diagrams reveal how various elements influence each other, often uncovering hidden feedback loops and unintended consequences. The power of this format is that it surfaces insights the team wouldn't discover through normal conversation. Teams can then think of their retrospective actions as experiments—ways to interact with the system to test hypotheses about what will improve outcomes. This shifts retrospectives from complaint sessions to scientific inquiry, making them far more actionable and engaging. If your team is struggling with recurring issues or can't seem to break out of patterns, systems modeling might reveal the deeper dynamics at play. [The Scrum Master Toolbox Podcast Recommends]

Karim Harbott: You Can't Make a Flower Grow Faster—The Oblique Approach to Shaping Culture Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "How can I make a flower grow faster? Culture is a product of the behaviors of people in the system." - Karim Harbott For Karim, one of the biggest challenges—and enablers—in his current work is creating a supporting culture. After years of learning what doesn't work, he's come to understand that culture isn't something you can force or mandate. Like trying to make a flower grow faster by pulling on it, direct approaches to culture change often backfire. Instead, Karim uses what he calls the "oblique approach"—changing culture indirectly by adjusting the five levers: leadership behaviors, organizational structure, incentives, metrics, and systems. Leadership behaviors are particularly crucial. When leaders step back and encourage ownership rather than micromanaging, teams transform. Incentives have a huge impact on how teams work—align them poorly, and you'll get exactly the wrong behaviors. Karim references Team of Teams by General Stanley McChrystal, which demonstrates how changing organizational structure and leadership philosophy can unlock extraordinary performance. He also uses the Competing Values Framework to help leaders understand different cultural orientations and their tradeoffs. But the most important lesson? There are always unexpected consequences. Culture change requires patience, experimentation, and a willingness to observe how the system responds. You can't force a flower to grow, but you can create the conditions where it thrives. Self-reflection Question: Are you trying to change your organization's culture directly, or are you adjusting the conditions that shape behavior? [The Scrum Master Toolbox Podcast Recommends]

Karim Harbott: Why System Design Beats Individual Coaching Every Time Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "You can't change people, but you can change the system. Change the environment, not the people." - Karim Harbott Karim was coaching a distributed team that was struggling with defects appearing constantly during sprints. The developers and testers were at different sites, and communication seemed fractured. But Karim knew from experience that when teams are underperforming, the problem usually isn't the people—it's the system they're working in. He stepped back to examine the broader context, introducing behavior-driven development (BDD) and specification by example to improve clarity through shared, concrete scenarios. But the defects persisted. Then, almost by accident, Karim discovered the root cause: the developers and testers were employed by different companies. They had competing interests, different incentives, and fundamentally misaligned goals. No amount of coaching the individuals would fix a structural problem like that. It took months, but eventually the system changed—developers and testers were reorganized into unified teams from the same organization. Suddenly, the defects dropped dramatically. As Jocko Willink writes in Extreme Ownership, when something isn't working, look at the system first. Karim's experience proves that sometimes the most compassionate thing you can do is stop trying to fix people and start fixing the environment they work in. Self-reflection Question: When your team struggles, do you look at the people or at the system they're embedded in? Featured Book of the Week: Scaling Lean and Agile Development by Craig Larman and Bas Vodde "This book was absolute gold. The way it is written, and the tools they talk about went beyond what I was talking about back then. They introduced many concepts that I now use." - Karim Harbott Karim discovered Scaling Lean and Agile Development by accident, but it resonated with him immediately. The concepts Craig Larman and Bas Vodde introduced—particularly around LeSS (Large-Scale Scrum)—went far beyond the basics Karim had been working with. The book opened his eyes to system-level thinking at scale, showing how to maintain agility even as organizations grow. It's packed with practical tools and frameworks that Karim still uses today. For anyone working beyond a single team, this book provides the depth and nuance that most scaling frameworks gloss over. Also worth reading: User Stories Applied by Mike Cohn, another foundational text that shaped Karim's approach to working with teams. [The Scrum Master Toolbox Podcast Recommends]

Karim Harbott: The Day I Discovered I Was a Scrum Project Manager, Not a Scrum Master Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I was telling the team what to do, instead of helping the team to be better on their own. There's a lot more to being a Scrum Master than Agile—working with people is such a different skillset." - Karim Harbott Karim thought he had mastered Scrum. He had read the books, understood the framework, and was getting things done. His team seemed to be moving forward smoothly—until he stepped away for a few weeks. But, when he returned, everything had fallen apart. The team couldn't function without him constantly directing their work. That's when Karim realized he had fallen into one of the most common anti-patterns in Agile: the Scrum Project Manager. Instead of enabling his team to be more effective, he had become their bottleneck. Every decision flowed through him, every task needed his approval, and the team had learned to wait for his direction rather than taking ownership themselves. The wake-up call was brutal but necessary. Karim discovered that pushing project management responsibilities to the people doing the work—as David Marquet advocates—was far more powerful than being the hero who solves all problems. The real skill wasn't in telling people what to do; it was in creating an environment where they could figure it out themselves. Geoff Watts calls this servant leadership, and Karim learned it the hard way: a great Scrum Master makes themselves progressively less necessary, not more indispensable. Self-reflection Question: Are you enabling your team to be more effective, or have you become the person they can't function without? [The Scrum Master Toolbox Podcast Recommends]

BONUS: Organizations as Ecosystems — Understanding Complexity, Innovation, and the Three-Body Problem at Work In this fascinating conversation about complex adaptive systems, Simon Holzapfel helps us understand why traditional planning and control methods fail in knowledge work — and what we can do instead. Understanding Ecosystems vs. Systems "Complex adaptive systems are complex in nature and adaptive in that they evolve over time. That's different from a static system." — Simon Holzapfel Simon introduces the crucial distinction between mechanical systems and ecosystems. While mechanical systems are predictable and static, ecosystems — like teams and organizations — are complex, adaptive, and constantly evolving. The key difference lies in the interactions among team members, which create emergent properties that cannot be predicted by analyzing individuals separately. Managers often fall into the trap of focusing on individuals rather than the interactions between them, missing where the real magic happens. This is why understanding your organization as an ecosystem, not a machine, fundamentally changes how you lead. In this segment, we refer to the Stella systems modeling application. The Journey from Planning to Emergence "I used to come into class with a lesson plan — doop, doop, doop, minute by minute agenda. And then what I realized is that I would just completely squash those questions that would often emerge from the class." — Simon Holzapfel Simon shares his transformation from rigid classroom planning to embracing emergence. As a history and economics teacher for 10 years, he learned that over-planning kills the spontaneous insights that make learning powerful. The same principle applies to leadership: planning is essential, but over-planning wastes time and prevents novelty from emerging. The key is separating strategic planning (the "where" and "why") from tactical execution (the "how"), letting teams make local decisions while leaders focus on alignment with the bigger picture. "Innovation Arrives Stochastically" "Simply by noticing the locations where you've had your best ideas, we notice the stochasticness of arrival. Might be the shower, might be on a bike ride, might be sitting in traffic, might be at your desk — but often not." — Simon Holzapfel Simon unpacks the concept of stochastic emergence — the idea that innovation cannot be scheduled or predicted in advance. Stochastic means something is predictable over large datasets but not in any given moment. You know you'll have ideas if you give yourself time and space, but you can't predict when or where they'll arrive. This has profound implications for managers who try to control when and how innovation happens. Knowledge work is about creating things that haven't existed before, so emergence is what we rely on. Try to squash it with too much control, and it simply won't happen. In this segment, we refer to the Systems Innovation YouTube channel. The Three-Body Problem: A Metaphor for Teams "When you have three nonlinear functions working at the same time within a system, you have almost no ability to predict its future state beyond just some of the shortest time series data." — Simon Holzapfel Simon uses the three-body problem from physics as a powerful metaphor for organizational complexity. In physics, when you have three bodies (like planets) influencing each other, prediction becomes nearly impossible. The same is true in business — think of R&D, manufacturing, and sales as three interacting forces. 
The lesson: don't think you can master this complexity. Work with it. Understand it's a system. Most variability comes from the system itself, not from any individual person. This allows us to depersonalize problems — people aren't good or bad, systems can be improved. When teams understand this, they can relax and stop treating every unpredictable moment as an emergency. Coaching Leaders to Embrace Uncertainty "I'll start by trying to read their comfort level. I'll ask about their favorite teachers, their most hated teachers, and I'll really try to bring them back to moments in time that were pivotal in their own development." — Simon Holzapfel How do you help analytical, control-oriented leaders embrace complexity and emergence? Simon's approach is to build rapport first, then gently introduce concepts based on each leader's background. For technical people who prefer math, he'll discuss narrow tail distributions and fat tails. For humanities-oriented leaders, he uses narrative and storytelling. The goal is to get leaders to open up to possibilities without feeling diminished. He might suggest small experiments: "Hold your tongue once in a meeting" or "Ask questions instead of making statements." These incremental changes help managers realize they don't have to be superhuman problem-solvers who control everything. Giving the Board a Number: The Paradox of Prediction "Managers say we want scientific management, but they don't actually want that. They want predictive management." — Simon Holzapfel Simon addresses one of the biggest tensions in agile adoption: leaders who say "I just need to give the board a number" while also wanting innovation and adaptability. The paradox is clear — you cannot simultaneously be open to innovation and emergent possibilities while executing a predetermined plan with perfect accuracy. This is an artifact of management literature that promoted the "philosopher king" manager who knows everything. But markets are too movable, consumer tastes vary too much, and knowledge work is too complex for any single person to control. The burnout we see in leaders often comes from trying to achieve an impossible standard. In this segment, we refer to the episodes with David Marquet. Resources for Understanding Complexity "Eric Beinhocker's book called 'The Origin of Wealth' is wonderful. It's a very approachable and well-researched piece that shows where we've been and where we're going in this area." — Simon Holzapfel Simon recommends two key resources for anyone wanting to understand complexity and ecosystems. First, Eric Beinhocker's "The Origin of Wealth" explains how we developed flawed economic assumptions based on 19th-century Newtonian physics, and why we need to evolve our understanding. Second, the Systems Innovation YouTube channel offers brilliant short videos perfect for curious, open-minded managers. Simon suggests a practical approach: have someone on your team watch a video and share what they learned. This creates shared language around complexity and makes the concepts less personal and less threatening. The Path Forward: Systems Over Individuals "As a manager, our goal is to constantly evaluate the performance of the system, not the people. We can always put better systems in place. We can always improve existing systems. But you can't tell people what to do — it's not possible." 
— Simon Holzapfel The conversation concludes with a powerful insight from Deming's work: about 95% of a system's productivity is linked to the system itself, not individual performance. This reframes the manager's role entirely. Instead of trying to control people, focus on improving systems. Instead of treating burnout as individual failure, see it as information that something in the system isn't working. Organizations are ever-changing ecosystems with dynamic properties that can only be observed, never fully predicted. This requires a completely different way of thinking about management — one that embraces uncertainty, values emergence, and trusts teams to figure things out within clear strategic boundaries. Recommended Resources As recommended resources for further reading, Simon suggests: The Origin of Wealth, by Eric Beinhocker The Systems Innovation YouTube channel About Simon Holzapfel Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack, where he explores the intersection of economics, equality, and equanimity in the workplace. You can link with Simon Holzapfel on LinkedIn.

Darryl Wright: The PONO—Product Owners in Name Only and How They Destroy Teams Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Collaborative, Present, and Clear in Vision "She was collaborative, and that meant that she was present—the opposite of the MIA product owner. She came, and she sat with the team, and she worked with them side by side. Even when she was working on something different, she'd be there, she'd be available." - Darryl Wright Darryl shares an unusual story about one of the best Product Owners he's ever encountered—someone who had never even heard of Agile before taking the role. Working for a large consulting company with 170,000 staff worldwide, they faced a difficult project that nobody wanted to do. Darryl suggested running it as an Agile project, but the entire team had zero Agile experience. The only person who'd heard of Agile was a new graduate who'd studied it for one week at university—he became the Scrum Master. The executive sponsor, with her business acumen and stakeholder management skills, became the Product Owner despite having no idea what that meant. The results were extraordinary: an 18-month project completed in just over 7 months, and when asked about the experience, the team's highest feedback was how much fun they had working on what was supposed to be an awful, difficult project. Darryl attributes this success to mindset—the team was open and willing to try something new. The Product Owner brought critical skills to the role even without technical Agile knowledge: She was collaborative and present, sitting with the team and remaining available. She was decisive, making prioritization calls clearly so nobody was ever confused about priorities. She had excellent communication skills, articulating the vision with clarity that inspired the team. Her stakeholder management capabilities kept external pressures managed appropriately. And her business acumen meant she instantly understood conversations about value, time to market, and customer impact. Without formal training, she became an amazing Product Owner simply by being open, willing, and committed. As Darryl reflects, going from never having heard of the role to being an inspiring Product Owner in 7 months was incredible—one of the most successful projects and teams he's ever worked with. Self-reflection Question: If you had to choose between a Product Owner with deep Agile certification and no business skills, or one with strong business acumen and willingness to learn—which would serve your team better? The Bad Product Owner: The PONO—Product Owner in Name Only "The team never saw the PO until the showcase. And so, the team would come along with work that they deemed was finished, and the product owner had not seen it before because he wasn't around. So he would be seeing it for the first time in the showcase, and he would then accept or reject the work in the showcase, in front of other stakeholders." - Darryl Wright The most destructive anti-pattern Darryl has witnessed was the MIA—Missing in Action—Product Owner, someone who was a Product Owner in Name Only (PONO). This senior business person was too busy to spend time with the team, only appearing at the sprint showcase. The damage this created was systematic and crushing. 
The team would build work without Product Owner engagement, then present it in the showcase, hoping to take pride in their accomplishment. The PO, seeing it for the first time, would accept or reject the work in front of stakeholders. When he rejected it, the team was crushed, deflated, demoralized, and made to look like fools in front of senior leaders—essentially thrown under the bus. This pattern violates multiple principles of Agile teamwork. First, there's no feedback loop during the sprint, so the team works blind, hoping they're building the right thing. Second, the showcase becomes a validation ceremony rather than a collaborative feedback session, creating a dynamic of subservience rather than curiosity. The team seeks approval instead of engaging as explorers discovering what delivers customer value together. Third, the PO positions themselves as judge rather than coach—extracting themselves from responsibility for what's delivered while placing all blame on the team. As Deming's quote reminds us, "A leader is a coach, not a judge." When the PO takes the judge role, they're betraying fundamental Agile values. The responsibility for what the team delivers belongs strictly to the Product Owner; the team owns how it's delivered. When Darryl encounters this situation as a Scrum Master, he lobbies the PO intensely: "Even if you can't spare any other time for the entire sprint, give us just one hour the night before the showcase." That single hour lets the team preview what they'll present, getting early yes/no decisions so they never face public rejection. The basic building block of any Agile or Scrum way of working is an empowered team—and this anti-pattern strips all empowerment away. Self-reflection Question: Does your Product Owner show up as a coach who's building something together with the team, or as a judge who pronounces verdicts? How does that dynamic shape what your team is willing to try? [The Scrum Master Toolbox Podcast Recommends]

Darryl Wright: The Retrospective Formats That Actually Generate Change Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "My success is, how much have I helped the team achieve what they want? If what they want is to uplift quality, or to reduce their time to market, well then, my success is helping them achieve that." - Darryl Wright When Darryl enters a new organization, he's often told his success will be measured by percentage of Agile adoption or team maturity assessment scores. His response is direct: those are vanity metrics that show something for its own sake, not real success. True success requires multiple measures, carefully balanced to prevent gaming and to capture both the human and business dimensions of work. Darryl advocates balancing quantitative metrics like lead time and flow efficiency with qualitative measures like employee happiness and team self-assessment of productivity. He balances business outcomes like customer satisfaction and revenue with humanity metrics that track the team's journey toward high performance. Most importantly, Darryl believes his success metrics should be co-created with the team. If he's there to help the team, then success must be defined by how much he's helped them achieve what they want—not what he wants. When stakeholders fixate on output metrics like "more story points," Darryl uses a coaching approach to shift the conversation toward outcomes and value. "Would you be happy if your team checked off more boxes, but your customers were less happy?" he asks. This opens space for exploring what they really want to achieve and why it matters. The key is translating outputs into impacts, helping people articulate the business value or customer experience improvement they're actually seeking. As detailed in Better Value, Sooner, Safer, Happier by Jonathan Smart, comprehensive dashboards can track value across multiple domains simultaneously—balancing speed with quality, business success with humanity, quantitative data with qualitative experience. When done well, Agile teams can be highly productive, highly successful, and have high morale at the same time. We don't have to sacrifice one for the other—we can have both. Self-reflection Question: If your team could only track two metrics for the next sprint, what would they choose? What would you choose? And more importantly, whose choice should drive the selection? Featured Retrospective Format for the Week: The 4 L's and Three Little Pigs Darryl offers two favorites, tailored to different contexts. For learning environments, he loves the 4 L's retrospective: Liked, Learned, Lacked, and Longed For. This format creates space for teams to reflect on their learning journey, surfacing insights about what worked, what was missing, and what they aspire to moving forward. For operational environments, he recommends the Three Little Pigs retrospective, which brilliantly surfaces team strengths and weaknesses through a playful metaphor. The House of Straw represents things the team is weak at—nothing stands up, everything falls over. The House of Sticks is things they've put structure around, but it doesn't really work. The House of Bricks represents what they're solid on, what they can count on every time. Then comes the most important part: identifying the Big Bad Wolf—the scary thing, the elephant in the room that nobody wants to talk about but everyone knows is there. 
This format creates psychological safety to discuss the undiscussable. Darryl emphasizes two critical success factors for retrospectives: First, vary your formats. Teams that hear the same questions sprint after sprint will disengage, asking "why are you asking me again?" Different questions provide different lenses, generating fresh insights. Second, ensure actions come out of every retro. Nothing kills engagement faster than suggestions disappearing into the void. When people see their ideas lead to real changes, they'll eagerly return to the next retrospective. And don't forget to know your team—if they're sports fans, use sports retros; if they're scientists, use space exploration themes. Just don't make the mistake of running a "sailboat retro" with retiring mainframe engineers who'll ask if you think they're kindergarten children. For more retrospective formats, check out Retromat. [The Scrum Master Toolbox Podcast Recommends]

Darryl Wright: Why AI Adoption Will Fail Just Like Agile Did—Unless We Change Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "People are looking to AI to solve their problems, and they're doing it in the same way that they previously looked to Agile to solve their problems for them. The problem with that is, of course, that Agile doesn't solve problems for you. What it does is it shines a light on where your problems are." - Darryl Wright The world has gone AI crazy, and Darryl sees history repeating itself in troubling ways. Organizations are rushing to adopt AI with the same magical thinking they once applied to Agile—believing that simply implementing the tool will solve their fundamental problems. But just as Agile reveals problems rather than solving them, AI will do the same. Worse, AI threatens to accelerate existing problems: if you have too many things moving at once, AI won't fix that; it will amplify the chaos. If you automate a bad process, you've simply locked in badness at higher speed. As Darryl points out, when organizations don't understand that AI still requires them to do the hard work of problem-solving, they're setting themselves up for disillusionment, and in five or twenty years, we'll hear "AI is dead" just like we now hear "Agile is dead." The challenge for Scrum Masters and Agile coaches is profound: how do you help people with something they don't know they need? The answer lies in returning to first principles. Before adopting any tool—whether Agile or AI—organizations must clearly define the problem they're trying to solve. As Einstein reportedly said, "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and 5 minutes thinking about solutions." Value stream mapping becomes essential, allowing teams to visualize where humans and AI agents should operate, with clear handovers and explicit policies. The cognitive load on software teams will increase dramatically as AI generates more code, more options, and more complexity. Without clear thinking about problems and deliberate design of systems, AI adoption will follow the same disappointing trajectory as many Agile adoptions—lots of activity, little improvement, and eventually, blame directed at the tool rather than the system. Self-reflection Question: Are you adopting AI to solve a clearly defined problem, or because everyone else is doing it? If you automated your current process with AI, would you be locking in excellence or just accelerating dysfunction? [The Scrum Master Toolbox Podcast Recommends]

Darryl Wright: The Agile Team That Committed to Failure for 18 Sprints Straight Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "As Deming said, a bad system will beat a good person every time." - Darryl Wright Darryl was called in to help a struggling team at a large energy retailer. The symptoms seemed straightforward—low morale, poor relationships, and chronic underdelivery. But as he asked questions, a heartbreaking pattern emerged. The team had been "committing" to 110 story points per sprint while consistently delivering only 30. For 18 sprints. When Darryl asked why the team would commit to numbers they couldn't possibly achieve, the answer was devastating: "The business needs that much." This wasn't a problem of skill or capability—it was learned helplessness in action. Sprint after sprint, the team experienced failure, which made them more despondent and less effective, creating a vicious downward spiral. The business lost trust, the team lost confidence, and everyone was trapped in a system that guaranteed continued failure. When Darryl proposed the solution—committing to a realistic 30 points—he was told it was impossible because "the business needs 110 points." But the business wasn't getting 110 points anyway. They were getting broken promises, a demoralized team, stress leave, high churn, and a relationship built on distrust. Darryl couldn't change the system in that case, but the lesson was clear: adult people who manage their lives perfectly well outside work can become completely helpless inside work when the system repeatedly tells them their judgment doesn't matter. As Ricardo Semler observes in Maverick!, people leave their initiative at the door when organizations create systems that punish honest assessment and reward false promises. Self-reflection Question: Is your team committing to what they believe they can achieve, or to what they think someone else wants to hear? What would happen if they told the truth? Featured Book of the Week: Better Value, Sooner, Safer, Happier by Jonathan Smart Darryl describes Better Value, Sooner, Safer, Happier by Jonathan Smart as a treasure trove of real-life experience from people who have "had their sleeves rolled up in the trenches" for decades. What he loves most is the authenticity—the authors openly share not just their successes, but all the things that didn't work and why. One story that crystallizes the book's brilliance involves Barclays Bank and their ingenious approach to change adoption. Facing resistance from laggards who refused to adopt Agile improvements despite overwhelming social proof, they started publishing lists of "most improved teams." When resisters saw themselves at the bottom of these public lists, they called to complain—and were asked, "Did you have improvements we didn't know about?" The awkward pause would follow, then the inevitable question: "How do I get these improvements?" Demand creation at its finest. Darryl particularly appreciates that the authors present at conferences saying, "Let me tell you about all the things we've stuffed up in major agile transformations all around the world," bringing genuine humility and practical wisdom to every page. [The Scrum Master Toolbox Podcast Recommends]

Darryl Wright: When Enthusiasm Became Interference—Learning to Listen as a Scrum Master Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Wait stands for Why Am I Talking? Just ask yourself, wait, why am I talking? Is this the right moment for you to give an idea, or is this the right moment to just listen and let them have space to come up with ideas?" - Darryl Wright Early in his Agile journey, Darryl was evangelically enthusiastic about the principles and practices that had transformed his approach to leadership. He believed he had discovered the answers people were seeking, and his excitement manifested in a problematic pattern—he talked too much. Constantly jumping in with solutions, ideas, and suggestions, Darryl dominated conversations without realizing the impact. Then someone pulled him aside with a generous gift: "You're not really giving other people time to come up with ideas or take ownership of a problem." They introduced him to WAIT—Why Am I Talking?—an acronym that would fundamentally shift his coaching approach. This simple tool forced Darryl to pause before speaking and examine his motivations. Was he trying to prove himself? Did he think he knew better? Or was this genuinely the right moment to contribute? As he practiced this technique, Darryl discovered something profound: when he held space and waited, others would eventually step forward with insights and solutions. The concept of "small enough to try, safe enough to fail" became his framework for deciding when to intervene. Not every moment requires a Scrum Master to step in—sometimes the most powerful coaching happens in silence. By developing better skills in active listening and learning to hold space for others, Darryl transformed from someone who provided all the answers into someone who created the conditions for shared leadership to emerge. In this episode, we refer to David Marquet's episodes on the podcast for practical techniques on holding space and enabling leadership in others. Self-reflection Question: When was the last time you caught yourself jumping in with a solution before giving your team space to discover it themselves? What would happen if you waited just five more minutes? [The Scrum Master Toolbox Podcast Recommends]

Alex Sloley: How to Coach POs Who Treat Developers Like Mindless Robots In this episode, we refer to the previous episodes with David Marquet, author of Turn the Ship Around! The Great Product Owner: Trust and the Sprint Review That Changes Everything Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "She was like, oh my gosh, I've never seen this before, I didn't think it was possible. I just saw you deliver stuff in 2 weeks that I can actually use." - Alex Sloley In 2011, Alex worked with a client organization creating software for external companies. They needed a Product Owner for a new Agile team, and a representative from the client—who had never experienced Scrum—volunteered for the role. She was initially skeptical, having never witnessed or heard of this approach. Alex gently coached her through the process, asking her to trust the team and be patient. Then came the first Sprint Review, and everything changed. For the first time in her career, she saw working product delivered in just two weeks that she could actually touch, see, and use. Her head exploded with possibility. Even though it didn't have everything and wasn't perfect, it was remarkably good. That moment flipped a switch—she became fully engaged and transformed into a champion for Agile adoption, not just for the team but for the entire company. Alex reflects that she embodied all five Scrum values: focus (trusting the team's capacity), commitment (attending and engaging in all events), openness (giving the new approach a chance), respect (giving the team space to succeed), and courage (championing an unfamiliar process). The breakthrough wasn't about product ownership techniques—it was about creating an experience that reinforced Scrum values, allowing her to see the potential of a bright new future. Self-reflection Question: What practices, techniques, or processes can you implement that will naturally and automatically build the five Scrum values in your Product Owner? The Bad Product Owner: When Control Becomes Domination Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "They basically just owned the team. The developers on the team might as well have been mindless robots, because they were being assigned all the work, told how much work they could do in a sprint, what the work was." - Alex Sloley In 2018, while working with five interconnected Product Owners, Alex observed a Sprint Planning session that revealed a severe anti-pattern. One Product Owner completely controlled everything, telling the team exactly what work they would take into the Sprint, assigning specific work to specific people by name, and dictating precisely how they would implement solutions down to technical details like which functions and APIs to use. The developers were reduced to helpless executors with no autonomy, while the Scrum Master sat powerless in the corner. Alex wondered what caused this dynamic—was the PO a former project manager? Had the team broken trust in the past? What emotional baggage or trauma led to this situation? His approach started with building trust through coffee meetings and informal conversations, crucially viewing the PO not as the problem but as someone facing their own impediment. 
He reframed the challenge as solving the Product Owner's problem rather than fixing the Product Owner. When he asked, "Why do you have to do all this? Can't you trust the team?" and suggested the PO could relax if they delegated, the response was surprisingly positive. The PO was willing to step back once given permission and assurance. Alex's key lesson: think strategically about how to build trust and who needs to build trust with whom. Sometimes the person who appears to be creating problems is actually struggling under their own burden. Self-reflection Question: When you encounter a controlling Product Owner, do you approach the situation as "fixing" the PO or as "solving the PO's problem"? How might this reframe change your coaching strategy? [The Scrum Master Toolbox Podcast Recommends]

Alex Sloley: Why Sticky Notes Are Your Visualization Superpower in Retrospectives Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Like the smell, the vibe is something you feel. If you're having a successful impact on the organization or on teams as a Scrum Master, you can feel it, you can smell it. It's intangible." - Alex Sloley Alex introduces a compelling concept from Sumantra Ghoshal about "the smell of the workplace"—you can walk into an environment and immediately sense whether it smells like fresh strawberries and cream or a dumpster fire. In Australia, there's a cultural reference from the movie "The Castle" about "the vibe of the thing," and Alex emphasizes that as a successful Scrum Master, you can feel and smell when you're having an impact. While telling executives you're measuring "vibe" might be challenging, Alex shares three concrete ways he's measured success. The key insight is that success isn't always measurable in traditional ways, but successful Scrum Masters develop an intuition for sensing when their work is making a meaningful difference. Self-reflection Question: Can you articulate the "vibe" or "smell" of your current team or organization? What specific indicators tell you whether your Scrum Master work is truly making an impact beyond the metrics? Featured Retrospective Format for the Week: Sticky Notes for Everything Alex champions any retrospective format that includes sticky notes, calling them a "visualization superpower." With sticky notes, teams can visualize anything—the good, the bad, improvements, options, possibilities, and even metrics. They make information transparent, which is critical for the inspect-and-adapt cycle that forms the heart of Scrum. Alex emphasizes being strategic about visualization: identify a challenge, figure out how to make it visual, and then create experiments around that visualization. Once something becomes visible, magic happens because the team can see patterns they've never noticed before. You can use different sizes, colors, and positions to visualize constraints in the system, including interruptions, unplanned work, blocker clustering, impediments, and flow. This approach works not just in retrospectives but in planning, reviews, and daily scrums. The key principle is that you must have transparency in order to inspect, and you must inspect to adapt. Alex's practical advice: be strategic about what you choose to visualize, involve the team in determining how to make challenges visible, and watch as the transparency naturally leads to insights and improvement ideas. [The Scrum Master Toolbox Podcast Recommends]

Alex Sloley: Coaching Teams Trapped Between Agile Aspirations and Organizational Control Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "The team says, oh, we want to try to do things this way, and the org keeps coming back and saying stuff like, no, no, no, you can't do that, because in this org, we don't allow that." - Alex Sloley Alex shares his current challenge working with a 10-person pilot Scrum team within a 1,500-person organization that has never done Agile before. While the team appears open-minded and eager to embrace agile ways of working, the organization continuously creates impediments by dictating how the team must estimate, break down work, and operate. Management tells them "the right way" to do everything, from estimation techniques to role-based work assignments, even implementing RACI matrices that restrict who can do what type of work. Half the team has been with the organization for six months or less, making it comfortable to simply defer to authority and follow organizational rules. Through coaching conversations, Alex explores whether the team might be falling into learned helplessness or simply finding comfort in being told what to do—both positions that avoid accountability. His experimental approach includes designing retrospective questions to help the team reflect on what they believe they're empowered to do versus what management dictates, and potentially using delegation cards to facilitate conversations about decision-making authority. Alex's key insight is recognizing that teams may step back from empowerment either out of fear or comfort, and identifying which dynamic is at play requires careful, small experiments that create safe spaces for honest dialogue. Self-reflection Question: When your team defers to organizational authority, are they operating from learned helplessness, comfort in avoiding accountability, or genuine respect for hierarchy? How can you design experiments to uncover the real dynamic at play? [The Scrum Master Toolbox Podcast Recommends]

Alex Sloley: When Toxic Leadership Creates Teams That Self-Destruct Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "They would take notes at every team meeting, so that later on they could argue with team members about what they committed to, and what they said in meetings." - Alex Sloley Alex recounts working with a small team where a project manager created such a toxic environment that one new hire quit after just eight hours on the job. This PM would belittle team members publicly, take detailed notes to use as weapons in contract negotiations, and dominate the team through intimidation. The situation became so severe that one team member sent an email that sounded like a suicide note. When the PM criticized Alex's "slide deck velocity," comparing four slides per 15 minutes to Alex's one, he realized the environment was beyond salvaging. Despite coaching the team and attempting to introduce Scrum values, Alex ultimately concluded that management was encouraging this behavior as a control mechanism. The organization lacked trust in the team, creating learned helplessness where team members became submissive and unable to resist. Sometimes, the most important lesson for a Scrum Master is recognizing when a system is too toxic to change and having the courage to walk away. Alex emphasizes that respect—one of the core Scrum values—was completely absent, making any meaningful transformation impossible. In this segment, we talk about “learned helplessness”. Self-reflection Question: How do you recognize when a toxic environment is being actively encouraged by the system rather than caused by individual behavior? What are the signs that it's time to exit rather than continue fighting? Featured Book of the Week: The Goal by Eliyahu M. Goldratt Alex describes his complex relationship with The Goal by Goldratt—it both inspires and worries him. He struggles with the text because the concepts are so deep and meaningful that he's never quite sure he's fully understood everything Goldratt was trying to convey. The book was difficult to read, taking him four times longer than other agile-related books, and he had to reread entire sections multiple times. Despite the challenge, the concepts around Theory of Constraints and systems thinking have stayed with him for years. Alex worries late at night that he might have missed something important in the book. He also mentions reading The Scrum Guide at least once a week, finding new tidbits each time and reflecting on why specific segments say what they say. Both books share a common thread—the text that isn't in the text—requiring readers to dig deeper into the underlying principles and meanings rather than just the surface content. [The Scrum Master Toolbox Podcast Recommends]

Alex Sloley: The Sprint Planning That Wouldn't End - A Timeboxing Failure Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Although I knew about the steps of sprint planning, what I didn't really understand was the box of time versus the box of scope." - Alex Sloley Alex shares a critical learning moment from his first team as a Scrum Master. After six months in the role, during an eight-hour sprint planning session for a four-week sprint, he successfully completed the "what" portion but ran out of time before addressing "how." Rather than respecting the timebox, Alex forced the team to continue planning for another four hours the next day—blowing the timebox by 50%. This experience taught him a fundamental lesson: the difference between scope-boxing and timeboxing. In waterfall, we try to control scope while time slips away. In Scrum, we fix time and let scope adjust. Alex emphasizes that timeboxing isn't just about keeping meetings short—it's about limiting work in process and maintaining focus. His practical tip: use visible timers to train yourself and your teams to respect timeboxes. This mindset shift from controlling scope to respecting time remains one of the most important lessons for Scrum Masters. Self-reflection Question: How often do you prioritize completing a planned agenda over respecting the timebox? What message does this send to your team about the values you're reinforcing? [The Scrum Master Toolbox Podcast Recommends]

BONUS: The Evolution of Agile - From Project Management to Adaptive Intelligence, With Mario Aiello In this BONUS episode, we explore the remarkable journey of Mario Aiello, a veteran agility thinker who has witnessed and shaped the evolution of Agile from its earliest days. Now freshly retired, Mario shares decades of hard-won insights about what works, what doesn't, and where Agile is headed next. This conversation challenges conventional thinking about methodologies, certifications, and what it truly means to be an Agile coach in complex environments. The Early Days: Agilizing Before Agile Had a Name "I came from project management and project management was, for me, was not working. I used to be a wishful liar, basically, because I used to manipulate reports in such a way that would please the listener. I knew it was bullshit." Mario's journey into Agile began around 2001 at Sun Microsystems, where he was already experimenting with iterative approaches while the rest of the world was still firmly planted in traditional project management. Working in Palo Alto, he encountered early adopters discussing Extreme Programming and had an "aha moment" - realizing that concepts like short iterations, feedback loops, and learning could rescue him from the unsustainable madness of traditional project management. He began incorporating these ideas into his work with PRINCE2, calling stages "iterations" and making them as short as possible. His simple agile approach focused on: work on the most important thing first, finish it, then move to the next one, cooperate with each other, and continuously improve. The Trajectory of Agile: From Values to Mechanisms "When the craze of methodologies came about, I started questioning the commercialization and monetization of methodologies. That's where things started to get a little bit complicated because the general focus drifted from values and principles to mechanisms and metrics." Mario describes witnessing three distinct phases in Agile's evolution. The early days were authentic - software developers speaking from the heart about genuine needs for new ways of working. The Agile Manifesto put important truths in front of everyone. However, as methodologies became commercialized, the focus shifted dangerously away from the core values and principles toward prescriptive mechanisms, metrics, and ceremonies. Mario emphasizes that when you focus on values and principles, you discover the purpose behind changing your ways of working. When you focus only on mechanics, you end up just doing things without real purpose - and that's when Agile became a noun, with people trying to "be agile" instead of achieving agility. He's clear that he's not against methodologies like Scrum, XP, SAFe, or LeSS - but rather against their mindless application without understanding the essence behind them. Making Sense Before Methodology: The Four-Fit Framework "Agile for me has to be fit for purpose, fit for context, fit for practice, and I even include a fourth dimension - fit for improvement." Rather than jumping straight to methodology selection, Mario advocates for a sense-making approach. First, understand your purpose - why do you want Agile? Then examine your context - where do you live, how does your company work? Only after making sense of the gap between your current state and where the values and principles suggest you should be, should you choose a methodology. 
This might mean Scrum for complex environments, or perhaps a flow-based approach for more predictable work, or creating your own hybrid. The key insight is that anyone who understands Agile's principles and values is free to create their own approach - it's fundamentally about plan, do, inspect, and adapt. Learning Through Failure: Context is Paramount "I failed more often than I won. That teaches you - being brave enough to say I failed, I learned, I move on because I'm going to use it better next time." Mario shares pivotal learning moments from his career, including an early attempt to "agilize PRINCE2" in a command-and-control startup environment. While not an ultimate success, this battle taught him that context is paramount and cannot be ignored. You must start by understanding how things are done today - identifying what's good (keep doing it), what's bad (try to improve it), and what's ugly (eradicate it to the extent possible). This lesson shaped his next engagement at a 300-person organization, where he spent nearly five months preparing the organizational context before even introducing Scrum. He started with "simple agile" practices, then took a systems approach to the entire delivery system. A Systems Approach: From Idea to Cash "From the moment sales and marketing people get brilliant ideas they want built, until the team delivers them into production and supports them - all that is a system. You cannot have different parts finger-pointing." Mario challenges the common narrow view of software development systems. Rather than focusing only on prioritization, development, and testing, he advocates for considering everything that influences delivery - from conception through to cash. His approach involved reorganizing an entire office floor, moving away from functional silos (sales here, marketing there, development over there) to value stream-based organization around products. Everyone involved in making work happen, including security, sales, product design, and client understanding, is part of the system. In one transformation, he shifted security from being gatekeepers at the end of the line to strategic partners from day one, embedding security throughout the entire value stream. This comprehensive systems thinking happened before formal Scrum training began. Beyond the Job Description: What Can an Agile Coach Really Do? "I said to some people, I'm not a coach. I'm just somebody that happens to have experience. How can I give something that can help and maybe influence the system?" Mario admits he doesn't qualify as a coach by traditional standards - he has no formal coaching qualifications. His coaching approach comes from decades of Rugby experience and focuses on establishing relationships with teams, understanding where they're going, and helping them make sense of their path forward. He emphasizes adaptive intelligence - the probe, sense, respond cycle. Rather than trying to change everything at once and capsizing the boat, he advocates for challenging one behavior at a time, starting with the most important, encouraging adaptation, and probing quickly to check for impact of specific changes. His role became inviting people to think outside the box, beyond the rigidity of their training and certifications, helping individuals and teams who could then influence the broader system even when organizational change seemed impossible. 
The Future: Adaptive Intelligence and Making Room for Agile "I'm using a lot of adaptive intelligence these days - probe, sense, respond, learn and adapt. That sequence will take people places." Looking ahead, Mario believes the valuable core of Agile - its values and principles - will remain, but the way we apply them must evolve. He advocates for adaptive intelligence approaches that emphasize sense-making and continuous learning rather than rigid adherence to frameworks. As he enters retirement, Mario is determined to make room for Agile in his new life, seeking ways to give back to the community through his blog, his new Substack "Adaptive Ways," and by inviting others to think differently. He's exploring a "pay as you wish" approach to sharing his experience, recognizing that while he may not be a traditional coach or social media expert, his decades of real-world experience - with its failures and successes - holds value for those still navigating the complexity of organizational change. About Mario Aiello Retired from full-time work, Mario is an agility thinker shaped by real-world complexity, not dogma. With decades in VUCA environments, he blends strategic clarity, emotional intelligence, and creative resilience. He designs context-driven agility, guiding teams and leaders beyond frameworks toward genuine value, adaptive systems, and meaningful transformation. You can link with Mario Aiello on LinkedIn, visit his website at Agile Ways.

Renee Troughton: Analytics From Day One and Four Other Principles of Great POs Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Product owners who think about their products as just a backlog that I prioritize, and I get some detailed requirements from stakeholders, and I give that to the team... that's not empowering the team. And it's probably leading you to building the wrong thing, just faster." The Bad Product Owner: The Backlog Manager Without Vision Renee describes a pattern of Product Owners who don't understand product management—they lack roadmaps and strategy, and they never speak to customers. These POs focus solely on backlogs, prioritizing detailed requirements from stakeholders without testing hypotheses or learning about their market. Taking an empathetic view, Renee notes these individuals may have fallen into the role without passion, never seeing what excellence looks like, and struggling with extreme time poverty. Product ownership is one of the hardest roles from a time perspective—dealing with legislative requirements, compliance, risk, fail-and-fix work, and constant incoming demands. Drowning in day-to-day urgency, they lack breathing space for strategic thinking. These POs also struggle with vulnerability, feeling they should have all the answers as leaders, making it difficult to admit knowledge gaps. Without organizational safety to fail, they can't demonstrate the confidence balanced with humility needed to test hypotheses and potentially be wrong. The result is building the wrong thing faster, without empowering teams or creating real value. Self-reflection Question: Are you managing your Product Owners' workload and supporting their strategic thinking time, or are you allowing them to drown in tactical work that prevents them from truly leading their products? The Great Product Owner: Analytics from Day One and Market Awareness "They really iterated, I think, 5 key principles quite consistently... the one thing that did really shape my thinking at that time was... Analytics from day one." Renee celebrates a Chief Product Owner who led 13 teams with extraordinary effectiveness. This PO consistently communicated five key principles, with "analytics from day one" being paramount—emphasizing the critical need to know immediately whether new features work and to understand customer behavior from launch. This PO demonstrated deep market awareness, regularly spending time in Silicon Valley, understanding innovation trends and where the industry was heading. They maintained a clear product vision and could powerfully sell the dream to stakeholders. Perhaps most impressively, they brought urgency during a competitive "space race" situation when a former leader left with intellectual property to build a competing product. Despite this pressure, they never allowed compromise on quality—rallying teams with mission and purpose while maintaining standards. This combination of strategic vision, market knowledge, data-driven decision-making, and balanced urgency created an environment where teams delivered excellence under competitive pressure. [The Scrum Master Toolbox Podcast Recommends]

Renee Troughton: From Lower-Order to Higher-Order Values in Scrum Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "If you, as a senior leader, demonstrate vulnerability, it creates real magic in an organization where others can open up and be their authentic self." Renee defines success for Scrum Masters through deeply human values: integrity, holding her truth, being compassionately authentic, caring, open, honest, listening, and vulnerable. She emphasizes that vulnerability as a senior leader creates transformative magic in organizations, allowing others to bring their authentic selves to work. Drawing on Byron Katie's "Loving What Is" and Frederick Laloux's "Reinventing Organizations," Renee explains that many corporate organizations focus on lower-order values like results and performance, while more autonomous organizations prioritize higher-order values rooted in the heart. When having conversations with people, Renee connects with them as human beings first—not rushing to business if someone is struggling personally. Success means seeing people completely for who they are, not as resources to be changed or leveraged. The foundation for collaboration, empowerment, and autonomy is trust, respect, and safety. Renee emphasizes that without these fundamental values in place, everything else implodes. She demonstrates how vulnerability, active listening, and accepting people where they are create the fertile ground for successful teams and organizations. Self-reflection Question: Do you demonstrate vulnerability as a leader, creating space for others to bring their authentic selves to work, or do you hide behind a professional facade that prevents genuine human connection? Featured Retrospective Format for the Week: Themed Retrospectives (Monopoly, Sports, Current Events) "It gave a freshness to it. And it gave almost like a livelihood or a joyfulness to it as an activity as well." Renee recommends themed retrospectives like the Monopoly Retro or sports-themed formats that use current events or cultural references (aka metaphor retrospectives). While working at a consultancy, they would theme retrospectives every week around different topics—football, news events, or various scenarios—using collages of pictures showing different emotions (upset, angry, happy). Team members would identify with feelings and reframe their week within the theme's context, such as "it was a rough game" or "we didn't score enough goals." The brilliance of this approach is covering the same retrospective questions while bringing freshness, creativity, and joyfulness to the activity. These metaphorical formats allow teams to verbalize things that aren't easily expressible in structured formats, triggering different perspectives and creative thinking. The format stays consistent while feeling completely new, maintaining engagement while avoiding retrospective fatigue. [The Scrum Master Toolbox Podcast Recommends]

Renee Troughton: Managing Dependencies and Downstream Bottlenecks in Scrum Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "For the actual product teams, it's not a problem for them... It's more the downstream teams that aren't the product teams, that are still dependencies... They just don't see that work until, hey, we urgently need this." Renee brings a dual-edged challenge from her current work with dozens of teams across multiple business lines. While quarterly planning happens at a high level, small downstream teams—middleware, AI, data, and even non-technical teams like legal—are not considered in the planning process. These teams experience unexpected work floods with dramatic peaks and troughs throughout the quarter. The product teams are comfortable with ambiguity and incremental delivery, but downstream service teams don't see work coming until it arrives urgently. Through a coaching conversation, Renee and Vasco explore multiple experimental approaches: top-to-bottom stack ranking of initiatives, holding excess capacity based on historical patterns, shared code ownership where downstream teams advise rather than execute changes, and using Theory of Constraints to manage flow into bottleneck teams. They discuss how lack of discovery work compounds the problem, as teams "just start working" without identifying all players who need involvement. The solution requires balancing multiple strategies while maintaining an experimentation mindset, recognizing that complex systems require sensing our way toward solutions rather than predicting them. Self-reflection Question: Are you actively managing the flow of work to prevent downstream bottlenecks, or are you allowing your "downstream teams" to be repeatedly overwhelmed by last-minute urgent requests? [The Scrum Master Toolbox Podcast Recommends]

Renee Troughton: The Hidden Cost of Constant Restructuring in Agile Organizations Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "Trust and safety are the most fundamental foundations of a team to perform. And so you are just breaking the core of teams when you're doing this." Renee challenges us to look beyond team dysfunction and examine the "dirty little secrets" in organizations—leadership-driven anti-patterns that destroy team performance. She reveals a cyclical pattern of constant restructuring that occurs every six months in many organizations, driven by leaders who avoid difficult performance management conversations and instead force people through redundancy rounds. This creates a cascade of fear, panic, and victim mindset throughout the organization. Beyond restructuring, Renee identifies other destructive patterns including the C-suite shuffle (where new CEOs bring in their own teams, cascading change throughout the organization) and the insourcing/outsourcing swings that create chaos over 5-8 year cycles. These high-level decisions drain productivity for months as teams storm and reform, losing critical knowledge and breaking the trust and safety that are fundamental for high performance. Renee emphasizes that as Agile coaches and Scrum Masters, we often don't feel empowered to challenge these decisions, yet they represent the biggest drain on organizational productivity. Self-reflection Question: Have you identified the cyclical organizational anti-patterns in your workplace, and do you have the courage to raise these systemic issues with senior leadership? Featured Book of the Week: Loving What Is by Byron Katie "It teaches you around how to reframe your thoughts in the day-to-day life, to assess them in a different light than you would normally perceive them to be." Renee recommends "Loving What Is" by Byron Katie as an essential tool for Scrum Master introspection. This book teaches practical techniques for reframing thoughts and recognizing that problems we perceive "out there" are often internal framing issues. Katie's method, called "The Work," provides a worksheet-based approach to introspection that helps identify when our perceptions create unnecessary suffering. Renee also highlights Marshall Rosenberg's "Nonviolent Communication" as a companion book, which uses language to tap into underlying emotions and needs. Both books offer practical, actionable techniques for self-knowledge—a critical skill for anyone in the Scrum Master role. The journey these books provide leads to inner peace through understanding that many challenges stem from how we internally frame situations rather than external reality. We have many episodes on NVC, Nonviolent Communication, which you can dive into and learn from experienced practitioners. [The Scrum Master Toolbox Podcast Recommends]

Renee Troughton: How to Navigate Mandatory Deadlines in Scrum Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I said to the CIO at the time, we're not going to hit this. In fact, we'll be... I can actually tell you, we're gonna be 3 weeks late... And he said: 'Just make it work!'" Renee shares a powerful story from her work on a mandatory legislative compliance project where reality clashed with executive expectations. Working with a team new to Agile, she carefully established velocity over two sprints and projected the delivery timeline. The challenge intensified when sales continued promising bespoke features to clients while the deadline remained fixed. Despite transparently communicating the team would miss the mandatory date by three weeks, leadership demanded she "just make it work" without providing solutions. Renee found herself creating a misleading burn-up chart to satisfy executive confidence, while the organization played a dangerous game of chicken—waiting for another implementer to admit delays first. This experience taught her the critical importance of courage in conversations with leaders and the need to clearly separate business decisions from development team responsibilities. Sometimes the best we can do is provide transparency and let leaders own the consequences of their choices. In this episode, we refer to the seminal book on large projects: The Mythical Man-Month, by Frederick Brooks. Self-reflection Question: When faced with unrealistic demands from leadership, do you have the courage to maintain transparency about your team's reality, even when it means refusing to create false artifacts of confidence? [The Scrum Master Toolbox Podcast Recommends]

BONUS: Consulting is Different—How Consulting Contracts Work Against Agile Development, With Jakob Wolman and Wilko Nienhaus In this BONUS episode, we explore the critical differences between building software as a consultant versus inside a product company. Jakob Wolman contributed an insightful article to the Global Agile Summit book examining how third-party software development operates under entirely different constraints than in-house product development. Joined by Wilko Nienhaus, CTO of Vaimo, a consulting company in Estonia, we dive into ownership dynamics, misaligned incentives, contracting challenges, and the business pressures that shape consulting—along with practical stories from the field about what really works. The Cobbler's Shoes Problem "I come back to the office from this workshop, and suddenly, with these eyes on looking for improvements in process, I just suddenly am hit by this revelation of why things are so slow here? Why are we working so inefficiently?" Jakob describes the striking paradox many consultancies face: they excel at helping clients improve their processes while their own internal operations remain inefficient. This "shoemaker's children" phenomenon reflects a fundamental challenge in consulting—the difficulty of investing in your own improvements when all energy flows toward billable client work. Digital agencies often have outdated or poorly implemented websites despite building sophisticated solutions for others, illustrating how consultancies struggle to apply their own expertise internally. Misaligned Incentives Create Antagonistic Dynamics "It's almost as if the clients are actually paying us to be slow, because our incentive is to spend more time on achieving what the client wants, because we get paid by the hour." The incentive structures in consulting create inherent conflicts that don't exist in product companies. Consultants typically bill by the hour, creating a perverse incentive to spend more time rather than deliver efficiently. Meanwhile, clients pursue business outcomes and want results as quickly and cheaply as possible. This fundamental misalignment leads to: Clients adopting a procurement mindset, treating software development like ordering from a catalog A "wall" between stakeholders and development teams that's even stronger than in product companies Antagonistic relationships where scope changes feel like financial traps rather than necessary learning Contracting processes that reinforce waterfall thinking even when both parties claim to want agility Wilko emphasizes that contracting has a huge impact on these dynamics, and companies must deliberately change their engagement models to break free from these patterns. The Budgeting Trap and Specification Overload "Because of this budgeting process where you now need to motivate what this budget does, or you need to spend that budget, you essentially create this necessity to define everything." Consulting projects often suffer from the same problem that plagued waterfall development: annual budgeting cycles that force stakeholders to cram everything into a single specification. 
When there's only one chance per year to secure funding, everyone stuffs the requirements document with every conceivable feature, leading to: Massive specifications that attempt to predict all needs upfront Endless discovery meetings and documentation that add cost without improving outcomes Developers working from outdated assumptions with delayed feedback Clients who don't really know what they want but feel pressured to specify everything Jakob points out the frustration that "we've already fixed this problem" in product development through iterative approaches, yet it keeps reappearing in consulting because of the separation between entities. Ownership and Quality in Consulting Environments "Skilled engineers will be frustrated if they're not allowed to do a proper job. People that have spent a lot of time in an environment where they're never allowed to do a proper job, or maybe even punished for doing a proper job, they will have given up, and not care." The difference in ownership between product and consulting development profoundly affects how engineers think about quality, technical debt, and long-term design. In product companies, developers know they'll maintain their code, creating natural incentives for quality. In consulting, the transient nature of engagements can erode quality standards. Key challenges include: Engineers knowing they won't return to the codebase, reducing long-term thinking Clients who lack technical expertise dictating approaches they don't understand Pressure to complete fixed-scope contracts regardless of quality trade-offs The role of estimates in forcing teams to "just complete this thing" even when learning suggests changes Wilko notes that teams controlled by clients versus teams managed as stable units by the consultancy show markedly different levels of ownership and engagement. Engineers want to do great work, but without real-world feedback loops, they may either overengineer based on theoretical ideals or give up on quality entirely. Breaking the Cycle: Going Live in Two Weeks "We said to them, what if we try to actually go live in a single sprint, which in most companies is 2 weeks. And they were like, nah, we're not so sure. And we said, don't worry, you're going to get everything you want in your scope by the end. But just let's try these first 2 weeks." Wilko shares a transformative story about an e-commerce project where his team convinced a client to abandon their two-year roadmap and instead focus on going live with something—anything—in two weeks. The goal: enable one existing customer to place one order for one product they already knew. This constraint forced radical prioritization. The team didn't need images, extensive product catalogs, or elaborate descriptions. They delivered a minimal but functioning system, and the results were revelatory: The client's internal discussion shifted from "we need everything" to "what should we prioritize next?" Real customer interaction revealed unexpected problems, like internal incentive conflicts where salespeople wouldn't direct customers to the website because it threatened their commissions Senior leadership embraced the iterative approach more readily than middle management The faster feedback cycle enabled genuine agility even in a consulting context This story demonstrates that iterative approaches are more likely to lead to success in consulting, and that senior leadership is often more receptive to faster feedback cycles than people expect. 
The key is changing the dynamic from "deliver a complete spec" to "let's go live quickly and learn." AI as a Game-Changer for Consulting Dynamics "The groundbreaking thing that's happening right now is AI, and it really feeds into this direction. Because instead of speaking, you can actually be building, you can see things, you can do stuff that you can really test in a much more real way than you could just a few years ago." Both Jakob and Wilko see artificial intelligence as a potential solution to many consulting challenges. AI tools enable rapid prototyping and visualization, allowing teams to show rather than tell. This addresses the fundamental problem that clients don't know what they want until they see it, by dramatically reducing the cost of creating tangible demonstrations that generate meaningful feedback. If you want to know more about how AI is reshaping programming, check out our AI Assisted Coding series of episodes. Quality and Testing Should Not Be Negotiable "I just simply think it shouldn't be a choice. We have to be very firm on this is how we work. We are the experts you are paying us." When clients ask to skip testing, reduce code reviews, or cut corners on infrastructure, Jakob argues consultancies must stand firm. Quality practices shouldn't be line items that clients can negotiate away. One consulting company that works strictly with Extreme Programming principles demonstrates this approach—they don't explain every detail to clients, but they clearly establish that "this is how we do all our projects. It's not a choice." Wilko adds that testing often saves time rather than adding cost, serving as a development tool that eliminates repetitive manual verification. The challenge comes during estimation, where padding for testing can make consultancies less competitive, creating pressure to compromise on quality. Jakob emphasizes that some responsibility lies with consultancies themselves, which sometimes over-promise and underbid to win business, then struggle to deliver quality within unrealistic constraints. This "race to the bottom" hurts the entire industry. The Path Forward: Deliberate Collaboration "It is fixable in a consultancy setting as well. I've seen it. I've been part of it. But you have to be very deliberate in your collaboration with the customer." Success in consulting requires deliberately designing the engagement model to support iterative development: Working backward from customer needs, not forward from specifications Establishing short feedback loops with both client stakeholders and end users Creating stable teams rather than assembling ad-hoc groups based on client requests Changing contracting models to align incentives (as explored in Sven Ditz's article in the Global Agile Summit book on delivering incrementally) Being firm about quality practices while remaining flexible about features Using AI and rapid prototyping to generate early, concrete feedback The consulting model doesn't have to default to waterfall, but it requires conscious effort to overcome the structural forces pushing in that direction. Recommended Reading In this episode, we refer to multiple resources for further reading. 
Here's a list of those resources: Secrets of Consulting by Gerald Weinberg; the Global Agile Summit book, including articles by the speakers at the conference; Real World Agility by Daniel Gullo; the #NoEstimates book by Vasco Duarte; and Extreme Programming principles. About Jakob Wolman and Wilko Nienhaus Jakob Wolman is an experienced engineering leader who knows how to build great software, and how to mess it up. He has worked in both product companies and consulting environments, giving him unique insight into the contrasts between these models. You can connect with Jakob Wolman on LinkedIn. Wilko Nienhaus is CTO of Vaimo, a consulting company in Estonia, where he focuses on the challenges of delivering software in a consulting environment. He concentrates on delivery mechanisms and technical solutions for challenging projects. You can connect with Wilko Nienhaus on LinkedIn.

AI Assisted Coding: From Deterministic to AI-Driven—The New Paradigm of Software Development, With Markus Hjort In this BONUS episode, we dive deep into the emerging world of AI-assisted coding with Markus Hjort, CTO of Bitmagic. Markus shares his hands-on experience with what's being called "vibe coding" - a paradigm shift where developers work more like technical product owners, guiding AI agents to produce code while focusing on architecture, design patterns, and overall system quality. This conversation explores not just the tools, but the fundamental changes in how we approach software engineering as a team sport. Defining Vibecoding: More Than Just Autocomplete "I'm specifying the features by prompting, using different kinds of agentic tools. And the agent is producing the code. I will check how it works and glance at the code, but I'm a really technical product owner." Vibecoding represents a spectrum of AI-assisted development approaches. Markus positions himself between pure "vibecoding" (where developers don't look at code at all) and traditional coding. He produces about 90% of his code using AI tools, but maintains technical oversight by reviewing architectural patterns and design decisions. The key difference from traditional autocomplete tools is the shift from deterministic programming languages to non-deterministic natural language prompting, which requires an entirely different way of thinking about software development. The Paradigm Shift: When AI Changed Everything "It's a different paradigm! Looking back, it started with autocomplete where Copilot could implement simple functions. But the real change came with agentic coding tools like Cursor and Claude Code." Markus traces his journey through three distinct phases. First came GitHub Copilot's autocomplete features for simple functions - helpful but limited. Next, ChatGPT enabled discussing architectural problems and getting code suggestions for unfamiliar technologies. The breakthrough arrived with agentic tools like Cursor and Claude Code that can autonomously implement entire features. This progression mirrors the historical shift from assembly to high-level languages, but with a crucial difference: the move from deterministic to non-deterministic communication with machines. Where Vibecoding Works Best: Knowing Your Risks "I move between different levels as I go through different tasks. In areas like CSS styling where I'm not very professional, I trust the AI more. But in core architecture where quality matters most, I look more thoroughly." Vibecoding effectiveness varies dramatically by context. Markus applies different levels of scrutiny based on his expertise and the criticality of the code. For frontend work and styling where he has less expertise, he relies more heavily on AI output and visual verification. For backend architecture and core system components, he maintains closer oversight. This risk-aware approach is essential for startup environments where developers must wear multiple hats. The beauty of this flexibility is that AI enables developers to contribute meaningfully across domains while maintaining appropriate caution in critical areas. Teaching Your Tools: Making AI-Assisted Coding Work "You first teach your tool to do the things you value. Setting system prompts with information about patterns you want, testing approaches you prefer, and integration methods you use." Success with AI-assisted coding requires intentional configuration and practice. 
Key strategies include: System prompts: Configure tools with your preferred patterns, testing approaches, and architectural decisions Context management: Watch context length carefully; when the AI starts making mistakes, reset the conversation Checkpoint discipline: Commit working code frequently to Git - at least every 30 minutes, ideally after every small working feature Dual AI strategy: Use ChatGPT or Claude for architectural discussions, then bring those ideas to coding tools for implementation Iteration limits: Stop and reassess after roughly 5 failed iterations rather than letting AI continue indefinitely Small steps: Split features into minimal increments and commit each piece separately In this segment we refer to the episode with Alan Cyment on AI Assisted Coding, and the Pachinko coding anti-pattern. Team Dynamics: Bigger Chunks and Faster Coordination "The speed changes a lot of things. If everything goes well, you can produce so much more stuff. So you have to have bigger tasks. Coordination changes - we need bigger chunks because of how much faster coding is." AI-assisted coding fundamentally reshapes team workflows. The dramatic increase in coding speed means developers need larger, more substantial tasks to maintain flow and maximize productivity. Traditional approaches of splitting stories into tiny tasks become counterproductive when implementation speed increases 5-10x. This shift impacts planning, requiring teams to think in terms of complete features rather than granular technical tasks. The coordination challenge becomes managing handoffs and integration points when individuals can ship significant functionality in hours rather than days. The Non-Deterministic Challenge: A New Grammar "When you're moving from low-level language to higher-level language, they are still deterministic. But now with LLMs, it's not deterministic. This changes how we have to think about coding completely." The shift to natural language prompting introduces fundamental uncertainty absent from traditional programming. Unlike the progression from assembly to C to Python - all deterministic - working with LLMs means accepting probabilistic outputs. This requires developers to adopt new mental models: thinking in terms of guidance rather than precise instructions, maintaining checkpoints for rollback, and developing intuition for when AI is "hallucinating" versus producing valid solutions. Some developers struggle with this loss of control, while others find liberation in focusing on what to build rather than how to build it. Code Reviews and Testing: What Changes? "With AI, I spend more time on the actual product doing exploratory testing. The AI is doing the coding, so I can focus on whether it works as intended rather than syntax and patterns." Traditional code review loses relevance when AI generates syntactically correct, pattern-compliant code. The focus shifts to testing actual functionality and user experience. 
Markus emphasizes: Manual exploratory testing becomes more important as developers can't rely on having written and understood every line Test discipline is critical - AI can write tests that always pass (assert true), so verification is essential Test-first approach helps ensure tests actually verify behavior rather than just existing Periodic test validation: Randomly modify test outputs to verify they fail when they should Loosening review processes to avoid bottlenecks when code generation accelerates dramatically Anti-Patterns and Pitfalls to Avoid Several common mistakes emerge when developers start with AI-assisted coding: Continuing too long: When AI makes 5+ iterations without progress, stop and reset rather than letting it spiral Skipping commits: Without frequent Git checkpoints, recovery from AI mistakes becomes extremely difficult Over-reliance without verification: Trusting AI-generated tests without confirming they actually test something meaningful Ignoring context limits: Continuing to add context until the AI becomes confused and produces poor results Maintaining traditional task sizes: Splitting work too granularly when AI enables completing larger chunks Forgetting exploration: Reading about tools rather than experimenting hands-on with your own projects The Future: Autonomous Agents and Automatic Testing "I hope that these LLMs will become larger context windows and smarter. Tools like Replit are pushing boundaries - they can potentially do automatic testing and verification for you." Markus sees rapid evolution toward more autonomous development agents. Current trends include: Expanded context windows enabling AI to understand entire codebases without manual context curation Automatic testing generation where AI not only writes code but also creates and runs comprehensive test suites Self-verification loops where agents test their own work and iterate without human intervention Design-to-implementation pipelines where UI mockups directly generate working code Agentic tools that can break down complex features autonomously and implement them incrementally The key insight: we're moving from "AI helps me code" to "AI codes while I guide and verify" - a fundamental shift in the developer's role from implementer to architect and quality assurance. Getting Started: Experiment and Learn by Doing "I haven't found a single resource that covers everything. My recommendation is to try Claude Code or Cursor yourself with your own small projects. You don't know the experience until you try it." Rather than pointing to comprehensive guides (which don't yet exist for this rapidly evolving field), Markus advocates hands-on experimentation. Start with personal projects where stakes are low. Try multiple tools to understand their strengths. Build intuition through practice rather than theory. The field changes so rapidly that reading about tools quickly becomes outdated - but developing the mindset and practices for working with AI assistance provides durable value regardless of which specific tools dominate in the future. About Markus Hjort Markus is Co-founder and CTO of Bitmagic, and has over 20 years of software development expertise. Starting with Commodore 64 game programming, his career spans gaming, fintech, and more. As a programmer, consultant, agile coach, and leader, Markus has successfully guided numerous tech startups from concept to launch. You can connect with Markus Hjort on LinkedIn.
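Markus's point about test discipline is easy to make concrete. The sketch below is not from the episode; it is a minimal, hypothetical TypeScript example (Jest/Vitest-style assertions and an invented applyDiscount function) that contrasts an AI-generated test that always passes with one that actually verifies behavior, plus the periodic check of deliberately breaking an expectation to confirm the test can fail.

// discount.test.ts (illustrative only; names and values are assumptions)
import { it, expect } from "vitest";

// A small function an AI assistant might have generated for us.
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

it("always passes and therefore proves nothing", () => {
  // AI assistants can happily produce tests like this: green forever,
  // but they never touch the production code.
  expect(true).toBe(true);
});

it("verifies real behavior of applyDiscount", () => {
  expect(applyDiscount(100, 20)).toBe(80);
  // Periodic validation, as suggested in the episode: temporarily change
  // 80 to a wrong value, confirm the test fails, then revert.
});

The habit to build is treating any generated test as suspect until you have seen it fail at least once.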

AI Assisted Coding: Pachinko Coding—What They Don't Tell You About Building Apps with Large Language Models, With Alan Cyment In this BONUS episode, we dive deep into the real-world experience of coding with AI. Our guest, Alan Cyment, brings honest perspectives from the trenches—sharing both the frustrations and breakthroughs of using AI tools for software development. From "Pachinko coding" addiction loops to "Mecha coding" breakthroughs, Alan explores what actually works when building software with large language models. From Thermomix Dreams to Pachinko Reality "I bought into the Thermomix coding promise—describe the whole website and it would spit out the finished product. It was a complete disaster." Alan started his AI coding journey with high expectations, believing he could simply describe a complete application and receive production-ready code. The reality was far different. What he discovered instead was an addictive cycle he calls "Pachinko coding" (Pachinko, aka Slot Machines in Japan)—repeatedly feeding error messages back to the AI, hoping each iteration would finally work, while burning through tokens and time. The AI's constant reassurances that "this time I fixed it" created a gambling-like feedback loop that left him frustrated and out of pocket, sometimes spending over $20 in API credits in a single day. The Drunken PhD with Amnesia "It felt like working with a drunken PhD with amnesia—so wise and so stupid at the same time." Alan describes the maddening experience of anthropomorphizing AI tools that seem brilliant one moment and completely lost the next. The key breakthrough came when he stopped treating the AI as a person and started seeing it as a function that performs extrapolations—sometimes accurate, sometimes wildly wrong. This mental shift helped him manage expectations and avoid the "rage coding" that came from believing the AI should understand context and maintain consistency like a human collaborator. Making AI Coding Actually Work "I learned to ask for options explicitly before any coding happens. Give me at least three options and tell me the pros and cons." Through trial and error, Alan developed practical strategies that transformed AI from a frustrating Pachinko machine into a useful tool: Ask for options first: Always request multiple approaches with pros and cons before any code is generated Use clover emoji convention: Implement a consistent marker at the start of all AI responses to track context Small steps and YAGNI principles: Request tiny, incremental changes rather than large refactoring Continuous integration: Demand the AI run tests and checks after every single change Explicit refactoring requests: Regularly ask for simplification and readability improvements Take two steps back: When stuck in a loop, explicitly tell the AI to simplify and start fresh Choose the right tech stack: Use technologies with abundant training data (like Svelte over React Native in Alan's experience) The Mecha Coding Breakthrough "When it worked, I felt like I was inside a Lego Mecha robot—the machine gave me superpowers, but I was still the one in control." Alan successfully developed a birthday reminder app in Swift in just one day, despite never having learned Swift. He made architectural decisions and guided the development without understanding the syntax details. 
This experience convinced him that AI represents a genuine new level of abstraction in programming—similar to the jump from assembly language to high-level languages, or from procedural to object-oriented programming. You can now think in English about what you want, while the AI handles the accidental complexity of syntax and boilerplate. The Cost Reality Check "People writing about vibe coding act like it's free. But many people are going to pay way more than they would have paid a developer and end up with empty hands." Alan provides a sobering cost analysis based on his experience. Using DeepSeek through Aider, he typically spends under $1 per day. But when experimenting with premium models like Claude 3.5 Sonnet, he burned through $5 in just minutes. The benchmark comparisons are revealing: DeepSeek costs $4 for a test suite, DeepSeek R1 plus Sonnet costs $16, while OpenAI's o1 costs $190. For non-developers trying to build complete applications through pure "vibe coding," the costs can quickly exceed what hiring a developer would cost—with far worse results. When Thermomix Actually Works "For small, single-purpose scripts that I'm not interested in learning about and won't expand later, the Thermomix experience was real." Despite the challenges, Alan found specific use cases where AI truly delivers on the "just describe it and it works" promise. Processing Zoom attendance logs, creating lookup tables for video effects, and other single-file scripts worked remarkably well. The pattern: clearly defined context, no need for ongoing maintenance, and simple enough to verify the output without deep code inspection. For these Thermomix moments, AI proved genuinely transformative. The Pachinko Trap and Tech Stack Matters "It became way more stable when I switched to Svelte from React Native and Flutter, even following the same prompting practices. The AI is just more proficient in certain tech stacks." Alan discovered that some frameworks and languages work dramatically better with AI than others, likely due to the amount of training data available. His e-learning platform attempts with React Native and Flutter kept breaking, but switching to Svelte with web-based deployment became far more stable. This suggests a crucial strategy: choose mainstream, well-documented technologies when planning AI-assisted projects. From Coding to Living with AI Alan has completely stopped using traditional search engines, relying instead on LLMs for everything from finding technical documentation to getting recommendations for books based on his interests. While he acknowledges the risk of hallucinations, he finds the semantic understanding capabilities too valuable to ignore. He's even used image analysis to troubleshoot his father's cable TV problems and figure out hotel air conditioning controls. The Agile Validation "My only fear is confirmation bias—but the conclusion I see other experienced developers reaching is that the only way to make LLMs work is by making them use agility. So look at who's dead now." Alan notes the irony that the AI coding tools that actually work all require traditional software engineering best practices: small iterations, test-driven development, continuous integration, and explicit refactoring. The promise of "just describe what you want" falls apart without these disciplines. Rather than replacing software engineering principles, AI tools seem to validate their importance. 
About Alan Cyment Alan Cyment is a consultant, trainer, and facilitator based in Buenos Aires, specializing in organizational fluency, agile leadership, and software development culture change. A Certified Scrum Trainer with deep experience across Latin America and Europe, he blends agile coaching with theatre-based learning to help leaders and teams transform. You can link with Alan Cyment on LinkedIn.

AI Assisted Coding: Agile Meets AI—How to Code Fast Without Breaking Things, With Llewellyn Falco In this BONUS episode we explore the practice of coding with AI—not just the buzzwords, but the real-world experience. Our guest, Llewellyn Falco, has been learning by doing, exploring the space of AI-assisted coding from the experimental and intuitive—what some call vibecoding—to the more structured world of professional, world-class software engineering. This is a conversation for practitioners who want to understand what's actually happening on the ground when we code with AI. Understanding Vibecoding "You can now program without looking at code. When you're in that space, vibecoding is the word we're using to say, we are programming in a way that does not relate to programming last year." The software development landscape shifted dramatically in early 2025. Vibecoding represents a fundamental change in how we create software—programming without constantly looking at the code itself. This approach removes many traditional limitations around technology, language, and device constraints, allowing developers to move seamlessly between different contexts. However, this power comes with responsibility, as developers can now move so fast that traditional safety practices become even more critical. From Concept to Working App in 15 Minutes "We wrote just a markdown page of ‘here's what we want this to look like'. And then we fed that to Claude Code. And 15 minutes later we had a working app on the phone." At the Agile 2025 conference in Denver, Llewellyn participated in a hackathon focused on helping psychologists prevent child abuse. Working with customer Amanda, a psychologist, and data scientist Rachel, the team identified a critical problem: clinicians weren't using the most effective parenting intervention technique because recording 60 micro-interactions in 5 minutes was too difficult and time-consuming. The team's approach embodied lean startup principles turned up to eleven. After understanding the customer's needs through exposition and conversation, they created a simple markdown specification and used Claude Code to generate a working mobile app in just 15 minutes. When Amanda tested it, she was moved to tears—after 20 years of trying to make progress on this problem, she finally had hope. Over three days, the team released 61 iterations, constantly getting feedback and refining the solution. Iterative Development Still Matters When Coding With AI "We need to see things working to know what to deliver next. That's never going to change. Unless you're building something that's already there." The team's success wasn't about writing a complete requirements document upfront. Instead, they delivered a minimal viable product quickly, tested it with real users, and iterated based on feedback. This agile approach proved essential even—or especially—when working with AI. One breakthrough came when Amanda used the number keypad instead of looking at her phone screen. With her full attention on the training video she'd watched hundreds of times, she noticed an interaction she had missed before. At that moment, the team knew they had created real value, regardless of what additional features they might build. Good Engineering Practices Without Looking at Code "We asked it to do good engineering practices, even though we didn't really understand what it was doing. We just sort of say, okay, yeah, that seems sensible." A critical moment came when the code had grown large and complex. 
Rather than diving into the code themselves, Llewellyn and his partner Lotta asked the AI to refactor the code so that a panel would be easy to switch, before actually making the change. They verified that functionality still worked through manual testing but never looked at how the refactoring was implemented. This demonstrates that developers can maintain good practices like refactoring and clean architecture even when working at a higher level of abstraction. Key practices for AI-assisted development include: don't accept the AI's default settings, which are based on popularity rather than best practices; prime the AI with the practices you want it to use through configuration files; tell the AI to be honest and to help you avoid mistakes, not just be agreeable; ask for explanations of the architecture and evaluate whether the approach makes sense; and keep important decisions documented in markdown files that can be referenced later. “The documentation is now executable. I can turn it into code” "The documentation is now executable. I can turn it into code. If I had to choose between losing my documentation or losing my code, I would keep the docs. I think I could regenerate the code pretty easily." In this new paradigm, documentation takes on new importance—it becomes the specification from which code can be regenerated. The team created and continuously updated markdown files for project context, architecture, and individual features. This practice allowed them to reset AI context when needed while maintaining continuity of their work. The workflow was bidirectional: sometimes they'd write documentation first and have AI generate code; other times they'd build features iteratively and have AI update the documentation. This approach, combined with tools like Super Whisper for voice-to-text, made creating and maintaining documentation effortless. Remove Deterministic Tasks from AI "AI is sloppy. It's inconsistent. Everything that can be deterministic—take it out. AI can write that code. But don't make AI do repetitive tasks." A crucial principle emerged: anything that needs to be consistently and repeatedly correct should be automated with traditional code, not left to AI. The team wrote shell scripts for tasks like auto-incrementing version numbers and created git hooks to ensure these scripts ran automatically. They also automated file creation with dates at the top, removing the need for AI to track temporal information. This principle works both ways—deterministic logic should be removed from underneath AI (via scripts and hooks) and from above AI (via orchestration scripts that call AI in loops with verification steps in between). Both directions are sketched in the code examples below, after the anti-patterns discussion. Anti-Patterns to Avoid "The biggest anti-pattern is you're not committing frequently. I really want the ability to drop my context and revert my changes at a moment's notice." The primary anti-pattern when coding with AI is failing to commit frequently to version control. The ability to quickly drop context, revert changes, and start fresh becomes essential when working at this pace. Getting important decisions into documentation files and code into version control enables rapid experimentation without fear of losing work. Other challenges include knowing which risks to focus on. The team had to navigate competing priorities—customers wanted certain UX features, but the team identified data collection and storage as the critical unknown risk that needed solving first. This required diplomatic firmness in prioritizing work based on technical risk assessment rather than just user requests.
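The first direction, deterministic logic underneath the AI, is easy to picture. The episode only tells us that the team used shell scripts and git hooks to auto-increment version numbers, so the sketch below is an illustration rather than their actual script: a minimal Python pre-commit hook that bumps a patch number stored in a version.txt file (the file name, the version format, and the hook wiring are all assumptions of this example).

```python
#!/usr/bin/env python3
"""Deterministic version bump, kept out of the AI's hands.

Illustrative sketch only: assumes the project stores its version as a
single "MAJOR.MINOR.PATCH" string in version.txt. Saved as
.git/hooks/pre-commit (and made executable), it bumps the patch number
and stages the file on every commit.
"""
import pathlib
import subprocess

VERSION_FILE = pathlib.Path("version.txt")  # assumed location and format


def bump_patch() -> str:
    major, minor, patch = VERSION_FILE.read_text().strip().split(".")
    new_version = f"{major}.{minor}.{int(patch) + 1}"
    VERSION_FILE.write_text(new_version + "\n")
    return new_version


if __name__ == "__main__":
    version = bump_patch()
    # Stage the updated file so it is included in the commit being made.
    subprocess.run(["git", "add", str(VERSION_FILE)], check=True)
    print(f"version bumped to {version}")
```

Because this is ordinary code, it does exactly the same thing on every commit, which is the kind of repetitive correctness the episode argues should never be delegated to an AI.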
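The second direction, deterministic logic above the AI, can be sketched the same way: a small orchestration script that alternates between asking an AI tool to edit code and running a deterministic verification step. Everything specific here is hypothetical; the claude -p invocation, the PRACTICES.md priming file, and the pytest check are placeholders for whatever agentic CLI, configuration file, and test command a team actually uses.

```python
#!/usr/bin/env python3
"""Hypothetical orchestration loop: deterministic code above the AI.

The loop, the retry limit, and the verification step are ordinary code;
only the code-editing step is delegated to an AI tool. The CLI name,
flags, prompt, and test command are placeholders for illustration.
"""
import subprocess

MAX_ATTEMPTS = 5
PROMPT = (
    "Make the failing tests pass. Follow the practices described in "
    "PRACTICES.md, and be honest about anything you are unsure of."
)


def tests_pass() -> bool:
    # Deterministic verification step, run between AI iterations.
    return subprocess.run(["pytest", "-q"]).returncode == 0


attempts = 0
while not tests_pass() and attempts < MAX_ATTEMPTS:
    # Placeholder invocation of an agentic AI CLI in non-interactive mode;
    # the command name and flag are assumptions of this sketch.
    subprocess.run(["claude", "-p", PROMPT], check=False)
    attempts += 1

print("tests green" if tests_pass() else f"still failing after {attempts} AI attempts")
```

The loop and the retry limit stay in plain code, so the only non-deterministic step is the edit itself, and every iteration is verified before the next one starts.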
Essential Tools for AI-Assisted Development "If you are using AI by going to a website, that is not what we are talking about here." To work effectively with AI, developers need agentic tools that can interact with files and run programs, not just chat interfaces. Recommended tools include: Claude Code (a CLI for file interaction), Windsurf (a VS Code-like interface), Cursor (a code editor with AI integration), RooCode (an alternative option), and Super Whisper (voice-to-text transcription for Mac). Most developers working at this level have disabled safety guards, allowing AI to run programs without asking permission each time. While this carries risks, committing frequently to version control provides the safety net needed for rapid experimentation. The Power of Voice Interaction "Most of the time coding now looks like I'm talking. It's almost like Star Trek—you're talking to the computer and then code shows up." Using voice transcription tools like Super Whisper transformed the development experience. Speaking instead of typing not only increased speed but also changed the nature of communication with AI. When speaking, developers naturally provide more context and explanation than when typing, leading to better results from AI systems. This proved especially valuable in a crowded conference room, where Super Whisper could filter out background noise and accurately transcribe the speakers' voices. The tool enabled natural, conversational interaction with development tools. Balancing Speed with Safety Over three days, the team released 61 times without comprehensive automated testing, focusing instead on validating user value through manual testing with the actual customer. However, after the hackathon, Llewellyn added automated testing by creating a test plan document through voice dictation, having AI clean it up and expand it, then generating Puppeteer tests and shell scripts to run them—all in about 40 minutes (a rough sketch of what such a test can look like appears at the end of these notes). This demonstrates a pragmatic approach: when exploring and validating with users, manual testing may suffice; but for ongoing maintenance and confidence, automated tests remain valuable and can be generated efficiently with AI assistance. The Future of Software Development "If you want to make something, there could not be a better time than now." The skills required for effective software development are shifting. Understanding how to assess risk, knowing when to commit code, maintaining good engineering practices, and finding creative solutions within system constraints remain critical. What's changing is that these skills are now applied at a higher level of abstraction, with AI handling much of the detailed implementation. The space is evolving rapidly—practices that work today may need adjustment within months. Developers need to continuously experiment, stay current with new tools and models, and develop instincts for working effectively with AI systems. The fundamentals of agile development—rapid iteration, customer feedback, risk assessment, and incremental delivery—matter more than ever. About Llewellyn Falco Llewellyn is an Agile and XP (Extreme Programming) expert with over two decades of experience in Java, OO design, and technical practices like TDD, refactoring, and continuous delivery. He specializes in coaching, teaching, and transforming legacy code through clean code, pair programming, and mob programming. You can link with Llewellyn Falco on LinkedIn.
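As a footnote to the “Balancing Speed with Safety” section above: Puppeteer tests are written in JavaScript, so to keep the sketches in these notes in a single language, here is the same kind of end-to-end check written against Playwright's Python API instead. The URL, element selectors, and expected value are invented placeholders; this illustrates the style of test that was generated, not the team's actual test suite.

```python
"""Illustrative end-to-end check in the spirit of the generated Puppeteer
tests; uses Playwright's Python API instead, with invented placeholders."""
from playwright.sync_api import sync_playwright

APP_URL = "http://localhost:3000"  # placeholder for the hackathon app


def test_tally_button_increments_count() -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(APP_URL)
        # Simulate the clinician tapping a micro-interaction button twice.
        page.click("#labeled-praise")
        page.click("#labeled-praise")
        count = page.inner_text("#labeled-praise-count")
        assert count == "2", f"expected a count of 2, got {count!r}"
        browser.close()


if __name__ == "__main__":
    test_tally_button_increments_count()
    print("ok")
```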

AI Assisted Coding: Beyond AI Code Assistants: How Moldable Development Answers Questions AI Can't With Tudor Girba In this BONUS episode, we explore Moldable Development with Tudor Girba, CEO of feenk.com and creator of the Glamorous Toolkit. We dive into why developers spend over 50% of their time reading code—not because they want to, but because they lack the answers they need. Tudor shares how building contextual tools can transform software development, making systems truly understandable and enabling decisions at the speed of thought. The Hidden System: A Telco's Three-Year Quest "They had a system consisting of five boxes, but they could only enumerate four. If this is your level of awareness about what is reality around you, you have almost no chance of systematically affecting that reality." Tudor opens with a striking case study from a telecommunications company that spent three years and hundreds of person-years trying to optimize a data pipeline. Despite massive effort and executive mandate, the pipeline still took exactly one day to process data—no improvement whatsoever. When Tudor's team investigated, they asked for an architecture diagram. The team drew four boxes representing their system. But when Tudor's team started building tools to mirror this architecture back from the actual code, they discovered something shocking: there was an entire fifth system between the first and second boxes that nobody knew existed. This missing system was likely the bottleneck they'd been trying to optimize for three years. Why Reading Code Doesn't Scale "Developers spend more than 50% of their time reading code. The problem is that our systems are typically larger than anyone can read, and by the time you finish reading, the system has already changed many times." The real issue isn't the time spent reading—it's that reading is the most manual, least scalable way to extract information from systems. When developers read code, they're actually trying to answer questions so they can make decisions. But a 250,000-line system would take one person-month to read at high speed, and the system changes constantly during that time. This means everything you learned yesterday becomes merely a hypothesis, not a reliable answer. The fundamental problem is that we cannot perceive anything in a software system except through tools, yet we've never made how we read code an explicit, optimizable activity. The Context Problem: Why Generic Tools Fail "Software is highly contextual, which means we can predict classes of problems people will have, but we cannot predict specific problems people will have." Tudor draws a powerful parallel with testing. Nobody downloads unit tests from the web and applies them to their system—that would be absurd. Instead, we download test frameworks and build tests contextually for our specific system, encoding what's valuable about our particular business logic. Yet for almost everything else in software development, we download generic tools and expect them to work. This is why teams have tens of thousands of static analysis warnings they ignore, while a single failing test stops deployment. The test encodes contextual value; the generic warning doesn't. Moldable Development extends this principle: every question about your system should be answered by a contextual tool you build for that specific question. Tools That Mirror Your Mental Model "Whatever you draw on the whiteboard—that's your mental model. 
But as soon as the system exists, we want the system to mirror you back that thing. We make it the job of the system to show our mental model back to us." When someone draws an architecture diagram on a whiteboard, they're not documenting the system—they're documenting their beliefs about the system. The diagram represents wishes when drawn before the system exists, but beliefs when drawn after. Moldable Development flips this: instead of humans reading code and creating approximations, the system itself generates the visualization directly from the actual code. This eliminates the layers of belief and inference. Whether you're looking at high-level architecture, data lineage across multiple technologies, performance bottlenecks, or business domain structure, you build small tools that extract and present exactly the information you need from the system as it actually is. The Test-Driven Development Parallel "Testing was a way to find some kind of class of answers. But there are many other questions we have, and the question is: is there a systematic way to approach arbitrary questions?" Tudor explains that Moldable Development applies test-driven development principles to all forms of system understanding. Just as we write tests after we understand the functionality we need, we build visualization and analysis tools after we understand the questions we need answered. Both approaches share key characteristics: they're built contextually for the specific system, created by developers during development, and composed of many small tools that collectively model the system. The difference is that TDD focuses on functional decomposition and known expectations, while Moldable Development addresses architecture, security, domain structure, performance, and any other perspective where functional tests aren't the most useful decomposition. From Thousands of Features to Thousands of Tools "In my development environment, I don't have features. I have thousands of tools that coexist. Development environments should be focused not on what exists out of the box, but on how quickly you can create a contextual tool." Traditional development environments offer dozens of features—buttons, plugins, generic views. But Moldable Development environments contain thousands of micro-tools, each answering a specific question about a specific system. The key is making these tools composable and fast to create. Rather than building monolithic tools that try to handle every scenario, you build small inspectors that show one perspective on one object or concept. These inspectors chain together naturally as you drill down from high-level questions to detailed investigations. You might have one inspector showing test failures grouped by exception type, another showing PDF document comparisons, another showing cluster performance, and another showing memory usage—all coexisting and available when needed. The Real Bottleneck To Learning A System: Time to the Next Question "Once you do this, you will see that the interesting bottleneck is in the time to the next interesting question. This is by far the most interesting place to be spending energy." When you commoditize access to answers through contextual tools, something remarkable happens: the bottleneck shifts from getting answers to asking better questions. Right now, because answers come so slowly through manual reading and analysis, we rarely exercise the skill of formulating good questions. 
We make decisions based on gut feelings and incomplete data because we can't afford to dig deeper. But when answers arrive at the speed of thought, you can explore, follow hunches, test hypotheses, and develop genuine insight. The conversation between person and system becomes fluid, enabling decision-making based on actual evidence rather than belief. Moldable Development in Practice: The Lifeware Case "They are investing in software engineering as their competitive advantage. They have 150,000 tests that would take 10 days to run on a single machine, but they run them in 16 minutes distributed across AWS." Tudor shares a powerful case study of Lifeware, a life insurance software company that was featured in Kent Beck's "Test-Driven Development by Example" in 2002 with 4,000 tests. Today they have 150,000 tests and have fully adopted Moldable Development as their core practice. Their business model is remarkable: they take data from insurance companies, throw away the old systems, and reverse-engineer new systems by TDD-ing the business—replaying history to produce pixel-identical documents. They've deployed Glamorous Toolkit as their sole development environment across 100+ developers. Their approach demonstrates that Moldable Development isn't just a research concept but a practical competitive advantage that scales to large teams and complex systems. Why AI Doesn't Solve This Problem "When you ask AI, you will get exactly the same kind of answers. The answer comes quickly, but you will not know whether this is accurate, whether this represents the whole thing, and you definitely do not have an explanation as to why the answer is the way it is." In the age of AI code assistants, it might seem like language models could solve the problem of understanding systems. But Tudor explains why they can't. When you ask an AI about your architecture, you get an opinion—fast but unverifiable. Just like asking a developer to draw the architecture on a whiteboard, you receive filtered information without knowing if it's complete or accurate. Moldable Development, by contrast, extracts answers deterministically from the actual system. Software systems have almost no ambiguity in meaning—they're mathematical, not linguistic. We don't need probabilistic interpretation of source code; we need precise extraction and presentation. The tools you build give you not just answers but explanations of how those answers were derived from the actual system state. Scaling Through Language, Not Features "You need a new kind of development environment where the goal is to create tools much quicker. You need some sort of language in which to express development environments." The technical challenge of Moldable Development is enabling thousands of tools to coexist productively. This requires a fundamentally different approach to development environments. Instead of adding features—buttons and menu items that quickly become overwhelming—you need a language for expressing tools and a system for composing them. Glamorous Toolkit demonstrates this through its inspector architecture, where any object can define custom views that appear contextually. These views compose naturally as you navigate through your investigation, reusing earlier perspectives while adding new ones. The environment becomes a medium for tool creation, not just a collection of pre-built features. Making the Invisible Visible "We cannot perceive anything in a software system except through a tool. 
If that's so important, then the ability to control that shape is probably kind of important too." Software has no inherent shape—it's just data. Every perception we have of it comes through some tool that renders it into a form we can reason about. This means tools aren't nice-to-have accessories; they're fundamental to our ability to work with software at all. The text editor showing code is a tool. The debugger showing variables is a tool. But these are generic tools built once and reused everywhere, which means they show generic perspectives. What if we could control the shape of our software as easily as we write it? What if the system could show us exactly the view we need for exactly the question we have? That's the promise of Moldable Development. About Tudor Girba Tudor Girba is CEO of feenk.com and creator of Moldable Development. He leads the team behind Glamorous Toolkit, a novel IDE that helps developers make sense of complex systems. His work focuses on transforming how teams understand, navigate, and modernize legacy software through custom, insightful tools. Tudor and Simon Wardley are writing a book about Moldable Development, which you can get at https://moldabledevelopment.com/ and read more about in a related Medium article. You can link with Tudor Girba on LinkedIn.
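To make the idea of small, contextual tools concrete: in Glamorous Toolkit these are typically written as custom inspector views inside the environment itself, but the principle travels to any stack. The sketch below is a deliberately tiny Python tool that answers exactly one question, namely which modules in a project import which other local modules, by reading the code as it actually is. That is the spirit of the telco story, where mirroring the architecture back from the code revealed the missing fifth box. The src/ layout and the module-level granularity are assumptions of this example, not part of Moldable Development.

```python
"""A tiny 'contextual tool' sketch: answer one question about one system.

Question: which first-party modules does each module under src/ import?
The answer is extracted from the code as it actually is, so the picture
cannot drift into belief the way a whiteboard diagram can. The paths and
the module-level granularity are assumptions of this example.
"""
import ast
import pathlib
from collections import defaultdict

SRC = pathlib.Path("src")  # assumed project layout


def imports_of(path: pathlib.Path) -> set[str]:
    tree = ast.parse(path.read_text())
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found


local_modules = {p.stem for p in SRC.rglob("*.py")}
graph: dict[str, set[str]] = defaultdict(set)
for path in SRC.rglob("*.py"):
    # Keep only imports that point at other modules in this codebase.
    graph[path.stem] |= imports_of(path) & local_modules

for module, deps in sorted(graph.items()):
    print(f"{module} -> {', '.join(sorted(deps)) or '(nothing local)'}")
```

The point is not this particular script but how little it costs to write a tool for one specific question, which is what makes keeping thousands of such tools around plausible.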

Tom Molenaar: When Product Owners “Eat the Grass” for Their Teams Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: The Vision Catalyst "This PO had the ability to communicate the vision and enthusiasm about the product, even I felt inspired." Tom describes an exceptional Product Owner who could communicate vision and enthusiasm so effectively that even he, as the Scrum Master, felt inspired about the product. This PO excelled at engaging teams in product discovery techniques, helping them move from merely delivering features to taking outcome responsibility. The PO introduced validation techniques, brought customers directly to the office for interviews, and consistently showed the team the impact of their work, creating a strong connection between engineers and end users. The Bad Product Owner: The Micromanager "This PO was basically managing the team with micro-managing approach, this blocked the team from self-organizing." Tom encountered a Product Owner who was too controlling, essentially micromanaging the team instead of empowering them. This PO hosted daily stand-ups, assigned individual tasks, and didn't give the team space for self-organization. When Tom investigated the underlying motivation, he discovered the PO believed that without tight control, the team would underperform. Tom helped the PO understand the benefits of trusting the team and worked with both sides to clarify roles and responsibilities, moving from micromanagement to empowerment. In this segment, we refer to the book “Empowered” by Marty Cagan. Self-reflection Question: How do you help Product Owners find the balance between providing clear direction and allowing team autonomy? [The Scrum Master Toolbox Podcast Recommends]

Tom Molenaar: Purpose, Process, and People—The Three Pillars of Scrum Master Success Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. "I always try to ask the team first, what is your problem? Or what is the next step, do you think? Having their input, having my input, bundle it and share it." Tom defines success for Scrum Masters through three essential pillars: purpose (achieving the team's product goals), process (effective Agile practices), and people (team maturity and collaboration). When joining new teams, he uses a structured approach combining observation with surveys to get a 360-degree view of team performance. Rather than immediately implementing his own improvement ideas, Tom prioritizes asking teams what problems they want to solve and finding common ground for a "handshake moment" on what needs to be addressed. Featured Retrospective Format for the Week: Creative Drawing of the Sprint Tom's favorite retrospective format involves having team members draw their subjective experience of the sprint, then asking team members to interpret each other's drawings. This creative approach brings people back to their childhood, encourages laughter and fun, and helps team members tap into each other's experiences in ways that traditional verbal retrospectives cannot achieve. The exercise stimulates understanding between team members and often reveals important topics for improvement while building connection through shared interpretation of creative expressions. An example activity you can use to “draw the sprint” is linked in the full show notes. [The Scrum Master Toolbox Podcast Recommends]