AI is bringing massive changes to our industry, but it's not just about how fast you can write code or use agentic flows. In this episode, I explore how AI is fundamentally shifting the economic bottleneck of software development, and how you can use your systems-thinking engineering mindset to adapt and thrive in this new era.
What's the Real AI Opportunity?

You know me. I think the AI thing is a bit of a sandstorm. A gold rush. There's no question it's useful (I use it for a ton of stuff, actually), and there's no question it's only going to get better. But in the early stages of any new tech, there's a lot of hype, a lot of bandwagon-joining, and a lot of waste. (Y2K? NFTs? Anyone...?)

What I can say for sure is there's not a ton of benefit in building custom GPTs that will 'replace what we do'. First of all... why would we hasten our own exit? But also, where's the consistency of knowledge going to come from? The internet? The bad ideas peddled in certifications? Reddit?

Then I hear there's AI opportunity in binding and automating JIRA workflows, or even PO-through-development strands. Interesting ideas with no real contenders yet.

But here's what I DO know. Businesses, especially the biggest ones, struggle with speed. They're desperate to innovate, if only to beat their competitors to market. They have the best talent pools and the funds to build and experiment. They're just no darned good at it. Same story: silos, dependencies, lack of executive support, fear of failure, organizational inertia. And THAT growing distance between potential and capability is the real AI opportunity.

What Will Never Change

AI will keep growing and shifting. The big surprises are still ahead of us. And you don't need to call out what they are. What you need to recognize is that big groups of people are always slow to move. There's a natural heaviness to large groups, and with only a few exceptions, they're not nimble or agile. It doesn't matter what the next big thing is. ChatGPT went crazy almost overnight. I first heard about it in early 2022, and it's been all we've been discussing since. If it takes you 2 1/2 years to leverage the advantage, you're behind. I'm not even sure people WANT to adopt AI that rapidly, because in this moment, there's a lot of fear hovering over job loss, ethical use, and security.
Which means the real AI opportunity will remain elusive to companies unless they learn how to manage that inertia. You could be helping them overcome that fear.

If you liked this episode, you should also check out...
Will AI Save Agile? Episode_272_AI_and_Agile
Adam Smith of Tension – Navigating AI Disruption and Leading The Unknown

**GET THE BUSINESS OUTCOMES PARTNER PLAYBOOK**
Learn how to deliver undeniable ROI that saves your job and accelerates your future
https://learning.fusechamber.com/outcomes-partner-playbook

**FORGE GENESIS IS HERE**
All the skills you need to stop relying on job postings and start enjoying the freedom of an Agile career on YOUR terms. First cohort starts in Q1 2026
https://learning.fusechamber.com/forge-genesis

**THE ALL NEW FORGE LIGHTNING**
12 weeks to elite leadership!
https://learning.fusechamber.com/forge-lightning

**JOIN MY BETA COMMUNITY FOR AGILE ENTREPRENEURS AND INTRAPRENEURS**
The latest wave in professional Agile careers. Get the support you need to Forge Your Freedom! Join for FREE here:
https://learning.fusechamber.com/offers/Sa3udEgz

**CHECK OUT ALL MY PRODUCTS AND SERVICES HERE:**
https://learning.fusechamber.com

**ELEVATE YOUR PROFESSIONAL STORYTELLING – Now Live!**
The most coveted communications skill – now at your fingertips!
https://learning.fusechamber.com/storytelling

**JOIN THE FORGE**
New cohorts for Fall 2025! Email for more information: contact@badassagile.com

We're also on YouTube! Follow the podcast, enjoy some panel/guest commentary, and get some quick tips and guidance from me:
https://www.youtube.com/c/BadassAgile

Follow The LinkedIn Page:
https://www.linkedin.com/showcase/badass-agile

Our mission is to create an elite tribe of leaders who focus on who they need to become in order to lead and inspire, and to be the best agile podcast and resource for effective mindset and leadership game.
Contact us (contact@badassagile.com) for elite-level performance and agile coaching, speaking engagements, team-level and executive mindset/agile training, and licensing options for modern, high-impact, bite-sized learning and educational content.
MY NEWSLETTER - https://nikolas-newsletter-241a64.beehiiv.com/subscribe

Join me, Nik (https://x.com/CoFoundersNik), as I interview David Kaylor (https://x.com/@David__Kaylor). In this episode, we dive into the reality of AI adoption in the highly regulated banking sector and why the industry has been a slow follower. David, who works on digital banking services at Alchemy Technology, gives us an insider's look at how he leverages AI to eliminate repetitive tasks from his daily workflow. We discuss the specific strengths of different AI models like Gemini, Grok, ChatGPT, and Claude, and how to choose the right one for the job. Even if you have no technical coding background, David explains how you can use these AI assistants alongside tools like Replit and Google Apps Script to build powerful business automations, streamline team time tracking in Jira, and even create a fun clicker game in under 30 minutes.

Questions This Episode Answers:
1. Why is there so much fear around implementing AI tools in the banking and finance industry?
2. How can you use AI to write simple scripts and automate repetitive business tasks without being a software developer?
3. What are the unique advantages of using Gemini, Grok, and Claude over just relying on ChatGPT?
4. How can you leverage Google Apps Script and Google Sheets to automatically send email alerts to your sales team?
5. How can an AI-generated script seamlessly audit your team's Jira tickets and solve everyday time-tracking headaches?

Enjoy the conversation!

Love it or hate it, I'd love your feedback. Please fill out this brief survey with your opinion or email me at nik@cofounders.com with your thoughts.

MY NEWSLETTER: https://nikolas-newsletter-241a64.beehiiv.com/subscribe
Spotify: https://tinyurl.com/5avyu98y
Apple: https://tinyurl.com/bdxbr284
YouTube: https://tinyurl.com/nikonomicsYT

This week we covered:
00:00 AI in Everyday Work Life
02:52 The Role of AI in Banking
06:13 Personal AI Applications and Tools
09:06 Creating with AI: Game Development
12:04 Automating Tasks with Google Apps Script
14:58 Exploring Automation Opportunities
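The Sheets-to-email automation mentioned in the questions above can be sketched in Google Apps Script (which is JavaScript). This is an illustrative sketch only, not code from the episode: the sheet name "Leads", the column layout, and the seven-day threshold are all hypothetical. The row-selection logic is kept as a plain function so it can be sanity-checked outside Google's environment; `SpreadsheetApp` and `MailApp` are real Apps Script services that only exist when the script runs inside Google Workspace.

```javascript
// Hypothetical sheet layout: [lead name, days since last contact, email].
// Pure helper: pick rows whose "days since last contact" exceeds a threshold.
function rowsNeedingAlert(rows, maxDays) {
  return rows.filter(function (row) {
    return Number(row[1]) > maxDays;
  });
}

// Apps Script entry point (runs only inside Google Sheets; shown for shape).
// Attach it to a time-driven trigger via Extensions > Apps Script > Triggers.
function sendSalesAlerts() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Leads"); // hypothetical sheet name
  var rows = sheet.getDataRange().getValues().slice(1); // drop the header row
  rowsNeedingAlert(rows, 7).forEach(function (row) {
    MailApp.sendEmail(
      row[2],
      "Follow-up reminder",
      'Lead "' + row[0] + '" has had no contact for ' + row[1] + " days."
    );
  });
}
```

The same shape (pure selection logic plus a thin service wrapper) is what makes these AI-generated scripts easy to review before trusting them with your inbox.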
Atlas Camp 2026 has wrapped in Amsterdam, and this week on The Jira Life we're bringing you the inside scoop. Join us as we welcome Peter Van de Voorde and Dan Hardiker to break down everything that mattered from Atlas Camp 2026.

From AI-first app development to the continued evolution of Forge and platform extensibility, Peter and Dan share their firsthand insights on:
- The biggest themes shaping the Atlassian ecosystem in 2026
- How AI agents and automation are changing the way developers build on Jira and Confluence
- What's new (and what's next) for Forge and the Atlassian developer platform
- Roadmap signals that solution partners and app vendors should be paying attention to
- The hallway conversations and community energy you didn't see

Atlas Camp isn't just another conference: it's where Atlassian's developer future takes shape. If you build, administer, or extend Atlassian apps, this episode will help you understand how AI, extensibility, and ecosystem strategy are converging, and what that means for your work in the year ahead. Whether you attended in Amsterdam or followed from afar, this is your definitive Atlas Camp 2026 recap.
CoreStory is building code intelligence platforms that address the fundamental limitation of today's coding agents: their inability to navigate complex enterprise codebases. While foundation models excel at greenfield development, they fail at real-world engineering tasks in systems spanning millions of lines of code. CoreStory's context layer delivers a 44% improvement on SWE-bench, the industry's standard benchmark for measuring coding agent effectiveness on actual GitHub issues. In this episode of BUILDERS, I sat down with Anand Kulkarni, CEO of CoreStory, to explore how his team is enabling the shift to AI-native engineering and seeding the category of spec-driven development across Microsoft, GitHub, and Amazon.

Topics Discussed:
- Building with the GPT-3 API 18 months before ChatGPT went public
- Why even GPT-5 and Opus 4.5 struggle with enterprise codebases on SWE-bench
- The narrative shift required when selling AI pre- and post-ChatGPT
- CoreStory's 44% improvement in coding agent performance through context intelligence
- How "spec-driven development" got adopted by Microsoft, GitHub, and Amazon without formal analyst relations
- The parallel between JIRA monetizing Agile and CoreStory enabling AI-native engineering
- Three-channel distribution: direct enterprise, coding agent partnerships via MCP, and hyperscaler/GSI routes
- Why specs become the source of truth while code becomes disposable in the AI era

GTM Lessons For B2B Founders:

Match your narrative precision to technical depth: CoreStory deploys three distinct positioning strategies based on audience sophistication. For AI practitioners tracking benchmarks, they lead with "44% SWE-bench improvement", a metric that immediately signals meaningful progress on the hardest problem in the space. For engineering leaders aware of AI tooling but not deep in the research, they focus on velocity gains and ROI metrics. For executives, they describe reverse-engineering codebases into machine-readable specs. The key insight: technical audiences dismiss vague value props, while non-technical audiences get lost in benchmark details. Map your positioning to how your audience measures success in their world.

Seed category language through earned adoption, not manufactured consensus: Anand initially called their approach "requirements-driven development" before simplifying to "spec-driven development." Rather than pitching analysts, they used the term consistently in customer conversations, gave talks at GitHub Universe, and shipped demos showing the workflow. When customers naturally adopted the language and community leaders began using similar terminology independently, Microsoft and GitHub followed with their own implementations (like GitHub's SpecKit). The lesson: category language sticks when practitioners choose to use it because it clarifies their work, not because a vendor pushed it. Focus on customer adoption as proof of concept before seeking broader market validation.

Position against emergent practices, not just incumbent products: CoreStory doesn't position against legacy code analysis tools; they position as the enabler of AI-native engineering, the discipline that will displace Agile. Anand's insight from watching JIRA's success: "People don't love JIRA. What they love is Agile as a way to move away from waterfall." CoreStory is betting that 10x velocity gains from AI-native practices will drive the same categorical shift. When you're early in a technology wave, attach to the practice change (how teams will work differently) rather than feature comparisons with existing tools. Movements create markets.

Design channel strategy around customer problem awareness: CoreStory's three channels map to different stages of buyer sophistication. Direct enterprise comes from teams already deep in AI engineering who've hit the context limitation wall. Coding agent partnerships (via MCP integration with tools like Cognition and Factory) serve builders wanting better AI tooling who haven't diagnosed the context problem yet. Hyperscalers and GSIs distribute into modernization and maintenance projects where AI enablement is emerging as a requirement. Each channel serves a distinct buyer journey stage. Don't force one go-to-market motion; design multiple paths based on where different customer segments are in understanding the problem you solve.

Navigate pre-legitimacy markets by hiding the breakthrough: Before ChatGPT, selling anything AI-driven faced immediate skepticism about whether it was "real" or just smoke and mirrors. Anand couldn't lead with AI without triggering disbelief. CoreStory focused on delivered outcomes ("here's what you'll be able to do") with AI as the mechanism, not the message. Post-ChatGPT, the challenge flipped: everyone expects AI, but now the differentiation question becomes harder. If you're building on emerging technology before market consensus forms, deemphasize the technology until buyers have context to evaluate it. Once the market validates the technology category, shift to demonstrating your specific technical advantage within it.

Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
From Content Overload to Curated Influence: The Future of Connection

Overview
In this transformative episode, Julie Riga sits down with Kyle Hudson, founder and CEO of Stacklist, to explore how curation, not content creation, is becoming the future of trust and influence in an AI-driven world. Kyle shares his journey from building digital solutions for Google, Disney, and Coca-Cola to creating a platform that helps service professionals own their client relationships. Together, they dive into the ingredients for success in today's rapidly evolving landscape, discussing tech stack management, experimental mindsets, and becoming omnipotential leaders.

Guest: Kyle Hudson, Founder & CEO of Stacklist
Host: Julie Riga

Guest Background
Kyle Hudson is the founder and CEO of Stacklist, The Social Curation Network, helping service professionals turn local knowledge into shareable, AI-discoverable hubs. With a proven track record building digital solutions for brands like Google, Disney, and Coca-Cola, Kyle now focuses on empowering experts to own their client relationships rather than renting attention from social media platforms. Kyle is a member of the "Nintendo generation" (born 1979), shaped by growing up with technology as native rather than novel. His philosophy centers on omnipotential: the belief that you have the potential to be many things, not just one specialist.

Fun Fact: Kyle is a burger connoisseur who dips his fries in mustard!

Key Topics Discussed

The Ingredients for Success:
- Curiosity & Experimental Mindset - Being open to trying new tools without fear of failure, treating business as one big lab experiment, and learning by doing rather than waiting for perfection.
- Fluidity & Adaptability - Avoiding vendor lock-in, being willing to scrap established systems for better solutions, and building the "Swiss Army Knife" skillset instead of narrow specialization.
- Omnipotential Leadership - Embracing multiple roles as an entrepreneur, understanding you're not defined by one label, and how generalists with AI partners become superhuman.

Tech Stack Management: The $2000/month subscription problem, using Slack and Zoom as foundations, Linear as an elegant alternative to Jira, Claude Code for financial projections and custom agents. Strategy: lock into solutions, not vendors.

The Stacklist Philosophy: Curation over content creation, transforming personal expertise into discoverable resources, helping professionals own relationships instead of depending on algorithms. Everyone is known for something valuable.

Memorable Quotes
"Omnipotential is this idea that you are not X, you have the potential to be X, Y, Z, and A, B, C."
"I just jump off the cliff, and as I'm falling, I'm learning. That's how I do it."
"You only have to worry about AI taking your job if you're standing still. But if you're diving into it, you're learning skills you can teach others."

Key Insights
Kyle identifies as part of the "Nintendo generation," those who grew up with technology as native rather than novel, creating a fundamental difference in how leaders approach innovation. The conversation validates entrepreneurs who didn't fit the corporate mold. In the AI era, the valuable entrepreneur is the curious generalist who can leverage AI to solve novel problems.

Action Steps
- Audit your tech stack and eliminate 70%
- Try one new AI tool this week
- Create your stack on Stacklist with your favorite topics
- Schedule experimentation time

Connect: stacklist.app/kyle
Connect: Stay On Course with Julie Riga

Essential listening for entrepreneurs who want to thrive in an AI-driven future.

#Leadership #Innovation #AI #Entrepreneurship #PersonalGrowth
This week the TJL crew reports and analyzes the news coming from Atlassian's Team on Tour: Government event in Washington D.C. Join us as Alex plays field reporter giving updates from the event!

The Jira Life

Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group!
https://ace.atlassian.com/the-jira-life/
Or follow us on LinkedIn: /the-jira-life
Become a member on YouTube to get access to perks:
https://www.youtube.com/@thejiralife/...

Hosts:
Alex "Dr. Jira" Ortiz - /alexortiz89, @apetechtechtutorials
Rodney "The Jira Guy" Nissen - /rgnissen, https://thejiraguy.com
Sarah Wright - /satwright

Producer: "King Bob" Robert Wen - /robert-wen-csm-spc6-a552051
Executive Producer: Lina Ortiz

Music provided by Monstercat:
Intro: Nitro Fun - Cheat Codes (/monstercat)
Outro: Fractal - Atrium (/monstercatinstinct)
Prabhleen Kaur: The Art of Coaching Product Owners on What vs. How

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

The Great Product Owner: Master of Stakeholder Relationships and the Power of No

"The best PO is the person who has the superpower of saying no, and they can deal with the stakeholders with the same prowess." - Prabhleen Kaur

Prabhleen describes working with a Product Owner who managed multiple stakeholders—not just a handful, but a significant number with competing priorities. What made him exceptional was his deep understanding of each stakeholder's pulse and motivations. He knew when to push back and how to frame the "no" in a way that stakeholders could accept. This wasn't random resistance—it came from thorough preparation manifested in clear roadmaps that made most incoming work predictable for the team. His user stories stood out for their richness in context: beyond the business requirements, they included information about who would be impacted, which proved invaluable for a team dealing with multiple interconnected systems. He leveraged JIRA's priority field effectively, ensuring the moment anyone opened the board, they could immediately understand what mattered most. Prabhleen emphasizes that this PO understood his role as the "what" while respecting the team as the "how." By maintaining strong stakeholder relationships built on mutual understanding, he created space for the team to prepare, plan, and deliver without constant firefighting.

Self-reflection Question: Does your Product Owner have the preparation and stakeholder relationships needed to confidently say "no" when priorities compete, or does every request become an emergency?

The Bad Product Owner: Technical Experts Who Manage the Sprint Backlog

"The PO is the what, and the team is the how. When POs start directing the team about how to do things, the sprint goal gets compromised." - Prabhleen Kaur

Prabhleen addresses a common anti-pattern she's observed repeatedly: Product Owners with technical backgrounds who cross the line from "what" into "how." When POs come from developer or technical roles, their expertise can become a liability if they start prescribing solutions rather than defining problems. They direct the team on implementation approaches, suggest specific technical solutions in user stories, and effectively manage the sprint backlog instead of focusing on the product backlog. The consequences are predictable: stories keep getting added or removed mid-sprint, the sprint goal becomes meaningless, and the team ends up delivering nothing because focus is constantly shifting. Prabhleen's solution starts in backlog refinement, where she ensures conversations about technical approaches happen openly with the whole team during estimation. When a PO suggests a specific implementation, she facilitates discussion about alternatives, allowing the team to voice their perspective. The key insight: everyone comes from a good place—the PO suggests solutions because they believe they're helping. The Scrum Master's role is to create space for the team to own the "how" while helping the PO see the value in stepping back.

Self-reflection Question: When your Product Owner has technical expertise, how do you help them contribute their knowledge without directing the team's implementation choices?

[The Scrum Master Toolbox Podcast Recommends]
Prabhleen Kaur: When Team Members Raise Concerns with Clarity, Not Anger

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"My idea of success as a Scrum Master is when you look around, you see motivated people, and when something goes wrong, they come to you not in anger, but with concern." - Prabhleen Kaur

Prabhleen offers a refreshing perspective on measuring success as a Scrum Master that goes beyond velocity charts and feature counts. She shares a pivotal moment when her team was in production, delivering relentlessly with barely any time to breathe. A team member approached her—not with frustration or blame—but with thoughtful concern: "This is not going to work out." He sat down with Prabhleen and the Product Owner, explaining that as the middle layer in an API creation team, delays from upstream were creating a cascading problem. What struck Prabhleen wasn't just the identification of the issue, but how he approached it: with options to discuss, not demands to make. This moment crystallized her definition of success. When team members feel safe enough to voice concerns early, when they come with ideas rather than accusations, when they see themselves as part of the solution rather than victims of circumstances—that's when a Scrum Master has truly succeeded. Prabhleen reminds us that while stakeholders may focus on features delivered, Scrum Masters should watch how well the team responds to change. That adaptability, rooted in psychological safety and mutual trust, is the true measure of a team's maturity.

Self-reflection Question: When problems emerge in your team, do people approach you with defensive anger or constructive concern? What does that tell you about the psychological safety you've helped create?
Featured Retrospective Format for the Week: Keep-Stop-Happy-Gratitude

Prabhleen shares her favorite retrospective format, born from necessity when she joined an established team with dismal participation in their standard three-column retrospectives. She transformed it into a four-column approach: (1) What should we keep doing, (2) What should we stop doing, (3) One thing that will make you happy, and (4) Gratitude for the team.

The third column—asking what would make team members happy—opened unexpected doors. Suggestions ranged from team outings to skipping Friday stand-ups, giving Prabhleen real-time insights into team needs without waiting for formal working agreement sessions. The gratitude column proved even more powerful. "Appreciation brings a space where trust is automatically built. When every 15 days you're sitting with the team making a point to say thank you to each other for all the work you've done, everybody feels mutually respected," Prabhleen explains. This ties directly to the trust-building discussed in Tuesday's episode—using retrospectives not just to improve processes, but to strengthen the human connections that make teams resilient.

[The Scrum Master Toolbox Podcast Recommends]
Security doesn't fail because you missed a tool; it fails because "secure today" tricks you into relaxing tomorrow. This episode exposes why the real fight isn't compliance... it's whether your defenses hold up once attackers hit you with machine-speed pressure.

Ron sits down with Sonali Shah, CEO of Cobalt, to talk about how human-led, AI-powered penetration testing is evolving into full-spectrum offensive security. Sonali shares how Cobalt can start a test in 24 hours, push findings directly into Slack/Teams and Jira, and use learnings from 5,000+ pentests a year to continuously sharpen what gets caught. The big takeaway: automation finds the easy stuff, while humans find the business-logic traps and attack chains that actually break companies.

Impactful Moments
00:00 - Introduction
02:21 - Sonali's unexpected CEO path
06:10 - Compliance isn't real security
10:19 - PTaaS: start in 24 hours
12:33 - 5,000 pentests yearly scale
17:01 - Humans beat automation limits
20:16 - AI behavior vulnerabilities emerge
27:54 - Indirect prompt injection explained
30:51 - Why juniors + AI is risky
38:27 - 2026 becomes AI battleground

Links
Connect with Sonali on LinkedIn: https://www.linkedin.com/in/sonalinshah/
Check out Cobalt: https://www.cobalt.io

Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Do you remember the early days of your career? You likely spent hours coding late into the night, fueled not by a paycheck, but by the sheer joy of building. But somewhere along the way, that intrinsic fire faded, replaced by the extrinsic motivators of Jira tickets, performance reviews, and ultimately the almighty dollar.

In this episode of the Career Growth Accelerator, I explore why this shift happens and how it might be the very thing keeping you stuck. We discuss the "Overjustification Effect"—how getting paid for your passion can actually degrade your performance—and how to reclaim the autotelic personality required to enter a flow state and accelerate your career.

• The Overjustification Effect: Learn why introducing extrinsic rewards (like a salary) for a task you inherently enjoy can weaken or completely replace your intrinsic motivation, eventually making the work feel like a chore.
• The Loss of Flow: Discover how moving from hobbyist to professional changes your relationship with the work, often stripping away the conditions necessary for "flow state," such as risk-taking and immediate feedback.
• Autotelic Personality: Understand the concept of being "autotelic"—doing something for its own sake—and why this trait is critical for high-quality, creative work that pushes your career forward.
• The Stagnation Trap: Recognize that if your only motivation is doing what is required to get paid, you are unlikely to take on the voluntary challenges necessary to grow to the next level.
• Reclaiming Your Drive: I discuss how finding pockets of intrinsic motivation—even if they are ancillary to your main job—can reignite your ability to enter flow, improve your work quality, and break through career plateaus.
Prabhleen Kaur: How AI Is Changing the Way Agile Teams Deliver Value

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"AI's output is not the final output—it's always the two eyes we have that will get us the best results." - Prabhleen Kaur

Prabhleen brings a timely challenge to the coaching conversation: the impact of AI on teams and how Scrum Masters should navigate this transformation. She frames it as both a challenge and an opportunity—teams are now capable of delivering faster than consumers can absorb, fundamentally changing expectations and dynamics. Prabhleen has observed her teams evolve from uncertainty about AI to confidently leveraging it for practical benefits. Developers use AI for writing and understanding code, particularly helpful for onboarding new team members who need to comprehend existing codebases quickly. QA professionals find AI invaluable for generating test cases based on story and epic context already captured in JIRA.

The next frontier? Agentic AI, where AI systems communicate with each other to produce better outputs. But Prabhleen offers an important caution: AI is learning from many conversations, not all of which are reliable. The human element—critical thinking and verification—remains essential. For Scrum Masters, this means facilitating conversations about how teams want to experiment with AI, exploring edge cases in testing that AI can help identify, and helping teams navigate the evolving landscape of possibilities while maintaining quality and judgment.

Self-reflection Question: How are you helping your team explore AI as a tool for improvement while ensuring they maintain critical thinking about the outputs AI produces?

[The Scrum Master Toolbox Podcast Recommends]
Prabhleen Kaur: When Lack of Trust Turns Teams Into Isolated Individuals

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"Teams self-destruct despite best efforts when they lack trust." - Prabhleen Kaur

Prabhleen observed a troubling pattern while shadowing a team: stand-ups had become a register activity where people reported individual status without any connection to the sprint goal. There was no "we" in the conversation—only "I." The team had experienced a missed deadline due to a PR conflict that wasn't merged in time, but instead of addressing it openly, everyone focused on fixing the immediate problem while avoiding the deeper conversation. The discomfort was never voiced, and resentment accumulated silently. Prabhleen explains that team destruction is never about one action—it's about the accumulation of unspoken concerns that eventually explode at the worst possible moment.

To rebuild trust, she recommends starting with peer reviews that encourage natural collaboration and conversation. Scrum Masters must be vocal about challenges in front of the entire team, modeling the openness they want to see. For teams that have completely withdrawn, anonymous feedback and scheduled one-on-ones can create safe spaces for honest communication. The key insight? Trust is rebuilt when people realize they will be heard and understood, not judged. In this segment, we talk about how trust is the foundation of effective teams and how its absence leads to working in silos.

Self-reflection Question: When your team experiences a failure or missed deadline, do you create space for open conversation about what happened, or does everyone quietly move on while resentment builds?
Featured Book of the Week: Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland

Prabhleen recommends Scrum: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland as a foundational read for understanding the spirit behind the framework. "When I actually read the book and understood the nuances of rugby and how the team should be, everything started making sense. I grew beyond the Scrum guide, beyond following rules—it's about how the team operates around you as a collective," she explains. Prabhleen also highly recommends Turn the Ship Around! by David Marquet, summarizing its core message as "leaders lead leaders." Both books shaped her understanding that frameworks exist to enable collaboration, not to create compliance. Check out the David Marquet episodes on the Scrum Master Toolbox Podcast for more insights on intent-based leadership.

[The Scrum Master Toolbox Podcast Recommends]
Prabhleen Kaur: Letting Teams Own Their Process Through Working Agreements

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

"It's about coaching the team, not teaching them." - Prabhleen Kaur

Prabhleen shares a powerful lesson about the dangers of being too directive with a forming team. When she joined a new team, her enthusiasm and experience led her to immediately introduce best practices, believing she was setting the team up for success. Instead, the team felt burdened by rules they didn't understand the purpose of. The process became about following instructions rather than solving problems together. It wasn't until her one-on-one conversations with team members that Prabhleen realized the disconnect. She discovered that the team viewed the practices as mandates rather than tools for their benefit. The turning point came when she brought this observation to the retrospective, and together they unlearned what had been imposed. Now, when Prabhleen joins a new team, she takes a different approach. She first seeks to understand how the team has been functioning, then presents situations as problems to be solved collectively. By asking "How do you want to take this up?" instead of prescribing solutions, she invites team ownership. This shift from teaching to coaching means the team creates their own working agreements, their own definitions of ready and done, and their own communication norms. When people voice solutions themselves, they follow through because they own the outcome. In this episode, we refer to working agreements and their importance in team formation.

Self-reflection Question: When you join a new team, do you first seek to understand their current ways of working, or do you immediately start suggesting improvements based on your past experience?

[The Scrum Master Toolbox Podcast Recommends]
In this episode, Brad Hibbert (COO & Chief Strategy Officer at Brinqa) joins Ashish to explain why traditional risk-based vulnerability management (RBVM) is no longer enough in a cloud-first world. We explore the evolution from simple patch management to Exposure Management, a holistic approach that sits above your security tools to connect infrastructure, code, and cloud risks to actual business impact. Brad breaks down the critical difference between a "Risk Owner" (the service owner) and a "Remediation Owner" (the team fixing the bug), and why this distinction solves the "who fixes this?" problem. This conversation covers practical steps to uplift your VM program, how AI is helping prioritize the noise, and why compliance often just "proves activity" rather than reducing real risk. Whether you're drowning in Jira tickets or trying to automate remediation, this episode provides a roadmap for modernizing your security posture.

Guest Socials - Brad's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security Social Channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:50) Who is Brad Hibbert? (Brinqa)
(04:55) The Evolution: From Scanning Servers to Cloud Complexity
(06:50) What is Risk-Based Vulnerability Management?
(08:50) Risk Owners vs. Remediation Owners: Who Fixes What?
(12:00) How AI is Changing Vulnerability Management
(15:20) Defining Exposure Management: Moving Beyond the Tools
(18:30) The Challenge of "Data Inconsistency" Between Tools
(22:30) Readiness Check: Are You Ready for Exposure Management?
(25:10) Automated Remediation: Is "Zero Tickets" Possible?
(28:40) Compliance vs. Risk: Why "Activity" isn't "Impact"
(31:30) Maturity Milestones for Exposure Management
(36:50) Fun Questions: Golf, Turkish Kebabs & Friendships
This week the TJL crew sits down with Josh Costella, Lead Architect for Sentify and co-lead of the Phoenix ACE chapter. Join us as we talk about Atlassian Government Cloud and Isolated Cloud and other topics sure to be discussed at TEAM 26!

The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group!
https://lnkd.in/g5834Kix
Or Follow us on LinkedIn!
https://lnkd.in/epszdbRj
Become a member on YouTube to get access to perks:
https://lnkd.in/gzDWDAzN
Hosts:
- Alex "Dr. Jira" Ortiz
https://lnkd.in/eP2TQHcE
https://lnkd.in/ewxmQs2s
- Rodney "The Jira Guy" Nissen
https://lnkd.in/exhJAMVm
https://thejiraguy.com
- Sarah Wright
https://lnkd.in/gA6vNvmX
Producer:
- "King Bob" Robert Wen
https://lnkd.in/gTpSr7_v
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
=====================================
Intro: Nitro Fun - Cheat Codes
https://lnkd.in/eZp7w7ie
Outro: Fractal - Atrium
https://lnkd.in/eMpcN8rf
Agile Is Not a Process. It's How Smart Teams Think.

Most people think agile is Jira boards, sprints, standups, and sticky notes. Here's the thing. Those are just tools. Agile is a mindset about how work *should* move in a world that refuses to stay predictable. If you've ever worked on a project where requirements changed, deadlines shifted, or priorities flipped overnight, you already know why traditional project management struggles.

How to connect with AgileDad:
- [website] https://www.agiledad.com/
- [instagram] https://www.instagram.com/agile_coach/
- [facebook] https://www.facebook.com/RealAgileDad/
- [Linkedin] https://www.linkedin.com/in/leehenson/
Manuela Barcenas breaks down how marketing work has flipped from “writer + editor” to “manager of agents.” She shares two concrete workflows: (1) using Claude Projects to reposition and modernize 100 legacy blog posts in a week (including updated product messaging, AI-forward advice, and internal links), and (2) using Fellow's “Ask Fellow” to mine anonymized customer-call transcripts for original quotes and pain points—then turning those insights into publish-ready integration/use-case articles in hours, not weeks. The throughline: output is easy now; taste, judgment, and review are the differentiators.

Timestamps
0:00–0:00 - Intro
1:18–2:54 Early Fellow days: one blog/week, months-long ebooks, craftsmanship vs scale
3:06–3:26 Scale expectations now: Amazon's ebook upload limit anecdote (3/day)
3:40–4:30 Fellow previously managing an “army of writers” → now mostly AI/agents
4:36–5:00 “Taste” as the differentiator: what good content is + standing out
5:53–7:12 The 100-post update explained: not link swaps—full repositioning + modernized advice
7:25–9:36 Switching from ChatGPT to Claude; LinkedIn poll results + “context retention” theme
9:48–10:21 Claude Projects setup: separate projects to maintain context and instructions
14:43–15:29 Prompt versioning: internal links, new features, and repeated refinement cycles
18:55–19:20 Demo: paste URL → Claude fetches page → follows checklist automatically
19:26–20:24 Manuela's QA: she reads/edits everything; “taste” = final layer (like editing writers)
21:38–23:17 Claude Skills discussion: turning repeated workflows into reusable MD “skills” (personal vs company-wide)
25:42–26:26 SEO myth: focus isn't “AI penalty,” it's originality and substance (quotes, stats, real insight)
26:38–28:39 Original content engine: Ask Fellow pulls anonymized customer-call insights by feature/integration
28:39–31:21 Building documents from transcripts (pain points, best practices, FAQs, quotes) → export to Doc/PDF
31:21–33:29 Feed exported insights into Claude Project to draft a tight article rich with customer quotes
33:29–36:06 Why it works: management loop (outcomes → constraints → review → feedback) at faster cadence
36:18–37:30 What's next: Claude Code / Claude “co-work”; projects as “mini employees”
37:02–38:06 Personal brand workflow: Claude analyzes best LinkedIn posts → style guide + voice-based drafting (Whisper Flow)
38:28–39:12 Wrap: AI speed is real; staying current requires constant learning

Tools & technologies mentioned (with brief descriptions)
Claude (Anthropic) — LLM used for higher-quality long-context writing, structured rewrites, and content systems.
Claude Projects — Workspace feature to keep persistent instructions/context per workflow (e.g., content optimization agent).
Claude Skills — Reusable capabilities packaged as uploaded markdown files (personal or org-wide) to standardize output.
Claude Code / Claude “co-work” — Anthropic workflows/webinars referenced for deeper automation beyond writing (emerging).
ChatGPT — Baseline comparison model; Manuela notes switching due to Claude's perceived context + output quality.
Excel + Claude — Mentioned via finance demo: using Claude in Excel to build financial models.
Fellow.ai — AI meeting assistant used for transcripts, summaries, action items, and cross-tool integrations.
Ask Fellow — Fellow feature that queries meeting knowledge (calls/transcripts) to generate anonymized insight docs.
Anonymization (in Fellow) — Removes identifying customer details while preserving job titles/quotes for safe content use.
Integrations (examples named) — Slack, Asana, HubSpot, Salesforce, Linear, Jira, Confluence (tools Fellow connects with).
Whisper Flow — Voice-to-text capture tool used to speak ideas, then convert into styled writing (e.g., LinkedIn drafts).

Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
This week the TJL crew welcomes Paulo Ramalho, Atlassian Community Champion from Norway and Senior Atlassian Consultant. Watch as Paulo explains how Assets, as a platform app, can now establish governance of your Jira and Confluence items.

The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group!
https://ace.atlassian.com/the-jira-life/
Or Follow us on LinkedIn! / the-jira-life
Become a member on YouTube to get access to perks:
https://www.youtube.com/@thejiralife/...
Hosts:
- Alex "Dr. Jira" Ortiz / alexortiz89 / @apetechtechtutorials
- Rodney "The Jira Guy" Nissen / rgnissen https://thejiraguy.com
- Sarah Wright / satwright
Producer:
- "King Bob" Robert Wen / robert-wen-csm-spc6-a552051
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
=====================================
Intro: Nitro Fun - Cheat Codes / monstercat
Outro: Fractal - Atrium / monstercatinstinct
Confirm uses organizational network analysis to surface hidden high performers and toxic actors that traditional performance reviews miss - identifying the quiet contributors everyone relies on and the problematic employees who manage up effectively. In this episode of BUILDERS, I sat down with David Murray, Cofounder & CEO of Confirm, to dissect their most painful go-to-market lessons. David shares why leading with methodology superiority torpedoed their early sales, the specific discovery framework that flipped their win rate, and how they segment the four distinct HR buying motions that require completely different sales approaches.

Topics Discussed:
- Why traditional performance reviews are 60% manager bias according to research by Maynard Goff
- How organizational network analysis identifies introverted high performers and toxic actors who manage up
- The catastrophic early GTM mistake: positioning against existing processes
- Discovery frameworks for conservative buyers in compliance-heavy functions
- Talk ratio targets and silence techniques from clinical psychology applied to enterprise sales
- Channel testing methodology that identified LinkedIn ads as their primary acquisition driver
- The four-quadrant framework for HR sales: CHRO vs line manager, company-wide vs HR-only tools
- Messaging strategies that balance shock factor with substantive education

GTM Lessons For B2B Founders:
Discovery trumps differentiation in category creation: Confirm's design partner had promoted toxic employees and lost quiet high performers in the same cycle—a perfect case study for their ONA methodology. But when they pitched other HR leaders with "here's why your approach is broken," they hit walls. The shift: stop selling methodology, start diagnosing pain. Reference what you've observed at similar companies—"Some folks at your size tell us they struggle with X, is that true for you?"—then let prospects surface their version of the problem. 
Only after they've articulated their pain do you map your differentiated approach to their specific context.

Target buyer timing, not just buyer titles: Confirm identified a specific trigger: HR leaders in their first 1-2 months at a new company. These leaders are hired to make change and need early wins. The outreach question: "How are you looking to make your mark?" This surfaces whether they're hungry for innovation or managing political capital. A newly hired CHRO has different motivations than a 5-year veteran protecting their process choices. Map your outreach to career timing, not just seniority.

Enforce 50/30/20 talk ratios in discovery: David's target: prospects speak 60-80% of discovery calls, with 50% being acceptable. If you're talking more than half the time, you're pitching, not discovering. The clinical psychology technique: positive encouragers ("yeah," "huh") plus deliberate silence after open-ended questions. Prospects will fill silence with the real issues—budget constraints, political dynamics, past vendor failures. This intel is gold for multi-threading and objection handling later.

Test channel-message fit with minimal spend: Confirm's approach: "do everything a little bit and see what sticks." They found LinkedIn ads with precise targeting (title, company size, recent job changes) delivered qualified pipeline cost-effectively, while other channels didn't. The framework: allocate 10-15% of budget across 5-6 channels for 60 days, measure cost-per-qualified-meeting, then concentrate spend. Plan for 3-6 month creative refresh cycles as audiences develop ad fatigue—this isn't set-and-forget. 
Map your product to the HR buying matrix: David identifies four distinct quadrants: (1) CHRO buyer, company-wide deployment = traditional enterprise sale, 6-18 month cycles, heavy multi-threading required; (2) CHRO buyer, HR-only tool = shorter cycles but still executive selling; (3) Line manager buyer, company-wide = requires bottom-up adoption mechanics; (4) Line manager buyer, HR-only = SMB-style transactional sale. Confirm operates in quadrant 1—the longest, most complex sale. Most founders don't explicitly map which quadrant they're in, leading to mismatched sales motions and blown forecasts.

Use provocative messaging with technical substance: "One-click performance reviews" generated meetings because it triggered both excitement (managers hate writing reviews) and concern (is AI replacing human judgment?). The key: the shock factor gets the meeting, but you need depth on the call. Confirm's explanation: the AI aggregates data from Asana, Jira, OKRs, peer feedback, and self-reflections to reduce recency bias, then generates a draft managers edit. The dystopian concern becomes a feature when you explain the data anchoring. Surface-level shock without technical credibility burns trust.

Adjust for organizational risk tolerance by function: HR and healthcare share conservative buying cultures due to compliance, documentation, and legal requirements. David contrasts this with selling to CTOs or engineers who "kick tires and want to break things." This affects everything: longer evaluation cycles, more stakeholders in legal/compliance, emphasis on security and data handling, reference checks weighted heavily. If you're selling to risk-averse functions, adjust your content (white papers, compliance documentation), your timeline expectations, and your change management positioning.

Reframe education as extraction, not instruction: David's mental model shift: "I need to learn from them" replaced "I need to educate them." 
In practice: "I've heard from others that calibration meetings consume 10+ hours per cycle with unclear outcomes. They tried approaches like forced ranking or manager-only decisions. Have you experimented with either?" This positions you as a pattern-matcher across their peer group, not a lecturer. They become receptive to alternatives because you've demonstrated you understand their world through other customers' experiences.

// Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co

// Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM
Jason Cohen is a four-time founder (including two unicorns, one being WP Engine) and an investor in over 60 startups, and has been sharing his lessons on company building at A Smart Bear for nearly 20 years. In this episode, Jason shares his methodical five-step framework for diagnosing stalled growth—a problem that faces almost every team.

We discuss:
1. Jason's five-step framework: logo retention, pricing, NRR, marketing channels, target market
2. A small tweak that'll double response rates on your cancellation surveys
3. Why “it's too expensive” is almost never the real reason customers cancel
4. The “elephant curve” of growth
5. How repositioning the same product can increase revenue 8x
6. When to reconsider if growth is even the right goal for your business

—Brought to you by:
10Web—Vibe coding platform as an API
Strella—The AI-powered customer research platform
Brex—The banking solution for startups

—Episode transcript: https://www.lennysnewsletter.com/p/why-your-product-stopped-growing
—Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0
—Where to find Jason Cohen:
• Preorder Jason's book: https://preorder.hiddenmultipliers.com/
• X: https://x.com/asmartbear
• LinkedIn: https://www.linkedin.com/in/jasoncohen
• Blog: https://longform.asmartbear.com
• Website: https://wpengine.com
—Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
—In this episode, we cover:
(00:00) Introduction to Jason Cohen
(05:19) Jason's writing journey
(08:25) Questions to ask when your product stops growing
(18:17) Getting real customer feedback
(20:27) Analyzing cancellation reasons
(26:54) Onboarding and activation
(29:35) Quick summary
(35:46) Revisiting pricing strategies
(41:46) Positioning strategies
(47:52) Why pricing is inseparable from your strategy
(52:06) The importance of net revenue 
retention (NRR)
(01:00:25) Asking whether or not this is good for the customer
(01:04:34) Leveraging existing customers
(01:06:42) Are your acquisition channels saturated? The “elephant curve”
(01:09:41) Why all marketing channels eventually decline
(01:12:04) Direct vs. indirect marketing channels
(01:13:36) Getting creative with new channels
(01:19:04) Do you actually need to grow?
(01:25:57) Deciding when to quit
(01:29:27) Book announcement
(01:33:21) AI corner
(01:34:35) Contrarian corner
(01:37:43) Lightning round and final thoughts
—Referenced:
• Tyler Cowen's website: https://tylercowen.com
• How to Perform a Customer Churn Analysis (and Why You Should): https://www.groovehq.com/blog/learn-from-customer-churn
• Linear: https://linear.app
• Jira: https://www.atlassian.com/software/jira
• Patrick Campbell's post on X about pricing: https://x.com/Patticus/status/1702313260547006942
• The art and science of pricing | Madhavan Ramanujam (Monetizing Innovation, Simon-Kucher): https://www.lennysnewsletter.com/p/the-art-and-science-of-pricing-madhavan
• Pricing your AI product: Lessons from 400+ companies and 50 unicorns | Madhavan Ramanujam: https://www.lennysnewsletter.com/p/pricing-and-scaling-your-ai-product-madhavan-ramanujam
• Pricing your SaaS product: https://www.lennysnewsletter.com/p/saas-pricing-strategy
• M&A, competition, pricing, and investing | Julia Schottenstein (dbt Labs): https://www.lennysnewsletter.com/p/m-and-a-competition-pricing-and-investing
• “Sell the alpha, not the feature”: The enterprise sales playbook for $1M to $10M ARR | Jen Abel: https://www.lennysnewsletter.com/p/the-enterprise-sales-playbook-1m-to-10m-arr
• Buffer: https://buffer.com
• AG1: https://drinkag1.com
• How to find hidden growth opportunities in your product | Albert Cheng (Duolingo, Grammarly, Chess.com): https://www.lennysnewsletter.com/p/how-to-find-hidden-growth-opportunities-albert-cheng
• How Duolingo reignited user growth: https://www.lennysnewsletter.com/p/how-duolingo-reignited-user-growth
• The Elephant in the room: The myth of exponential hypergrowth: https://longform.asmartbear.com/exponential-growth
• HubSpot: https://www.hubspot.com
• Zigging vs. zagging: How HubSpot built a $30B company | Dharmesh Shah (co-founder/CTO): https://www.lennysnewsletter.com/p/lessons-from-30-years-of-building
• Adjacency Matrix: How to expand after PMF: https://longform.asmartbear.com/adjacency/
• Ecosystem is the next big growth channel: https://www.lennysnewsletter.com/p/ecosystem-is-the-next-big-growth
• ChatGPT apps are about to be the next big distribution channel: Here's how to build one: https://www.lennysnewsletter.com/p/chatgpt-apps-are-about-to-be-the
• 10 contrarian leadership truths every leader needs to hear | Matt MacInnis (Rippling): https://www.lennysnewsletter.com/p/10-contrarian-leadership-truths
• Breaking the rules of growth: Why Shopify bans KPIs, optimizes for churn, prioritizes intuition, and builds toward a 100-year vision | Archie Abrams (VP Product, Head of Growth at Shopify): https://www.lennysnewsletter.com/p/shopifys-growth-archie-abrams
• Geoffrey Moore on finding your beachhead, crossing the chasm, and dominating a market: https://www.lennysnewsletter.com/p/geoffrey-moore-on-finding-your-beachhead
• ER on Prime Video: https://www.amazon.com/ER-Season-1/dp/B0FWK5WJQ4
• The Pitt on Prime Video: https://www.amazon.com/The-Pitt-Season-1/dp/B0DNRR8QWD
• Wispr Flow: https://wisprflow.ai
• Anker: https://www.anker.com
—Recommended books:
• Will: https://www.amazon.com/Will-Smith/dp/1984877925
• Monetizing Innovation: How Smart Companies Design the Product Around the Price: https://www.amazon.com/Monetizing-Innovation-Companies-Design-Product/dp/1119240867
• Hidden Multipliers: Small Things That Accelerate Growth: https://preorder.hiddenmultipliers.com
• On Writing Well: The Essential Guide to Mastering Nonfiction Writing and Effective Communication: https://www.amazon.com/Writing-Well-Classic-Guide-Nonfiction/dp/0060891548
• Crossing the Chasm, 3rd Edition: The Updated Version of the Insightful Guide on Bringing Cutting-Edge Products to the Mainstream: https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream/dp/0062292986
—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
—Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
In this episode of The Jira Life, we sit down with Robert Hean of Hean Tech to explore how Confluence documentation is evolving in the age of Rovo and Atlassian Intelligence.

As AI teammates like Rovo become part of everyday work, the quality of your documentation matters more than ever. We discuss what “good documentation” really means today, common mistakes teams make in Confluence, and how to structure content so Rovo can deliver accurate, helpful answers instead of amplifying documentation chaos.

Robert shares practical strategies for creating AI-ready Confluence spaces, improving knowledge discoverability, and aligning documentation practices with modern Jira and Jira Service Management workflows. Whether you are an Atlassian admin, documentation owner, or team lead preparing for AI-powered collaboration, this conversation will help you future-proof your knowledge base.

The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group!
https://ace.atlassian.com/the-jira-life/
Or Follow us on LinkedIn!
https://www.linkedin.com/company/the-jira-life/
Become a member on YouTube to get access to perks:
https://www.youtube.com/@thejiralife/join
Hosts:
- Alex "Dr. Jira" Ortiz
https://www.linkedin.com/in/alexortiz89/
https://www.youtube.com/@ApetechTechTutorials
- Rodney "The Jira Guy" Nissen
https://www.linkedin.com/in/rgnissen/
https://thejiraguy.com
- Sarah Wright
https://www.linkedin.com/in/satwright/
Producer:
- "King Bob" Robert Wen
https://www.linkedin.com/in/robert-wen-csm-spc6-a552051/
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
=====================================
Intro: Nitro Fun - Cheat Codes
https://www.youtube.com/c/monstercat
Outro: Fractal - Atrium
https://www.youtube.com/c/monstercatinstinct
If you're a leader in game dev who feels stuck, able to spot problems but struggling to make a real difference, there is a path forward that levels up your leadership and accelerates your team, game, and career. Sign up here to learn more: https://forms.gle/nqRTUvgFrtdYuCbr6

Stop adding meetings to fix your game. In this episode, Ben Carcich sits down with Glenn Paul Gray, Production Director at PeopleFun, to dismantle the "more syncs = more alignment" myth. They explore how piling on well-attended meetings often creates overhead rather than clarity and why your work system must adapt to the specific stage of development your team is actually in.

Glenn Paul brings a unique "hardware-to-software" perspective to game production. Starting his career in the Silicon Valley semiconductor industry, he transitioned into gaming in 2017, holding pivotal roles at Wargaming, Wooga, and AppLovin before joining PeopleFun. His background in complex systems engineering informs his pragmatic approach to "de-risking" games through aggressive prototyping and early-funnel testing.

What You'll Learn in this Episode:
- How to treat meetings as a "cost to align"
- What it means to shift from a discipline-centric matrix to a high-agency, general manager-led team structure
- Why testing for D1 retention can be inefficient for weeding out bad ideas
- How to build a "startup within a studio" environment
- Why your work system (from spreadsheets to Jira) must "mode shift" as your project moves from R&D to production

Learn more about Glenn & his company:
Jira Admins, are you ready for the new limits and changes coming to your Atlassian Cloud sites? Joining the TJL crew is Atlassian Admin extraordinaire Darryl Lee, who will discuss with us what's imminently coming, when, and how to prepare.

The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group!
https://ace.atlassian.com/the-jira-life/
Or Follow us on LinkedIn!
https://www.linkedin.com/company/the-jira-life/
Become a member on YouTube to get access to perks:
https://www.youtube.com/@thejiralife/join
Hosts:
- Alex "Dr. Jira" Ortiz
https://www.linkedin.com/in/alexortiz89/
https://www.youtube.com/@ApetechTechTutorials
- Rodney "The Jira Guy" Nissen
https://www.linkedin.com/in/rgnissen/
https://thejiraguy.com
- Sarah Wright
https://www.linkedin.com/in/satwright/
Producer:
- "King Bob" Robert Wen
https://www.linkedin.com/in/robert-wen-csm-spc6-a552051/
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
=====================================
Intro: Nitro Fun - Cheat Codes
https://www.youtube.com/c/monstercat
Outro: Fractal - Atrium
https://www.youtube.com/c/monstercatinstinct
Back to five for this episode, Les Cast Codeurs kick off the year with a big episode full of news and in-depth articles. AI of course and its impact on our practices, Mockito turning a page, some CSS (yes, really), the (non-)mapping of REST APIs to MCP, and a pile of tools for you.

Recorded January 9, 2026. Download the episode LesCastCodeurs-Episode-335.mp3 or watch it on YouTube.

News

Languages

Will 2026 be the year of Java in the terminal? (word is it just might be…) https://xam.dk/blog/lets-make-2026-the-year-of-java-in-the-terminal/
- 2026: the year of Java in the terminal, catching up with Python, Rust, Go, and Node.js.
- Java is underrated for CLI applications and TUIs (terminal user interfaces) despite what it is capable of.
- The old excuses (slow startup, heavy tooling, verbosity, complex distribution) are obsolete thanks to recent advances:
- GraalVM Native Image for millisecond startup.
- JBang for simplified execution of Java scripts (single files, dependencies) and JARs.
- JReleaser for automated multi-platform distribution (Homebrew, SDKMAN, Docker, native images).
- Project Loom for easy concurrency with virtual threads.
- PicoCLI for argument handling.
- The potential goes beyond scripts: building complete, good-looking TUIs (e.g. dashboards, file managers, AI assistants).

Ruby 4.0.0 released https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/
- Ruby Box (experimental): a new feature for isolating definitions (classes, modules, monkey patches) in separate boxes to avoid global conflicts.
- ZJIT: a new next-generation JIT compiler, written in Rust, that aims to eventually outperform YJIT (currently experimental).
- Ractor improvements: Ractor::Port introduced for better communication between Ractors, and internal structures optimized to reduce global lock contention.
- Syntax changes: logical operators (||, &&, and, or) at the start of a line can now continue the previous line, making a "fluent" style easier.
- Core classes: Set and Pathname become built-in (core) classes instead of living in the standard library.
- Better diagnostics: argument errors (ArgumentError) now show code snippets for both the caller AND the method definition.
- Performance: Class#new optimized, faster instance-variable access, and significant garbage collector (GC) improvements.
- Cleanup: removal of obsolete behaviors (such as spawning processes via IO.open with |) and an update to Unicode 17.0. 
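The Java-in-the-terminal item above names PicoCLI as the go-to for argument handling. As a rough, hedged illustration of the kind of single-file CLI being described, here is a stdlib-only sketch; the greeting flags and the hand-rolled parsing are invented for this example (a real project would use PicoCLI's annotations instead of parsing args by hand):

```java
// Minimal stdlib-only sketch of a single-file Java CLI, in the JBang spirit.
// The "--name"/"--shout" flags are illustrative; PicoCLI would normally
// declare these as annotated options rather than hand-parsing the array.
public class Greet {
    static String run(String[] args) {
        String name = "world";
        boolean shout = false;
        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "--name" -> name = args[++i]; // naive: assumes a value follows
                case "--shout" -> shout = true;
                default -> { /* ignore unknown flags in this sketch */ }
            }
        }
        String msg = "Hello, " + name + "!";
        return shout ? msg.toUpperCase() : msg;
    }

    public static void main(String[] args) {
        System.out.println(run(args));
    }
}
```

With JDK 11+ this can be launched directly as `java Greet.java --name Ada --shout`, the same single-file convenience that JBang builds on (JBang adds dependency handling on top).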
Libraries

An introduction to building a multi-tenant app with Quarkus and nip.io https://www.the-main-thread.com/p/quarkus-multi-tenant-api-nipio-tutorial
- Building a multi-tenant REST API in Quarkus with isolation by subdomain
- Using nip.io for automatic DNS resolution with no local configuration
- The tenant is extracted from the HTTP Host header via a JAX-RS filter
- The tenant context is managed with CDI in Request scope for data isolation
- An application service manages tenant-specific data with a concurrent Map
- An HTML/JS web interface to view and add data per tenant
- CORS configuration is needed for local development
- The pattern acme.127-0-0-1.nip.io resolves automatically to localhost
- Complete code available on GitHub, with curl examples and browser tests
- An ideal starting point for SaaS prototyping and multi-tenant testing

Hibernate 7.2, with a few interesting improvements https://docs.hibernate.org/orm/7.2/whats-new/%7Bhtml-meta-canonical-link%7D
- Read-only replica (experimental): creates two session factories and swaps at the JDBC level if the driver supports it, with a custom fallback otherwise. 
- A read-only session is opened as a child StatelessSession (sharing the transactional context)
- The Hibernate vector module adds binary, float16 and sparse vectors
- The SchemaManager can resynchronize sequences against table data
- Regexp in HQL with like

A new version of Hibernate with Panache for Quarkus
https://quarkus.io/blog/hibernate-panache-next/
- New experimental extension that unifies Hibernate ORM with Panache and Hibernate Reactive with Panache
- Entities can now work in blocking or reactive mode without changing their base type
- Support for stateless sessions (StatelessSession) in addition to traditional managed entities
- Jakarta Data integration for type-safe queries checked at compile time
- Operations are defined in nested repositories rather than static methods
- Several repositories can be defined for different operation modes on the same entity
- Access to the different modes (blocking/reactive, managed/stateless) via supertype methods
- Support for the @Find and @HQL annotations to generate type-safe queries
- Repository access via injection or via the generated metamodel
- The extension lives in the main branch; feedback is welcome on Zulip or GitHub

Spring Shell 4.0.0 GA released - https://spring.io/blog/2025/12/30/spring-shell-4-0-0-ga-released
- Final release of Spring Shell 4.0.0, available on Maven Central
- Compatible with the latest versions of Spring Framework and Spring Boot
- Reworked command model to simplify building interactive CLI applications
- jSpecify integration to improve protection against NullPointerExceptions
- More modular architecture allowing better customization and extension
- Documentation and examples fully updated to ease onboarding
- Migration guide to v4 available on the project wiki
- Bug fixes improving stability and reliability
- Lets you build standalone Java applications runnable with java -jar or as GraalVM native binaries
- An opinionated approach to CLI development that stays flexible for specific needs

A new release of the library that implements gatherers beyond those shipped in the JDK
https://github.com/tginsberg/gatherers4j/releases/tag/v0.13.0
- gatherers4j v0.13.0
- New gatherers: uniquelyOccurringBy(), moving/runningMedian(), moving/runningMax/Min()
- Change: "moving" gatherers now include partial values by default (use excludePartialValues() to turn this off)

LangChain4j 1.10.0
https://github.com/langchain4j/langchain4j/releases/tag/1.10.0
- Introduces a model catalog for Anthropic, Gemini, OpenAI and Mistral
- Adds observability and monitoring capabilities for agents
- Structured outputs, advanced tools, and PDF analysis via URL for Anthropic
- Transcription services support for OpenAI
- Chat configuration parameters can be passed as method arguments
- New moderation guardrail for incoming messages
- Reasoning content support for models
- Introduces hybrid search
- MCP client improvements

The Mockito lead steps down after 10 years
https://github.com/mockito/mockito/issues/3777
- Tim van der Lippe, a major Mockito maintainer, announces his departure for March 2026, closing out a decade of contribution to the project.
- One of the main reasons is burnout tied to recent JVM changes (JVM 22+) around agents, which impose heavy technical constraints with no simple alternative offered by the JDK maintainers.
- He points to the lack of support, and the pressure placed on open-source volunteers, during these major technology transitions.
- The growing complexity of supporting Kotlin, which uses the JVM in its own specific way, makes the Mockito codebase harder to maintain and, in his view, less pleasant to evolve.
- He describes a loss of enjoyment and now prefers to spend his free time on other projects such as Servo, a web engine written in Rust.
- A transition period is planned until March to hand maintenance over to new contributors.

Infrastructure

Kubernetes' first benefit is not scaling - https://mcorbin.fr/posts/2025-12-29-kubernetes-scale/
- Before Kubernetes, running applications in production required multiple complex tools (Ansible, Puppet, Chef) and a lot of manual configuration
- Load balancing was done with HAProxy and Keepalived in active/passive mode, requiring manual configuration updates on every instance change
- Service discovery and rollouts were orchestrated by hand, instance by instance, with no automated reconciliation
- Each stack (Java, Python, Ruby) had its own deployment method, with no standardization (rpm, deb, tar.gz, jar)
- Resource management was manual, often with one application per machine, wasting capacity and complicating maintenance
- Kubernetes standardizes everything into a few YAML resources (Deployment, Service, Ingress, ConfigMap, Secret) with a simple declarative format
- All the critical features are built in: service discovery, load balancing, scaling, storage, firewalling, logging, fault tolerance
- The hundreds of shell scripts and Ansible playbooks maintained before were more complex than Kubernetes itself
- Kubernetes becomes relevant as soon as you start rebuilding these features by hand, which happens very quickly
- The technology is flexible and can handle modern applications as well as legacy monoliths with specific constraints

Mole
https://github.com/tw93/Mole
- An all-in-one command-line (CLI) tool to clean up and optimize macOS.
- Combines features of popular apps like CleanMyMac, AppCleaner, DaisyDisk and iStat Menus.
- Deep-scans and removes caches, log files and browser leftovers.
- Smart uninstaller that cleanly removes applications and their hidden files (Launch Agents, preferences).
- Interactive disk-space analyzer to visualize file usage and manage large documents.
- Real-time dashboard (mo status) to monitor CPU, GPU, memory and network.
- Developer-specific purge to delete build artifacts (node_modules, target, etc.).
- Optional Raycast or Alfred integration for quickly launching commands.
- Simple install via Homebrew or a curl script.

Secure Docker images for every developer
https://www.docker.com/blog/docker-hardened-images-for-every-developer/
- Docker makes its Hardened Images (DHI) free and open source (Apache 2.0 license) for all developers.
- These images are designed to be minimal, production-ready and secure from the start, to counter the explosion of software supply-chain attacks.
- They build on familiar bases like Alpine and Debian, ensuring high compatibility and easy migration.
- Each image includes a complete, verifiable SBOM (Software Bill of Materials) and SLSA level 3 provenance for full transparency.
- Using these images drastically reduces the number of vulnerabilities (CVEs) and image size (up to 95% smaller).
- Docker extends this hardened approach to Helm charts and MCP servers (Mongo, Grafana, GitHub, etc.).
- Commercial offerings (DHI Enterprise) remain available for specific needs: critical fixes within 7 days, FIPS/FedRAMP support, or extended lifecycle support (ELS).
- An experimental Docker AI assistant can analyze existing containers and recommend the matching hardened versions.
- The initiative is backed by major partners such as Google, MongoDB, Snyk and the CNCF.

Web

Masonry is landing in the CSS specification and browsers are starting to implement it
https://webkit.org/blog/17660/introducing-css-grid-lanes/
- Lays out HTML elements in columns one after the other: first across the first row, and once that row is full, each next element goes into the column where it can sit the highest, and so on.
- After the middleware plumbing, front-end masonry :laughing:

Data and Artificial Intelligence

We shouldn't map REST APIs 1:1 to MCP
https://nordicapis.com/why-mcp-shouldnt-wrap-an-api-one-to-one/
- The problem: wrapping an API as-is in the MCP (Model Context Protocol) is an anti-pattern.
- MCP's purpose: designed for AI agents, it should act as an intent interface, not an API mirror. Agents understand tasks, not complex API logic (authentication, pagination, orchestration).
- Consequences of one-to-one mapping: confused agents, errors, hallucinations; complex orchestrations become hard to handle (several calls for a single action); the API's weaknesses get exposed (heavy schemas, obsolete endpoints); maintenance grows with every API change.
- Better approach: build MCP tools like SDKs for agents, encapsulating the logic needed to accomplish a specific task.
- Recommended practices: design around user intents/actions (e.g. "create a project", "summarize a document").
- Group calls into single workflows or actions.
- Use natural language for definitions and names.
- Limit the exposed API surface for security and clarity.
- Apply strict input/output schemas to guide the agent and reduce ambiguity.

Agents in production with AWS - https://blog.ippon.fr/2025/12/22/des-agents-en-production-avec-aws/
- AWS re:Invent 2025 put a massive spotlight on generative AI and AI agents
- An AI agent combines an LLM, a calling loop and invocable tools
- The Strands Agents SDK eases prototyping with built-in ReAct loops and memory management
- Managed MLflow tracks experiments and lets you define performance metrics
- Nova Forge optimizes models by retraining them on specific data to cut costs and latency
- Bedrock Agent Core industrializes deployment with a serverless, auto-scaling runtime
- Agent Core offers nine pillars, including observability, authentication, a code interpreter and a managed browser
- Anthropic's MCP protocol standardizes how tools are provided to agents
- SageMaker AI and Bedrock centralize access to closed-source and open-source models behind a single API
- AWS is betting on chatbots evolving into agentic systems optimized with more frugal models

Debezium 3.4 brings several interesting improvements
https://debezium.io/blog/2025/12/16/debezium-3-4-final-released/
- Fixed the Oracle low-watermark computation issue that caused performance losses
- Fixed heartbeat event emission in the Oracle connector with CTE queries
- Better logs for understanding active transactions in the Oracle connector
- Memory guards to protect against very large database schemas
- Support for transforming geometry coordinates, for better spatial data handling
- A Quarkus Dev Services extension that automatically starts a database and Debezium in dev mode
- OpenLineage integration to trace data lineage and follow data flows across pipelines
- Compatibility tested with Kafka Connect 4.1 and Kafka brokers 4.1

Infinispan 16.0.4 and .5
https://infinispan.org/blog/2025/12/17/infinispan-16-0-4
- Spring Boot 4 and Spring 7 supported
- Metrics evolutions
- Two serialization bug fixes

Building a research agent in Java with the Interactions API
https://glaforge.dev/posts/2026/01/03/building-a-research-assistant-with-the-interactions-api-in-java/
- A Java AI research assistant (Gemini Interactions API), testing the SDK implemented by Guillaume.
- Four-phase workflow: Planning: Gemini Flash + Google Search. Research: "Deep Research" model (background task). Synthesis: Gemini Pro (executive report). Infographic: Nano Banana Pro (from the synthesis).
- Interactions API: server-side state management, background tasks, multimodal responses (images).
- Appreciated: the API's state management (versus stateless LLMs).
- Takeaway: the Java SDK proves effective for complex use cases.

Stephan Janssen (the father of Devoxx) built an MCP (Model Context Protocol) server based on LSP (Language Server Protocol) so coding assistants can analyze code by actually understanding it rather than grepping
https://github.com/stephanj/LSP4J-MCP
- The problem: AI assistants often navigate code with grep-style text search, which lacks semantic context, generates noise (false positives) and burns tokens needlessly.
- The LSP4J-MCP solution: a standalone approach that wraps the Eclipse language server (JDTLS) behind the MCP protocol.
- Main advantage: deep semantic understanding of Java code (types, hierarchies, references) without needing a heavyweight IDE like IntelliJ to be open.
- Comparing approaches: AST: too shallow (no cross-file understanding).
- IntelliJ MCP: powerful, but requires the IDE to be open (resource-hungry). LSP4J-MCP: the best of both worlds for terminal, remote (SSH) or CI/CD workflows.
- Key features: exposes 5 tools to the AI (find_symbols, find_references, find_definition, document_symbols, find_interfaces_with_method).
- Results: a 100x reduction in tokens used for navigation, and better precision (distinguishing overloads, scopes, etc.).
- Availability: the project is open source and on GitHub, ready to integrate (e.g. with Claude Code, Gemini CLI, etc.). Note that Claude Code 2.0.74 added a tool to support LSP (https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#2074).

Awesome (GitHub) Copilot
https://github.com/github/awesome-copilot
- A community collection of instructions, prompts and configurations to get the most out of GitHub Copilot.
- Offers specialized "Agents" that integrate with MCP servers to improve specific workflows.
- Includes targeted prompts for code generation, documentation and solving complex problems.
- Provides detailed instructions on coding standards and best practices across various frameworks.
- Offers "Skills" as folders of resources for specialized technical tasks. (Skills have been available in Copilot for a month: https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/)
- Easy installation via a dedicated MCP server, compatible with VS Code and Visual Studio.
- Encourages community contributions to grow the prompt and agent libraries.
- Boosts productivity with pre-configured solutions for many languages and domains.
- MIT-licensed and actively maintained by contributors worldwide.
AI and productivity: a 2025 year in review (Laura Tacho - DX)
https://newsletter.getdx.com/p/ai-and-productivity-year-in-review?aid=recNfypKAanQrKszT
- In 2025, AI-assisted engineering became the norm: about 90% of developers use AI tools monthly, and more than 40% daily.
- Researchers (Microsoft, Google, GitHub) stress that lines of code (LOC) remain a poor impact metric: AI generates lots of code without necessarily delivering more business value.
- While AI improves individual efficiency, it may hurt collaboration in the long run, as developers spend more time "talking" to the AI than to their colleagues.
- The developer's identity is shifting from "code producer" to a "director" role: delegating, validating, and exercising strategic judgment.
- AI could accelerate junior developers' growth by pushing them to manage projects and delegate earlier, acting as an "accelerator" rather than making them obsolete.
- The emphasis is on creativity rather than plain automation, to reimagine ways of working and reach more impactful results.
- Success in 2026 will depend on companies targeting real bottlenecks (technical debt, documentation, compliance) rather than simply trying every new AI model.
- The newsletter warns that press headlines often oversimplify AI research, sometimes hiding the crucial nuances of the actual studies.

In a post on Twitter, a developer describes his advanced use of Claude Code for development: subagents, slash commands, how to optimize context, etc.
https://x.com/AureaLibe/status/2008958120878330329?s=20

Tooling

IntelliJ IDEA, thread dumps and Project Loom (virtual threads) - https://blog.jetbrains.com/idea/2025/12/thread-dumps-and-project-loom-virtual-threads/
- Java virtual threads improve hardware utilization for parallel I/O operations with few code changes
- A server can now handle millions of threads instead of a few hundred
- Existing tools struggle to display and analyze millions of simultaneous threads
- Asynchronous debugging is tricky because the scheduler and the worker run in different threads
- Thread dumps remain essential for diagnosing deadlocks, frozen UIs and thread leaks
- Netflix found a virtual-thread-related deadlock by analyzing a heap dump; the bug was fixed in Java 25. But that was quite a feat.
- IntelliJ IDEA supports virtual threads natively from day one, showing acquired locks
- IntelliJ IDEA can open thread dumps produced by other tools such as jcmd
- Support also extends to Kotlin coroutines alongside virtual threads

Some news on IntelliJ IDEA 2025.3
https://blog.jetbrains.com/idea/2025/12/intellij-idea-2025-3/
- Unified distribution bundling more free features
- Improved command completion in the IDE
- New features for the Spring debugger
- The Islands theme becomes the default
- Full support for Spring Boot 4 and Spring Framework 7
- Java 25 compatibility
- Support for Spring Data JDBC and Vitest 4
- Native support for Junie and Claude Agent for AI
- Transparent AI quota and an upcoming Bring Your Own Key option
- Stability, performance and user-experience fixes

Lots of small online tools for developers
https://blgardner.github.io/prism.tools/
- Generators for passwords, CSS gradients, QR codes
- Base64 and JWT encoding/decoding
- JSON formatting, etc.
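The scale the JetBrains thread-dump post above talks about is easy to reproduce: spawning thousands of virtual threads is cheap on JDK 21+. A minimal sketch (plain JDK APIs; the thread count is arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Why tooling now faces huge thread counts: starting 10,000 virtual
// threads takes milliseconds, where 10,000 platform threads would not.
public class VirtualThreadsDemo {

    // Starts n virtual threads, waits for all of them, returns how many ran.
    static int spawn(int n) {
        AtomicInteger done = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            threads.add(Thread.ofVirtual().start(done::incrementAndGet));
        }
        for (Thread t : threads) {
            try {
                t.join(); // a thread dump taken before this point would list them all
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(spawn(10_000)); // prints 10000
    }
}
```

A jcmd thread dump (jcmd <pid> Thread.dump_to_file) taken while such a program runs is exactly the kind of file the article says IntelliJ IDEA can now open and analyze.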
resumectl - your resume as code
https://juhnny5.github.io/resumectl/
- A command-line (CLI) tool written in Go that generates a resume from a YAML file.
- Exports to several formats: PDF, HTML, or straight to the terminal.
- Ships 5 built-in themes (Modern, Classic, Minimal, Elegant, Tech), customizable with specific colors.
- An init feature (resumectl init) can automatically import data from LinkedIn and GitHub (most-starred projects).
- Supports photos, with black-and-white filter and shape options (round/square).
- Includes a server mode (resumectl serve) to preview changes live in a local browser.
- Runs as a single binary with no complex external dependencies for the templates.

mactop - a "top"-style monitor for Apple Silicon
https://github.com/metaspartan/mactop
- A command-line monitoring tool (TUI) built specifically for Apple Silicon chips (M1, M2, M3, M4, M5).
- Tracks CPU (E-cores and P-cores), GPU and ANE (Neural Engine) usage in real time.
- Shows power draw (wattage) for the system, CPU, GPU and DRAM.
- Reports SoC temperatures, GPU frequencies and overall thermal state.
- Monitors RAM and swap usage, plus network and disk (I/O) activity.
- Offers 10 different layouts and several customizable color themes.
- Needs no sudo, since it relies on Apple's native APIs (SMC, IOReport, IOKit).
- Includes a detailed process list (htop-like) with the ability to kill processes from the interface.
- Offers a headless mode exporting metrics as JSON, plus an optional Prometheus server.
- Written in Go with CGO and Objective-C components.
Goodbye direnv, hello mise
https://codeka.io/2025/12/19/adieu-direnv-bonjour-mise/
- The author replaces his usual tools (direnv, asdf, task, just) with a single versatile tool written in Rust: mise.
- mise has three main roles: package manager (languages and tools), environment-variable manager, and task runner.
- Unlike direnv, it supports aliases and uses a structured configuration file (mise.toml) instead of shell scripting.
- Configuration is hierarchical, with per-directory overrides and a "trust" system for security.
- A highlighted killer feature is secret management: mise integrates with age to encrypt secrets (via SSH keys) right in the configuration file.
- It supports a huge list of languages and tools through an internal registry and plugins (compatible with the asdf ecosystem).
- It simplifies the development workflow by combining tool installation and task automation in a single file.
- The author concludes on the tool's power, flexibility, and excellent performance after a few hours of testing.

Claude Code v2.1.0
https://github.com/anthropics/claude-code/blob/main/CHANGELOG.md#210
- Hot-reloading of skills: changes to skills in ~/.claude/skills now apply instantly without restarting the session.
- Subagents and forks: skills and slash commands can run in a forked subagent context via context: fork.
- Language settings: a new language parameter sets the default response language (e.g. language: "french").
- Terminal improvements: Shift+Enter now works natively in several terminals (iTerm2, WezTerm, Ghostty, Kitty) without manual configuration.
- Security and bug fixes: fixed a flaw where sensitive data (API keys, OAuth tokens) could show up in debug logs.
- New slash commands: /teleport and /remote-env for claude.ai subscribers, to manage remote sessions.
- Plan mode: the /plan shortcut enables plan mode straight from the prompt, and the permission prompt when entering this mode has been removed.
- Vim and navigation: many Vim motions added (text objects, f/F/t/T motion repeats, indentation, etc.).
- Performance: faster startup and terminal rendering for Unicode/emoji characters.
- gitignore handling: a respectGitignore setting in settings.json controls the behavior of the @-mention file picker.

Methodologies

200 production deployments a day, even on Fridays: lessons learned
https://mcorbin.fr/posts/2025-03-21-deploy-200/
- Frequent deployment, including on Fridays, is a sign of technical maturity and raises overall productivity.
- Technical excellence is an indispensable strategic asset for shipping quality products fast.
- A pragmatic service-oriented architecture (SOA) enables independent deployments and reduces cognitive load.
- Service isolation is crucial: a developer must be able to test a service locally without depending on the whole infrastructure.
- Automation via Kubernetes and a GitOps approach with ArgoCD enables continuous, safe deployments.
- Feature flags and a solid permission system decouple technical deployment from feature activation for users.
- Developer autonomy is reinforced by self-service tools (a homegrown CLI) to manage infrastructure and diagnose incidents without bottlenecks.
- A culture of observability baked in from design time enables quick detection of, and reaction to, production anomalies.
- Accepting failure as inevitable leads to more resilient systems that can recover automatically.

"Vibe Coding" vs "Prompt Engineering": AI and the future of software development
https://www.romenrg.com/blog/2025/12/25/vibe-coding-vs-prompt-engineering-ai-and-the-future-of-software-development/
- In 2025, AI went from experiment to essential infrastructure for software development.
- AI doesn't replace engineers; it amplifies their skills, their judgment, and the quality of their thinking.
- Distinction between "vibe coding" (fast, intuitive, ideal for prototypes) and "prompt engineering" (deliberate, constrained, necessary for maintainable systems).
- Context is crucial ("context engineering"): AI becomes truly powerful when connected to real systems (GitHub, Jira, etc.) through protocols like MCP.
- Using specialized agents (RFC writing, code review, architecture) rather than generic models yields better results.
- Emergence of the "Technical Product Manager" engineer who can single-handedly do the work of a small team thanks to AI, provided they master the technical fundamentals.
- The major risk: AI lets you go very fast in the wrong direction when human judgment and experience are lacking.
- The overall bar is rising: solid technical foundations matter more than ever to avoid rapidly piling up technical debt.

A solo code review (Kent Beck)!
https://tidyfirst.substack.com/p/party-of-one-for-code-review?r=64ov3&utm_campaign=post&utm_medium=web&triedRedirect=true
- Traditional code review, inherited from IBM's formal inspections, is running out of steam: it has become too slow and asynchronous for the pace of modern development.
- With the arrival of AI ("the genie"), code production now outpaces human review capacity, creating a major bottleneck.
- Code review must evolve toward two new priorities: a "sanity check" that the AI actually did what was asked, and control of structural drift in the codebase.
- Keeping the structure healthy matters not only for future human developers, but also so the AI can keep understanding and modifying the code effectively without losing context.
- Kent Beck is experimenting with automated tools (like CodeRabbit) to get summaries and architecture diagrams and keep a global awareness of fast-moving changes.
- Even with useful automated tools, pair programming remains irreplaceable for the richness of its exchanges and the healthy social pressure it puts on thinking.
- Solo code review is not an end in itself, but a necessary adaptation when working alone with augmented code-generation tools.

Law, society and organization

Lego launches LEGO SMART Play, with Bricks, Smart Tags and Smart Minifigures for new interactive LEGO builds
https://www.lego.com/fr-fr/smart-play
- LEGO SMART Play: technology that reacts to children's play. Three key elements:
- SMART Brick: a 2x4 LEGO "brain" brick. Accelerometer, reactive lights, color detector, sound synthesizer. Reacts to movement (holding, turning, tapping).
- SMART Tags: small smart pieces. They tell the SMART Brick its role (e.g. helicopter, car) and which sounds to produce.
- They trigger sounds, mini-games and secret missions.
- SMART Minifigures: activated near a SMART Brick. They reveal unique personalities (sounds, moods, reactions) through the SMART Brick. They encourage imagination.
- How it works: the SMART Brick detects SMART Tags and SMART Minifigures, and reacts to movement with dynamic lights and sounds.
- Compatibility: assembles with classic LEGO bricks.
- Goal: create interactive, unique and unlimited play experiences.

Conferences

The conference list comes from Developers Conferences Agenda/List by Aurélie Vache and contributors:
- January 14-17, 2026: SnowCamp 2026 - Grenoble (France)
- January 22, 2026: DevCon #26: security / post-quantum / hacking - Paris (France)
- January 28, 2026: Software Heritage Symposium - Paris (France)
- January 29-31, 2026: Epitech Summit 2026 - Paris (France)
- February 2-5, 2026: Epitech Summit 2026 - Moulins (France)
- February 3, 2026: Cloud Native Days France 2026 - Paris (France)
- February 3-4, 2026: Epitech Summit 2026 - Lille (France)
- February 3-4, 2026: Epitech Summit 2026 - Mulhouse (France)
- February 3-4, 2026: Epitech Summit 2026 - Nancy (France)
- February 3-4, 2026: Epitech Summit 2026 - Nantes (France)
- February 3-4, 2026: Epitech Summit 2026 - Marseille (France)
- February 3-4, 2026: Epitech Summit 2026 - Rennes (France)
- February 3-4, 2026: Epitech Summit 2026 - Montpellier (France)
- February 3-4, 2026: Epitech Summit 2026 - Strasbourg (France)
- February 3-4, 2026: Epitech Summit 2026 - Toulouse (France)
- February 4-5, 2026: Epitech Summit 2026 - Bordeaux (France)
- February 4-5, 2026: Epitech Summit 2026 - Lyon (France)
- February 4-6, 2026: Epitech Summit 2026 - Nice (France)
- February 5, 2026: Web Days Convention - Aix-en-Provence (France)
- February 12, 2026: Strasbourg Craft #1 - Strasbourg (France)
- February 12-13, 2026: Touraine Tech #26 - Tours (France)
- February 19, 2026: ObservabilityCON on the Road - Paris (France)
- March 6, 2026: WordCamp Nice 2026 - Nice (France)
- March 18-19, 2026: Agile Niort 2026 - Niort (France)
- March 20, 2026: Atlantique Day 2026 - Nantes (France)
- March 26, 2026: Data Days Lille - Lille (France)
- March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
- March 26-27, 2026: REACT PARIS - Paris (France)
- March 27-29, 2026: Shift - Nantes (France)
- March 31, 2026: ParisTestConf - Paris (France)
- April 1, 2026: AWS Summit Paris - Paris (France)
- April 2, 2026: Pragma Cannes 2026 - Cannes (France)
- April 9-10, 2026: AndroidMakers by droidcon - Paris (France)
- April 16-17, 2026: MiXiT 2026 - Lyon (France)
- April 22-24, 2026: Devoxx France 2026 - Paris (France)
- April 23-25, 2026: Devoxx Greece - Athens (Greece)
- April 24-25, 2026: Faiseuses du Web 5 - Dinan (France)
- May 6-7, 2026: Devoxx UK 2026 - London (UK)
- May 22, 2026: AFUP Day 2026 Lille - Lille (France)
- May 22, 2026: AFUP Day 2026 Paris - Paris (France)
- May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
- May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
- May 29, 2026: NG Baguette Conf 2026 - Paris (France)
- June 5, 2026: TechReady - Nantes (France)
- June 5, 2026: Fork it! - Rouen (France)
- June 6, 2026: Polycloud - Montpellier (France)
- June 11-12, 2026: DevQuest Niort - Niort (France)
- June 11-12, 2026: DevLille 2026 - Lille (France)
- June 12, 2026: Tech F'Est 2026 - Nancy (France)
- June 17-19, 2026: Devoxx Poland - Krakow (Poland)
- June 17-20, 2026: VivaTech - Paris (France)
- July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
- July 2-3, 2026: Sunny Tech - Montpellier (France)
- July 3, 2026: Agile Lyon 2026 - Lyon (France)
- August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
- September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
- September 17-18, 2026: API Platform Conference 2026 - Lille (France)
- September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
- October 1, 2026: WAX 2026 - Marseille (France)
- October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
- October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)

Contact us

To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Reach us on X/Twitter https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/
If you're a leader in game dev who feels stuck, able to spot problems but struggling to make a real difference, there is a path forward that levels up your leadership and accelerates your team, game, and career. Sign up here to learn more: https://forms.gle/nqRTUvgFrtdYuCbr6 If you disappeared for a week, would your team lose their momentum—or just their note-taker? Most game producers are stuck in a "checklist trap"—spending their days managing tickets, booking meetings, and taking notes without ever understanding the actual goal. In this episode, Ben breaks down why production is fundamentally a leadership role based on influence, not just project management software. If you feel like your team is "drifting" despite hitting every milestone, you might be failing at the one thing that actually matters: moving the organization towards a valuable goal. What You'll Learn: The importance of shifting from a "task-doer" to a big-picture leader who influences the entire studio Why specific tools like Jira and sticky notes aren't the "point" of your job Why shipping a "good enough" game to pass a milestone is a dangerous trap that can break your team How prioritizing towards the goal outperforms doing what worked last time Connect with us:
Scrum Is NOT Dead... It's Obsolete? (Did someone actually go here?) AAAAAAAhhhhhhhh! Stand-ups are still happening. Sprint planning still blocks calendars every few weeks. Retrospectives still end with "we should communicate better." Jira boards are still very busy.
How to connect with AgileDad:
- [website] https://www.agiledad.com/
- [instagram] https://www.instagram.com/agile_coach/
- [facebook] https://www.facebook.com/RealAgileDad/
- [LinkedIn] https://www.linkedin.com/in/leehenson/
AI agents are exploding across the enterprise—but security hasn't caught up. In this episode of Today in Tech, host Keith Shaw talks with Michael Bargury, co-founder and CTO of Zenity, about why every AI agent is inherently vulnerable, how zero-click attacks work, and what companies must do now to reduce their risk. Bargury explains how attackers can hijack AI agents with simple persuasion, plant malicious “memories,” and silently exfiltrate sensitive data from tools like Microsoft Copilot, ChatGPT, Salesforce, and Cursor, often without users ever clicking on anything. You'll learn: * Why AI agents are always vulnerable by design * How prompt injection = persuasion, not just a technical bug * What zero-click agent attacks look like in the real world * How attackers can weaponize shared docs, Jira tickets, and email automations * Why there is no such thing as a “fully secure” agent platform * Practical steps to monitor, contain, and manage AI agent risk Chapters 0:00 – Introduction, overview: Why every AI agent can be hacked 1:00 – First enterprise AI attack on Microsoft Copilot 3:15 – Systemic vulnerabilities and why things got worse 4:35 – Why agents are always gullible by design 6:10 – Prompt injection vs simple persuasion 8:00 – Zero-click attacks explained 10:30 – Hacking ChatGPT via Google Drive & shared docs 13:40 – Planting malicious “memories” in your AI 15:30 – The Cursor + Jira “apples” exploit for stealing secrets 20:10 – Thousands of exposed Copilot Studio agents on the internet 23:30 – Goal hijacking: convincing agents to change their mission 24:50 – Dumping Salesforce data via a customer-success agent 26:50 – Soft vs hard security boundaries for AI 28:15 – What vendors fixed—and what they can't fix 31:10 – Why “secure AI platform” is a myth 33:30 – What enterprises must own in the shared responsibility model 36:20 – Treating agents like risky insiders to monitor 39:00 – How AI security needs to evolve next 40:57 – Closing thoughts
Greet the New Year with the TJL crew! On our first episode of 2026, we aim to look forward and see what the new year will bring to our favorite Atlassian products and the Atlassian community!
The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group! https://ace.atlassian.com/the-jira-life/
Or follow us on LinkedIn! https://www.linkedin.com/company/the-jira-life/
Become a member on YouTube to get access to perks: https://www.youtube.com/@thejiralife/join
Hosts:
- Alex "Dr. Jira" Ortiz: https://www.linkedin.com/in/alexortiz89/ | https://www.youtube.com/@ApetechTechTutorials
- Rodney "The Jira Guy" Nissen: https://www.linkedin.com/in/rgnissen/ | https://thejiraguy.com
- Sarah Wright: https://www.linkedin.com/in/satwright/
Producer:
- "King Bob" Robert Wen: https://www.linkedin.com/in/robert-wen-csm-spc6-a552051/
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
Intro: Nitro Fun - Cheat Codes https://www.youtube.com/c/monstercat
Outro: Fractal - Atrium https://www.youtube.com/c/monstercatinstinct
Joy Is a KPI This episode is for anyone who knows something is off when the work gets done but the joy is gone. In Episode 6, Verta and Naa make the case for joy as a real metric, not a soft add-on or a vibes-only bonus. They unpack why joy has been squeezed out of so many workplaces and lives, and what it costs us when productivity is the only acceptable emotion. From the difference between happiness and joy, to who actually gets permission to be joyful, they explore how creativity, laughter, and delight are not distractions but essential indicators of wellness. At work and beyond. They also talk honestly about burnout, overfunctioning, and what it takes to build lives where joy is part of the infrastructure, not the reward at the end. Because you can't track joy in Jira. But you can always feel when it's gone. Learn more about Naa & Verta here: Email: thatpart@45Lemons.com Website: www.45lemons.com/thatpart Instagram: @fortyfivelemons
Trust doesn't show up on an org chart. It isn't tracked in Jira. And yet it determines how fast work moves, how decisions get made, and whether teams feel safe enough to do their best thinking. When trust is present, teams move with confidence, leaders step back, and progress accelerates. When trust is missing, control creeps in, fear takes over, and even "good" processes start to collapse. This podcast explores why trust is the foundation of effective teamwork, how it quietly erodes in modern organizations, and what leaders and teams can do to intentionally rebuild it, one small, visible action at a time.
We kick off with a playful threat to delete the archive and end up reaffirming why consistency, community, and humane work habits matter. Between holiday calendar chaos and culture clashes over PTO, we find wins in empowered teams, better tools, and a thriving Discord.
• joking about nuking the archive to highlight creative burnout
• whiplash from deletion talk to preserving episodes forever
• holiday downtime realities and quiet office tactics
• European PTO envy contrasted with U.S. grind culture
• frustration with last-minute meeting cancellations
• calling out performative Slack activity around holidays
• team empowerment as the manager's real job
• replacing Excel with Jira to create clarity and intake systems
• celebrating consistency, guest episodes, and community growth
• being easy to work with as a core career advantage
• historical perspective as a lens on modern labor norms
Join our Discord. It's in your show notes. You can also buy some swag from our shop. More importantly, share the pod with your friends, family, and coworkers. We would love you forever if you did so. Click/Tap HERE for everything Corporate Strategy. Elevator Music by Julian Avila. Promoted by MrSnooze. Don't forget ⭐⭐⭐⭐⭐ it helps!
Episode web page: https://bit.ly/3MWTjzQ Episode description: In this forward-looking episode of Insights Unlocked, Mike McDowell returns to the mic to share what's ahead for UserTesting in 2026—and it's all about speed, scale, and smarter insights. Mike and host Nathan Isaacs dive into the latest developments in AI-powered research, from automated test creation and participant feedback to enhanced report generation and seamless integrations with tools like Figma. As always, Mike brings a ton of energy and clarity to what these innovations mean not just for researchers, but for anyone trying to get closer to their customers. Whether you're a product manager, designer, or marketer, this episode will leave you inspired by what's possible when AI meets human insight. Key takeaways AI-enhanced test creation: Just type what you want to learn, and AI builds the test plan for you—making customer feedback more accessible to non-researchers than ever. New Figma plug-in: Beta users can now launch usability tests directly from Figma, without leaving the design environment. Automated insight generation: From smart analysis to video summaries and report creation, AI is speeding up the time from question to answer. Smarter screener tools: AI-powered fraud detection and screener guidance ensure better participant quality and more reliable feedback. Customer empathy at scale: Mike emphasizes the power of embedding customer videos in tools like Jira, Confluence, and Figma to build buy-in and challenge internal assumptions. Resources & links Mike on LinkedIn (https://www.linkedin.com/in/mmcdowell1/) Nathan Isaacs on LinkedIn (https://www.linkedin.com/in/nathanisaacs/) Learn more about Insights Unlocked: https://www.usertesting.com/podcast
As we wrap up 2025, join the TJL crew as we recall the best, worst, and overall amazing adventures we went through this past year! Recount with us the highs and lows that happened in the Atlassian community this year!
Thank you to Revyz for backing us up and making The Jira Life possible. https://www.revyz.io/
The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group! https://ace.atlassian.com/the-jira-life/
Or follow us on LinkedIn! https://www.linkedin.com/company/the-jira-life/
Become a member on YouTube to get access to perks: https://www.youtube.com/@thejiralife/join
Hosts:
- Alex "Dr. Jira" Ortiz: https://www.linkedin.com/in/alexortiz89/ | https://www.youtube.com/@ApetechTechTutorials
- Rodney "The Jira Guy" Nissen: https://www.linkedin.com/in/rgnissen/ | https://thejiraguy.com
- Sarah Wright: https://www.linkedin.com/in/satwright/
Producer:
- "King Bob" Robert Wen: https://www.linkedin.com/in/robert-wen-csm-spc6-a552051/
Executive Producer:
- Lina Ortiz
Music provided by Monstercat:
Intro: Nitro Fun - Cheat Codes https://www.youtube.com/c/monstercat
Outro: Fractal - Atrium https://www.youtube.com/c/monstercatinstinct
In this episode, we're joined by Youssef Hounat, product leader, ex-auditor, and (unexpectedly) freestyle-rap-ready builder of tools for accountants. He went from training at Ernst & Young to helping scale DataSnipper into one of the Netherlands' unicorns, and now he's building again as Head of Product at ComplianceWise. We unpack what's actually changing inside product teams: AI stops being a rewrite/search tool and becomes a teammate that takes real work off your plate. Youssef shares how the best teams reduce context switching, turn customer research into a habit, and use agentic workflows + MCPs to connect tools like email, Jira, Figma, and docs without becoming a “fleshy meat puppet” copy-pasting between 10 tabs. Here are some of the key questions we address: Why do 99% of teams still use AI wrong, and what mindset shift fixes it? How do you turn customer research into a continuous habit using transcripts + automated pipelines? What's a real example of AI helping product push back on “build this to close the deal” and finding the true request underneath? How do top teams use MCP + coding agents to move from idea → PRD → Jira tickets without leaving the terminal? What's the difference between a prototype you build to learn vs a product you build to earn — and why vibe-coded output can't go straight to production? How do you avoid reinventing the wheel and start with small weekly automations that compound? What's the real risk behind shadow AI usage and how do you get IT onside instead of blocked?
Does your team actually deliver what it plans? Or have you grown used to completing only a fraction of each iteration's plan? We take team predictability apart piece by piece. The measure can be powerful support for a team, but also a source of frustration and bad decisions. We'll show you how to measure it in Jira, Excel, or other tools, and offer advice on interpreting the results so you can genuinely stabilize the team's delivery of its planned scope of work. The whole conversation refers to a case study from our work with one of our teams. If you've had enough of unfinished plans and constant explanations, this episode is for you. Porządny Agile · Przewidywalność zespołu. We invite you to watch the recording of the podcast. Transcript of the episode "Team Predictability". Below you'll find the full transcript of this episode of the Porządny Agile podcast. Jacek: A new case study recently appeared on our website. It describes how we improved predictability in one of our teams. Kuba and I decided this was a good occasion to say a bit more about predictability in this episode. Kuba: The case study's address is impossible to dictate in an audio recording, so I simply encourage you to find it in the episode notes and read what Jacek wrote there. Jacek: So what's on today's agenda? First of all, we'll define what predictability means to us. We'll explain how to measure it, share tips on applying it, and at the end give a few pointers on which practices actually improve a team's predictability. Kuba: Getting down to business, the first part is the definition of predictability. By predictability we mean how well a team delivers what it planned. To what degree does it carry out the plan it committed to? When they say something will be done, will it actually be done? With what probability does the team realize its intentions?
Jacek: So on one hand it's a measure, which we'll say more about shortly, because it can be expressed very concretely. On the other hand, when we say a team is predictable, we also mean it as a desirable trait of the team: a team whose forecasts, shared with the organization, can be relied on. Kuba: For balance, we'll also say what predictability is not, in our view, even though some people do understand it this way. Some treat predictability as a generic property, the probability of delivery, even when that probability is very low or that value varies wildly. In a purely mathematical sense that is still predictability, just as a stench is still a smell and brown is still a color. But by predictability we mean something positive, a beneficial phenomenon. We're not pleased that a team "has predictability" when that predictability is one task delivered out of four planned, or 20% of the plan. Mathematically that's predictability, but we distance ourselves from that reading of the word. We consider predictability a positive trait or characteristic, a measure that should tend toward certain values. A predictable team is one that delivers what it plans, not one that delivers however much it usually delivers, even if that usual amount is very little. A team that predictably delivers little is, for us, an unpredictable team, not a predictable one in some odd sense. Jacek: That leads us to the question of how to measure predictability. The general formula is very simple. Roughly speaking, it's the ratio of what was actually completed in a given Sprint or iteration to what was planned, most often expressed simply as a percentage.
Kuba: The details, though, can get quite rich. Different teams feed different components into the formula. The simplest approach counts everything the team works on, regardless of the types of work or plan elements that make up the Sprint or iteration. But we also see teams, and sometimes it makes sense, that count only user stories, or features, or development work alone, whatever it's called on that team. Other teams include tasks or some kind of subtasks, the technical work needed to finish what the Sprint requires. Controversy can start when you begin counting planned bug fixes toward predictability, bugs known to exist when the Sprint starts but which the team plans to resolve. Maintenance tasks can be contentious too: recurring work that you know in advance must be done and that is simply part of the Sprint no matter what happens. We won't dig deeper into those controversies here. I'm just flagging that there is a real question of which types of work to include in the predictability measure. In my view it's something to think through and decide very deliberately: pin down what goes into the formula for your team. Jacek: Maybe this is a good moment for a simple, tangible example. If a team planned to deliver 10 items, to put it very generally, and delivered only two, then by this formula predictability is 20%. If it planned to deliver 10 and delivered 5, predictability is 50%; and if it planned 10 and delivered 12, predictability is 120%. So predictability is a measure whose expected value is really a range.
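The arithmetic in Jacek's 10-items example reduces to a one-liner. Here is a minimal Python sketch for illustration (the function name is ours, not something from the episode):

```python
def predictability(planned: int, delivered: int) -> float:
    """Predictability as a percentage: items delivered / items planned * 100."""
    if planned <= 0:
        raise ValueError("planned must be a positive number of items")
    # Multiply before dividing so the examples below come out as exact floats.
    return delivered * 100 / planned

# The three cases from the episode: 10 items planned each time.
print(predictability(10, 2))   # 20.0
print(predictability(10, 5))   # 50.0
print(predictability(10, 12))  # 120.0
```

What counts as an "item" (stories only, subtasks, bug fixes, maintenance work) is exactly the decision the hosts say each team must make deliberately.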
For us, an acceptable starting point for the conversation is predictability between 80 and 120%. What matters is staying within that range rather than hitting some precise number. In particular, repeatedly achieving exactly 100% may indicate that the measure is being treated too much as a target to reach. You can think of that healthy range the way you think of reference ranges on a blood test: you get a neatly laid-out list of results, and for each value you can usually tell whether it falls within the expected range. Predictability works very similarly, practically identically. Kuba: On predictability, it's also worth covering the tooling: how to measure and calculate it in practice, concretely, in a tool. There are several options; we'll list four. Jacek: Yes, the first tool, still the one we encounter most often in organizations, is Jira. Head to the reports section and find the Velocity Chart. Besides showing how much the team actually completed, that is, the team's velocity, that chart also shows how much was planned for each Sprint. The data and charts should essentially display themselves, provided you observe some basic hygiene when working in Jira. That means Sprints are actually started, at the real moment a Sprint begins; they are closed when the Sprint genuinely ends; and the Sprint contains the work actually being done.
Likewise, the configuration around the board you're on or the project you're running needs to be set up correctly, and then you can say you get this chart out of the box. You barely have to do anything extra to see how predictability has historically played out for your team. Kuba: The second tooling option is simply Excel. Compared with Jira, Excel is far more flexible, though admittedly the data doesn't build itself. If you keep the discipline Jacek describes, Jira computes this on its own; in Excel, inevitably, someone responsible for the process, a team member or some kind of lead, has to enter the data, remember to copy it over, capture the historical baseline, and feed it into the right formulas to produce a result. It's somewhat manual work, but on the other side of the ledger, especially if the team has a more complicated situation or more refined rules about what to include and exclude, the Excel version may turn out to be more trustworthy and under tighter control than tools that simply take the output of some filter or aren't maintained well. Jacek: Other options include any of the tools that help us visualize work and related concepts. Online, that might be Mural or Miro; in person, a whiteboard, a flipchart, or even a plain sheet of paper. What really matters is that the data lives in the places the team gathers around. In my experience, teams working online very often run their reflections on a Mural board, for example.
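The Excel variant described here is really just per-sprint bookkeeping. A minimal Python stand-in for such a spreadsheet (the sprint numbers are invented for illustration; the 80-120% band is the range the hosts treat as a healthy starting point):

```python
# Each row mirrors one spreadsheet line: sprint name, items planned, items delivered.
# The numbers are invented for illustration.
sprints = [
    ("Sprint 41", 10, 7),
    ("Sprint 42", 9, 9),
    ("Sprint 43", 8, 11),
    ("Sprint 44", 10, 4),
]

LOW, HIGH = 80, 120  # the 80-120% band treated as healthy in the episode

for name, planned, delivered in sprints:
    pct = delivered * 100 / planned
    verdict = "within range" if LOW <= pct <= HIGH else "outside range"
    print(f"{name}: planned {planned}, delivered {delivered} -> {pct:.1f}% ({verdict})")
```

As with the spreadsheet, the value is in the trend across sprints, not in any single number.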
And in that case, tracking these process metrics, predictability in particular given this episode's topic, can sit in a natural spot you'll look at anyway when you run your weekly or biweekly reflection. Having the full set of information in a place you routinely visit drastically increases the odds that you'll look at the data and ask what conclusions it holds for the team. Kuba: The last option we'll mention for measuring and surfacing predictability in a tool is BI tooling. In several organizations, independently of one another, I've seen the effect of hooking up the underlying database, most often some Jira instance, perhaps Azure DevOps or a similar tool for tracking, displaying, and closing tasks. Raw data from such tools can be pushed into a BI tool, whether Power BI, Tableau, or something else; several tools are popular across different organizations. It does require specific skills to wire everything up and perhaps configure the reports properly. The potential reward is a rather attractive visualization, perhaps extra filtering, perhaps additional data layered in. In a large organization, showing many teams' results on a single dashboard can be valuable in itself, as can a degree of standardization across teams in which aspects get included. The potential reward is big, but as I said, so is the potential cost. If the team already has these skills, the cost may well be negligible, and sometimes it's worth the investment to get valuable views and metrics.
Kuba: And before we move to the next chapter, a reminder: if you want to go deeper than we do in the podcast, you'll find our paid products at porzadnyagile.pl/sklep. Jacek: On to the next section of today's episode: a few tips on applying predictability measures in practice. Kuba: The first thing I want to start with: account for how innovative the team is. Predictability as a measure should be tracked in a typical delivery team; it's also a trait such a team should have.
That said, in our experience there are a handful of teams that are genuinely highly innovative, doing work that leans heavily on some form of research and development, on investigation and discovery, with a truly high degree of novelty. Because of the sheer chaos and complexity of their research work, such teams simply can't achieve predictability in the sense we use in this episode. So we apply a correction here, an asterisk on predictability: in a given organization, some teams will be inherently unpredictable, and in some companies perhaps all of them, because that's the nature of the product or the industry. So bear in mind that the predictability we've discussed today, and will keep discussing, may not be appropriate for every team or every company, or a measure worth watching. Jacek: At the same time, it's worth noting a curious phenomenon Kuba and I observe: many teams fall into the belief that they are exactly this exceptional, innovative team that, by the nature of its work, cannot work predictably. Our experience is that while we do actually meet such teams, they are decidedly the minority. Even when it really is the research Kuba mentioned, such work can still be planned: you can split it into smaller steps, define acceptance criteria very precisely, and decide in a fairly orderly way whether the work we committed to, not necessarily the results obtained, but the work itself, is something we can plan. For most teams, the work they do is usually of a kind where we can predict what we'll be delivering.
So here Kuba and I want to clearly flag a potential trap: make an honest assessment of whether your work truly bears the marks of being utterly unmanageable and unpredictable, or whether you've merely fallen into the trap of thinking about it that way. Jacek: The second tip: consciously choose the variables in the formula. We've mentioned what the formula might look like and what unit the result is expressed in. The main doubt people have when approaching predictability is whether we should look at the concrete items that make up the scope of a given Sprint or iteration, or rather at the sum of story points. While our first historical attempts at measuring predictability pointed Kuba and me toward story points, today we lean decidedly toward counting the number of items taken into the Sprint. Concretely, in Jira you can switch the chart to show issue count, so that it simply counts the items in the Sprint. This generally moves us toward thinking in terms of throughput and measuring predictability on that basis, rather than classic velocity, which is most often expressed as the sum of story points planned for a given Sprint. Kuba: Why spend time on this in this recording? Because many teams needlessly spend time on, for example, detailed estimation, purely so that an item can be included in the formula. And above all, independent studies confirm it across many teams and companies: the correlation between the number of finished items and the story points completed is strong enough that there's essentially no need to put extra energy into estimating every piece of work.
Especially if it leads to what we consider absurdities, like estimating bugs or technical tasks just so they sum up nicely into bars on a chart. It may turn out that a simple count of whatever items you include in predictability is simply there for the taking; it's easy to compute the formula mechanically, without pouring extra energy into something that adds no value. And let me stress, almost as a refrain on what Jacek said: unfortunately Jira, which from what we see is also the most popular tool, defaults to showing story points. That may mean unestimated items are left out of this kind of predictability formula, and it also muddies predictability when a team suffers from tasks carrying over between Sprints or includes unestimated items in its work. So the default story-point view of predictability can be something of a trap, hence the tip: consciously choose the variables in the formula. Kuba: The third tip: treat predictability as the team's internal compass. A lot of misery happens in organizations where predictability becomes a target. Jacek already touched on this lightly; I'll reinforce it. There are organizations that outright demand it, bake it into annual goals, and tie bonuses to whether the team is predictable, setting specific expected values. Most often I see the expected value set at exactly 100%: do exactly as much as you plan, which leads to certain traps. But I also know an organization where the expectation was that predictability should not exceed, say, 80%. So a predictable team is one that predictably under-delivers a little. Not the happiest idea either.
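Returning briefly to the second tip, the gap between issue-count and story-point predictability is easy to make concrete. In this invented sample sprint, the two unestimated bugs simply vanish from the story-point version, which is the Jira-default trap described above (all item keys and numbers are ours, for illustration only):

```python
# One sprint's plan: item key, type, optional story-point estimate, done or not.
# Sample data is invented for illustration.
planned = [
    {"key": "APP-1", "type": "story", "points": 5,    "done": True},
    {"key": "APP-2", "type": "story", "points": 3,    "done": True},
    {"key": "APP-3", "type": "story", "points": 8,    "done": False},
    {"key": "APP-4", "type": "bug",   "points": None, "done": True},   # unestimated
    {"key": "APP-5", "type": "bug",   "points": None, "done": False},  # unestimated
]

# Issue-count predictability: every planned item counts, estimated or not.
done_count = sum(1 for i in planned if i["done"])
count_pct = done_count * 100 / len(planned)

# Story-point predictability: unestimated items silently drop out of both sums.
pts_planned = sum(i["points"] for i in planned if i["points"] is not None)
pts_done = sum(i["points"] for i in planned if i["points"] is not None and i["done"])
points_pct = pts_done * 100 / pts_planned

print(f"by issue count: {count_pct:.0f}%")    # 3 of 5 items
print(f"by story points: {points_pct:.0f}%")  # 8 of 16 points
```

The two figures differ (60% vs. 50%) even though it's the same sprint, which is why the hosts urge choosing the formula's variables deliberately.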
So here we lean heavily on the idea that predictability is rather an internal measure for the team to gauge its process: a reference point when improving, something to think about during planning, during Retrospectives, and over some longer horizon, but certainly not a basis for handing out rewards or punishments. Like any other measure of this kind, it can easily be distorted or outright corrupted, becoming a goal in itself instead of a credible basis for improvement. Jacek: And the fourth tip: don't rely on predictability alone. Here we strongly recommend that predictability not be the only process measure the team monitors. It's good to start somewhere, but I definitely wouldn't rest on my laurels. For example, it's worth also looking at throughput. You can add some measure of quality, some measure of business value, whatever matters to us at the moment and whatever we want to pay attention to, and then watch the whole set of measures and how they behave relative to one another. It may be that improving one particular measure worsens the results of another. It's worth paying attention to that and configuring the measures so that we get a fairly complete picture of the health of our team and its environment. Kuba: And the final chapter: how to improve a team's predictability. This chapter will be short, because what improves predictability has been the subject of loads of earlier episodes. Jacek and I actually laughed at ourselves for releasing an episode on predictability this late, when we've already covered plenty of practices that improve it. So we won't dig into what exactly each practice means here.
Rather, treat this as a kind of table of contents: our eight recommended practices for improving predictability. If any of them sounds intriguing, or like something you do not use yet, we simply refer you to the materials we link in the episode description. Jacek: OK, so which practices should you apply to improve a team's predictability? Kuba: First of all, stop starting and start finishing. Use short Sprints. Strengthen the team's ownership of the product, and split work into smaller pieces. Jacek: In addition, plan as a team, manage external dependencies, treat the daily stand-up as a safety valve, and improve based on product delivery measures. Kuba: As I said, you will find all of these concepts in our older episodes, which we link in the episode description and on this episode's page, porzadnyagile.pl/140. Jacek: Predictability is a measure, and at the same time a desirable trait, of a team that delivers the scope of work it planned for a Sprint. Predictability is most often expressed as a percentage: the ratio of the number of items actually completed to the number of items originally planned. Kuba: Predictability is a measure whose expected value is a range. In our view it should usually fall between 80 and 120 percent. There is a set of practices that support team predictability, and we encourage you to apply them in your team. Jacek: The causes of a team's lack of predictability can of course vary. As experienced experts, we join a team or a designated part of the company and identify them clearly, along with recommendations for changing the development process so that predictability actually grows. Check out our offer at 202procent.pl/diagnoza. Kuba: And the notes for this episode, the article, the transcript, the links to other recommended materials, and the video recording can all be found at porzadnyagile.pl/140.
Jacek: And that's all for today. Thanks, Kuba. Kuba: Thanks, Jacek. Talk to you soon. ________ That was the full transcript of an episode of the Porządny Agile podcast. Thank you for reading! The post Przewidywalność zespołu first appeared on Porządny Agile.
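The predictability measure the episode describes (items actually completed divided by items originally planned, with a healthy band of roughly 80 to 120 percent) can be sketched in a few lines. This is a minimal illustration of the formula as stated in the episode; the function names and the example numbers are my own.

```python
def predictability(planned: int, completed: int) -> float:
    """Predictability as a percentage: items actually completed
    divided by items originally planned for the Sprint."""
    if planned == 0:
        raise ValueError("a Sprint with no planned items has no predictability")
    return 100.0 * completed / planned


def within_expected_band(value: float, low: float = 80.0, high: float = 120.0) -> bool:
    """The episode suggests treating roughly 80-120% as the expected range,
    rather than demanding exactly 100%."""
    return low <= value <= high


# Example: 10 items planned, 9 delivered -> 90%, inside the band.
p = predictability(planned=10, completed=9)
print(p, within_expected_band(p))  # 90.0 True
```

Note that this counts backlog items, not story points; as the hosts warn, a story-point version silently drops unestimated work from the formula.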
So, what does marketing ops actually look like? Atlassian's Head of Lifecycle Marketing Ops, Kelly Jo Horton, joins Daniel to break down what ops actually is, why it's so complex, and how high-performing teams are evolving the function for 2026 and beyond. She explains why MOPS isn't “just sending an email,” why process is everything, and why marketers need to stop treating ops like a drive-thru and start treating it like a Michelin-star kitchen. She also reveals how Atlassian structures its ops organization and why she believes the MQL is officially dead.

You'll also learn:
> What modern marketing ops actually does and why it varies by company
> How AI can automate repetitive ops tasks (like list cleaning and lead investigations)
> How Atlassian uses Jira, Confluence, Slack bots, and Loom to run ops like engineering

This is for anyone in marketing, RevOps, or GTM who wants to build a scalable system…and for every marketer who's ever said “it's just an email.”

Easily record and share AI-powered video messages with your teammates and customers to supercharge productivity at https://www.loom.com/
Follow Kelly Jo: LinkedIn: https://www.linkedin.com/in/kellyjohorton/
Follow Daniel: LinkedIn: https://www.linkedin.com/in/daniel-murray-marketing/
Sign up for The Marketing Millennials newsletter: https://themarketingmillennials.com/
Daniel is a Workweek friend, working to produce amazing podcasts. To find out more, visit: https://workweek.com/
Joining the TJL crew this week is Long Nguyen, Senior Software Engineer at Atlassian. Long has worked on Rovo from its early days on Rovo Chat through his current responsibility for improving Rovo Agents. Tune in to hear background stories from the initial development, as well as the tips and tricks Alex (and you) need to improve your prompts to Rovo Agents.

Thank you to Revyz for backing us up and making The Jira Life possible. https://www.revyz.io/

The Jira Life
=====================================
Having trouble keeping up with when we are live? Sign up for our Atlassian Community Group! https://ace.atlassian.com/the-jira-life/
Or follow us on LinkedIn! / the-jira-life
Become a member on YouTube to get access to perks: https://www.youtube.com/@thejiralife/...
Hosts:
Alex "Dr. Jira" Ortiz / alexortiz89 / @apetechtechtutorials
Rodney "The Jira Guy" Nissen / rgnissen https://thejiraguy.com
Sarah Wright / satwright
Producer: "King Bob" Robert Wen / robert-wen-csm-spc6-a552051
Executive Producer: Lina Ortiz
Music provided by Monstercat:
=====================================
Intro: Nitro Fun - Cheat Codes / monstercat
Outro: Fractal - Atrium / monstercatinstinct
AI agents today enable "hyper-automation" of tasks in the enterprise. That is the mission of the French startup Mindflow. Interview: Evan Bourgouin, Chief Operating Officer of Mindflow. Agentic hyper-automation: concretely, what does it change for companies? We automate repetitive tasks wherever a human, a computer, and a process come together. Many organizations already use services like AWS, Microsoft Azure, Salesforce, or SAP, but these systems often remain siloed. At Mindflow, our obsession is integration: connecting every service, every operation, at the most granular level. On that foundation, we automate processes in cybersecurity, IT, or human resources, for example onboarding a new employee: creating access, roles, and accounts in tools like Jira or a CRM. These tasks are essential, but they are not where human value is greatest. What is the impact on cybersecurity and on team workload? In cybersecurity, receiving 100 alerts a day on a SIEM like Splunk or Microsoft Sentinel has become commonplace. With a small team, some of them inevitably go unhandled. So we automate part of these responses, while keeping a human in the loop. It radically changes daily work: this is a sector where burnout is very high. Young analysts arrive and get buried in repetitive tasks.
By removing that burden, we let them focus on analyzing and resolving new threats. Users range from the C-level to interns: everyone regains the capacity to create and to improve their work by building on the platform. Automation versus agentic: how do you explain the difference? Automation is deterministic: same input → same output. An agentic system adapts its behavior to context, for example a different alert in ServiceNow or an anomaly detected in an ERP. But you don't need AI everywhere: some companies don't want to send their data into AI models for confidentiality reasons. The real difference is that we have solved the integration problem, which makes Mindflow "the last-mile AI." Once you can connect to AWS, Azure, Salesforce, Jira, an ERP, or a data lake, the agent can really act. Without integration, nothing is possible. How does a company start an automation project? It all begins with internal will and a supportive culture. With our clients, often large groups such as LVMH, Hermès, Thales, or Auchan, we take stock: where are the bottlenecks, which teams are overloaded, which profiles want to become "builders". Once the integration is in place, everything accelerates. Quick wins are frequently found in cyber, IT, or operational support, but each company has its own use cases, even when they use the same tools.-----------♥️ Support: https://mondenumerique.info/don
In this episode we put another tool from the Atlassian ecosystem on the workbench: Great Gadgets. It is an add-on for Jira (Data Center and Cloud) that lets you build rich dashboards of flow metrics and then share them with teams, for example on Confluence. I cover:
- who Great Gadgets is for, and in which organizations it makes the most sense,
- what the licensing model looks like, and why the Marketplace price is not always what you actually pay,
- how to configure gadgets on dashboards (data sources: board, filter, JQL),
- how to use the lead time histogram and percentiles (50, 85, 95) instead of the average,
- what the Work in Progress Aging Chart (WIP Aging) gives you for tracking how work in progress ages,
- how to measure delivery rate (throughput / Kanban Velocity),
- how to read Time in Status and spot where tasks sit and stew,
- what the CFD, the WIP Run Chart, and the cycle time trend (scatterplot) look like in Great Gadgets.
If you prefer to see rather than just listen, check out kanbanprzykawie.pl, where the post for this episode includes screenshots of the gadgets discussed, and the "Kanban przy Kawie" YouTube channel, where a video version with the visualizations appears a few days after the audio premiere. Finally, a request: which tools would you like me to put on the workbench next? Commercial add-ons? Or maybe something built in-house? Let me know in a comment or in a message on LinkedIn.
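To illustrate why percentiles (50, 85, 95) tell you more about lead times than the average does, here is a minimal sketch. The lead-time data is invented for the example, and the nearest-rank percentile method is my own choice for simplicity; Great Gadgets computes these values for you.

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest value such that at least
    p% of the data points are less than or equal to it."""
    if not values:
        raise ValueError("no data")
    s = sorted(values)
    k = max(1, math.ceil(p / 100 * len(s)))  # 1-based nearest-rank index
    return s[k - 1]

# Hypothetical lead times in days for 20 finished work items.
lead_times = [2, 3, 3, 4, 4, 5, 5, 5, 6, 7, 7, 8, 9, 10, 12, 14, 18, 21, 30, 45]

mean = sum(lead_times) / len(lead_times)
print(f"mean: {mean:.1f} days")  # 10.9, dragged up by a few outliers
for p in (50, 85, 95):
    print(f"{p}th percentile: {percentile(lead_times, p)} days")
```

Half of these items finished in 7 days or less, even though the average is 10.9 days; the 85th and 95th percentiles (18 and 30 days here) are what you can responsibly promise, which is exactly the point the episode makes about preferring percentiles to the mean.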
In this edition of Kurfii Hardhaa: the situation in the Horn of Africa and the Arab region; developments unfolding in Nepal, Argentina, and South Sudan; and brief personal commentary on the actions being carried out by Abiy Ahmed Ali, on citizens being denied dignity as human beings, on the absence of the rule of law, and more. Stay tuned!
In this week's program: brief personal commentary on various reports from around the world and on the struggle for justice, for rights, and for true history. Stay tuned!
In this episode, Dave and Jamison answer these questions: Hey Dave and Jamison, Big fan of the show — listening from Portugal! (Proof that even across the Atlantic, software politics are universal.) I'm a tech lead, and lately I've noticed a culture where people seem to care way more about how things look than what actually gets done. It's like the appearance of productivity matters more than real impact. Honestly, it drives me nuts!! I know politics are part of any organization, and way more in a leadership role, but this feels excessive. As someone who values substance and solid engineering, how do I deal with or influence this kind of culture without losing my sanity (or turning into one of those “optics-first” people myself)? Thanks for all the insights and laughs. Kudos from Portugal! Listener Charlie says, I'm fresh out of college at my first software engineering job. Several months ago I was appointed the accessibility champion for my team. I proposed a few items in the quarterly planning session, but I think it wasn't enough. My project manager called out our whole team, but I think it was mostly aimed at me. I've been struggling with creating Jira cards, shaping with the team, writing a11y guidelines, etc. It's tedious and I'm not really familiar with this kind of work. How can I get better at the “other stuff” besides just writing code? P.S. I volunteered for this responsibility
The key to longevity in today's ever-changing tech landscape? Yes, you've heard it before: maintaining a positive outlook and a growth mindset so you can remain as versatile as possible. Easier said than done? Perhaps. But this week's guest, Japna Sethi, absolutely embodies this. Japna runs the Jira product group at Atlassian, and has turned her origins in physics and materials science into a career that spans hardware design, software development, growth product management, advising, angel investing, even real estate. Hear about her path and be inspired by her advice on networking, lifelong learning, and doubling down on your strengths. Japna encourages us to get to know our authentic selves better, and to engage in regular, healthy bouts of self-reflection.

00:00 Introduction
01:48 An unusual path to product management
06:08 The scientific method works for product, too
09:15 Always be learning
10:13 Double down on your strengths
12:45 Leverage network effects
13:38 How to think about angel investing
15:30 The “Get Sh*t Done” framework
21:00 Why we should be excited about AI
22:50 Save time to explore
24:45 Where to learn more about Japna
Karim Harbott: From Requirements Documents to Customer Obsession—Redefining the PO Role Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. The Great Product Owner: Strategic, Customer-Obsessed, and Vision-Driven "The PO role in the team is strategic. These POs focus on the customer, outcomes, and strategy. They're customer-obsessed and focus on the purpose and the why of the product." - Karim Harbott Karim believes the industry fundamentally misunderstands what a Product Owner should be. The great Product Owners he's seen are strategic thinkers who are obsessed with the customer. They don't just manage a backlog—they paint a vision for the product and help the entire team become customer-obsessed alongside them. These POs focus relentlessly on outcomes rather than outputs, asking "why are we building this?" before diving into "what should we build?" They understand the purpose of the product and communicate it compellingly. Karim references Amazon's "working backwards" approach, where Product Owners start with the customer experience they want to create and work backwards to figure out what needs to be built. Great POs also embrace the framework of Desirability (what customers want), Viability (what makes business sense), Feasibility (what's technically possible), and Usability (what's easy to use). While the PO owns desirability and viability, they collaborate closely with designers on usability and technical teams on feasibility. This is critical: software is a team sport, and great POs recognize that multiple roles share responsibility for delivery. Like David Marquet teaches, they empower the team to own decisions rather than dictating every detail. The result? Teams that understand the "why" and can innovate toward it autonomously. 
Self-reflection Question: Does your Product Owner paint a compelling vision that inspires the team, or do they primarily manage a list of tasks? The Bad Product Owner: The User Story Writer "The user story writer PO thinks it's their job to write full, long requirements documents, put it in JIRA, and assign it to the team. This is far away from what the PO role should be." - Karim Harbott The anti-pattern Karim sees most often is the "User Story Writer" Product Owner. These POs believe their job is to write detailed requirements documents, load them into JIRA, and assign them to the team. It's essentially waterfall disguised as Agile—treating user stories like mini-specifications rather than conversation starters. This approach completely misses the collaborative nature of product development. Instead of engaging the team in understanding customer needs and co-creating solutions, these POs hand down fully-formed requirements and expect the team to execute without question. The problem is that this removes the team's ownership and creativity. When POs act as the sole source of product knowledge, they become bottlenecks. The team can't make smart tradeoffs or innovate because they don't understand the underlying customer problems or business context. Using the Desirability-Viability-Feasibility-Usability framework, bad POs try to own all four dimensions themselves instead of recognizing that designers, developers, and other roles bring essential perspectives. The result is disengaged teams, slow delivery, and products that miss the mark because they were built to specifications rather than shaped by collaborative discovery. Software is a team sport—but the User Story Writer PO forgets to put the team on the field. Self-reflection Question: Is your Product Owner engaging the team in collaborative discovery, or just handing down requirements to be implemented? [The Scrum Master Toolbox Podcast Recommends]
In this episode, Aydin chats with Allan Isfan, Senior Director of Global Video Platform at Warner Bros Discovery, about how AI is reshaping creativity, software development, and large-scale enterprise culture. Allan explains how he drives AI literacy for 1,500+ employees, the power of internal demos and sandboxes, and gives a hands-on walkthrough of generative video tools like Gemini V3, Flow, and Sora. He also dives into AI video analysis, the Wizard of Oz project at The Sphere, and the future of creative storytelling powered by AI.
If you're a leader in game dev who feels stuck, able to spot problems but struggling to make a real difference, there is a path forward that levels up your leadership and accelerates your team, game, and career. Sign up here to learn more: https://forms.gle/nqRTUvgFrtdYuCbr6

Are you inadvertently forcing your team to serve a tool, instead of letting your tools serve your team and game? In a recent conversation with Clinton Keith, Ben asked how Clint would help all of game development. Clint's response? "Delete Jira" - and Ben laughed to keep from crying. Jira is a powerful tool, but in the hands of uninformed game development leadership, it often becomes a weapon against the very teams it's meant to help. Ben, who has used Jira and other tools as a producer within large studios, dissects the common, catastrophic misuses of Jira. While you might be better off deleting the tool, the real work is about fixing the broken cultural and organizational patterns that turn a simple work management system into the "boss" of your game studio. Learn the four cascading failure patterns that are draining your team's effectiveness and how to correct them, making collaboration and player outcomes your true north.

What You'll Learn In This Episode:
Why senior leaders keep breaking Jira without realizing it
How Jira causes centralization and decision bottlenecks
What Jira DOESN'T tell you, and why that makes it dangerous
How perverse incentives emerge from overreliance on Jira and other tools like it
The reason you end up feeling like a slave to the tool
How to avoid the traps Jira leads you into

Connect with us:
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Mike Cannon-Brookes is the Co-Founder and Co-CEO of Atlassian, the $50BN software giant behind products like Jira, Confluence, and Trello. Since founding the company in 2002, he has scaled it to over 300,000 customers globally, generating more than $5BN in annual revenue. Atlassian now employs over 10,000 people across 13 countries and is one of the most successful bootstrapped-to-IPO stories in tech history. Mike is also a leading climate investor and co-owner of several major sports teams.

AGENDA:
00:00 Why Unreasonable Men Win in Startups
07:22 How to Make Co-CEOs Work
13:22 Are We in an AI Bubble? Is Everything Overvalued?
26:46 The Future of Software Development: More or Less Devs
32:53 Do Margins Matter in a World of AI
34:02 The Future of Vibe Coding…
36:35 Does Defensibility Exist in a World of AI
42:09 Is Per Seat Pricing Dead in a World of AI
49:01 The Founder Journey and Leadership
54:28 Quick Fire Round: Parenting Advice, Relationship to Money
Nesrine Changuel helped build Spotify, Google Chrome, and Google Meet. Her work has helped her discover the importance of emotional connection in building successful products. At Google, she served as a dedicated “delight PM,” a role specifically focused on making products more delightful. She recently published Product Delight, a book that provides a practical framework for creating products that serve both functional and emotional needs. Based in Paris, she now coaches founders and CPOs on implementing delight strategies in their organizations.

What you'll learn:
1. Why delight is a business strategy, not just “sprinkling confetti” on top of functionality
2. How to identify emotional motivators that drive product retention
3. The 50-40-10 rule for balancing delight in your roadmap
4. The 4-step delight model
5. The origin story of Spotify's Discover Weekly
6. Why B2B products need delight just as much as B2C products
7. How to get buy-in from skeptical leaders who think delight is a luxury

Brought to you by:
• DX—The developer intelligence platform designed by leading researchers: https://getdx.com/lenny
• Jira Product Discovery—Confidence to build the right thing: https://atlassian.com/lenny
• LucidLink—Real-time cloud storage for teams: https://www.lucidlink.com/lenny

Transcript: https://www.lennysnewsletter.com/p/a-4-step-framework-for-building-delightful-products

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/174199489/my-biggest-takeaways-from-this-conversation

Where to find Nesrine Changuel:
• LinkedIn: https://www.linkedin.com/in/nesrinechanguel/
• Newsletter: https://nesrinechanguel.substack.com/
• Website: https://nesrine-changuel.com/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Nesrine and product delight
(04:56) Why delight matters
(09:17) What makes a feature “delightful”
(12:29) The three pillars of delight
(13:03) Pillar 1: Removing friction (Uber refund example)
(15:07) Pillar 2: Anticipating needs (Revolut eSIM example)
(17:21) Pillar 3: Exceeding expectations (Edge coupon example)
(18:35) The “confetti effect” and when it actually works
(22:02) B2B vs. B2C: Why all products need emotional connection
(29:52) The Delight Model: A 4-step framework
(30:57) Step 1: Identifying user motivators (functional and emotional)
(33:55) Step 2: Converting motivators into product opportunities
(34:46) Step 3: Identifying solutions with the delight grid
(36:46) Step 4: Validating ideas with the delight checklist
(40:22) The Delight Model summarized
(42:18) The importance of familiarity (Spotify Discover Weekly story)
(45:21) Real examples: Chrome's tab management solution
(51:32) Google Meet's solution for “Zoom fatigue”
(55:02) Getting buy-in from skeptical leaders
(59:39) Prioritizing delight: The 50-40-10 rule
(1:02:41) Creating a culture of delight in your organization
(1:06:45) The habituation effect
(1:08:15) When delight goes wrong: Apple reactions example
(1:10:21) How delight motivates product teams
(1:12:24) Lightning round and final thoughts

Referenced:
• Spotify: https://open.spotify.com/
• Linear: https://linear.app/
• How Linear builds product: https://www.lennysnewsletter.com/p/how-linear-builds-product
• Jira: https://www.atlassian.com/software/jira
• Asana: https://asana.com/
• Monday: https://monday.com/
• The Product Delight Model: https://nesrinechanguel.substack.com/p/the-product-delight-model
• Revolut: https://www.revolut.com/
• How Revolut trains world-class product managers: The “local CEO” model, raw intellect over experience, and a cultural obsession with building wow products | Dmitry Zlokazov (Head of Product): https://www.lennysnewsletter.com/p/how-revolut-trains-world-class-product-managers
• Microsoft Cashback: https://www.microsoft.com/en-us/edge/features/shopping-cashback
• Superhuman's secret to success: Ignoring most customer feedback, manually onboarding every new user, obsessing over every detail, and positioning around a single attribute: speed | Rahul Vohra (CEO): https://www.lennysnewsletter.com/p/superhumans-secret-to-success-rahul-vohra
• Brian Chesky's secret mentor who died 9 times, started the Burning Man board, and built the world's first midlife wisdom school | Chip Conley (founder of MEA): https://www.lennysnewsletter.com/p/chip-conley
• Workday: https://www.workday.com/
• SAP: https://www.sap.com/
• ServiceNow: https://www.servicenow.com/
• Salesforce: https://www.salesforce.com/
• GitHub: https://github.com/
• Atlassian: https://www.atlassian.com/
• Snowflake: https://www.snowflake.com/
• Data Superheroes: https://www.snowflake.com/en/data-superheroes/
• Google Meet: https://meet.google.com/
• Andy Nesling on LinkedIn: https://www.linkedin.com/in/andynesling/
• Matic: https://maticrobots.com/
• Diego Sanchez's (Senior Product Manager at Buffer) post on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7365014292091346945/
• Miro: https://miro.com/
• Arc browser: https://arc.net/
• Competing with giants: An inside look at how The Browser Company builds product | Josh Miller (CEO): https://www.lennysnewsletter.com/p/competing-with-giants-an-inside-look
• Migros Supermarket: https://www.migros.ch/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Linear's secret to building beloved B2B products | Nan Yu (Head of Product): https://www.lennysnewsletter.com/p/linears-secret-to-building-beloved-b2b-products-nan-yu
• Suno: https://suno.com
• Snapchat: https://www.snapchat.com/
• Use Reactions, Presenter Overlay, and other effects when videoconferencing on Mac: https://support.apple.com/en-us/105117
• Dr. Lipp: https://drlipp.com/
• How to be the best coach to product people | Petra Wille (Strong Product People): https://www.lennysnewsletter.com/p/how-to-be-the-best-coach-to-product
• The Great American Baking Show: https://www.imdb.com/title/tt21822674/
• Le Meilleur Pâtissier: https://en.wikipedia.org/wiki/Le_Meilleur_P%C3%A2tissier
• The Upside on Amazon Prime: https://www.amazon.com/gp/video/detail/amzn1.dv.gti.3cb8500f-31af-9f4f-5dec-701e086d58e8
• The Intouchables: https://www.imdb.com/title/tt1675434/
• Yoyo stroller: https://www.stokke.com/USA/en-us/category/strollers/yoyo-strollers
• UppaBaby strollers: https://uppababy.com/strollers/

Recommended books:
• Product Delight: How to Make Your Product Stand Out with Emotional Connection: https://www.amazon.com/Product-Delight-Stand-Emotional-Connection-ebook/dp/B0FGZ93D9Y/
• Factfulness: Ten Reasons We're Wrong About the World—and Why Things Are Better Than You Think: https://www.amazon.com/Factfulness-Reasons-World-Things-Better/dp/1250107814
• STRONG Product Communities: The Essential Guide to Product Communities of Practice: https://www.amazon.com/STRONG-Product-Communities-Essential-Practice/dp/3982235189/r

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com