Software design using test cases
What happens when Agile, Extreme Programming, and Test-Driven Development meet a world dominated by hardware, graphical programming, and binary artifacts? In this episode of the Mob Mentality Show, we're joined by Sam Taggart to explore what it really takes to introduce modern software engineering practices into environments like LabVIEW, embedded systems, and industrial software teams. These are contexts where deployment can be slow, feedback loops can be expensive, and "just refactor it" can feel like it isn't an option. We dig into why applying XP, TDD, mob programming, and continuous integration looks very different when your software is tightly coupled to physical devices, firmware, and test equipment. Sam shares practical insights on adapting Agile ideas so they actually work in hardware-constrained environments, rather than forcing patterns designed for web apps onto teams that live in a very different reality. A major theme of the conversation is change. How do you sell new engineering practices to skeptical teams? How do you introduce better ways of working without triggering resistance or fear? And how do you help organizations move forward when legacy code, specialized tools, and long-established habits get in the way? We also spend time on a deceptively simple but critical idea: knowing what "good" looks like. From testing strategies and code quality to team collaboration and delivery confidence, having a clear vision of good engineering makes it far easier to experiment with better practices and avoid cargo-cult Agile. This episode is especially relevant if you work with LabVIEW, embedded systems, firmware, industrial or hardware-adjacent software, or if you're leading teams where Agile adoption feels harder than the books make it sound. Topics include:
- Applying TDD and XP in graphical, binary, and legacy codebases
- Mob programming and collaboration in hardware-heavy environments
- Continuous integration and delivery when deployment is constrained
- Introducing Agile ideas without alienating experienced engineers
- Reducing risk while improving feedback and quality
- Helping teams see and aim for better engineering outcomes
Video and Show Notes: https://youtu.be/Kxzn_2aYMIM
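A technique that often comes up in this hardware-constrained context (a general XP practice, not something prescribed in the episode) is putting a thin interface in front of the instrument so most TDD cycles never touch real equipment. A minimal Python sketch, with all device names invented for illustration:

```python
from typing import Protocol

class PowerSupply(Protocol):
    """Thin seam around a physical instrument (hypothetical device)."""
    def set_voltage(self, volts: float) -> None: ...
    def read_voltage(self) -> float: ...

class FakePowerSupply:
    """In-memory stand-in so tests run without hardware on the bench."""
    def __init__(self) -> None:
        self.volts = 0.0
    def set_voltage(self, volts: float) -> None:
        self.volts = volts
    def read_voltage(self) -> float:
        return self.volts

def ramp_to(supply: PowerSupply, target: float, step: float = 0.5) -> None:
    """Logic under test: ramp up gradually instead of jumping to target."""
    while supply.read_voltage() < target:
        supply.set_voltage(min(supply.read_voltage() + step, target))

def test_ramp_reaches_target():
    fake = FakePowerSupply()
    ramp_to(fake, 3.3)
    assert abs(fake.read_voltage() - 3.3) < 1e-9
```

The production build wires in the real driver behind the same interface; only a thin integration-test layer ever needs the physical device.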
BONUS: The Operating System for Software-Native Organizations - The Five Core Principles
In this BONUS episode, the final installment of our Special Xmas 2025 reflection on software-native businesses, we explore the five fundamental principles that form the operating system for software-native organizations. Building on the previous four episodes, this conversation provides the blueprint for building organizations that can adapt at the speed of modern business demands, where the average company lifespan on the S&P 500 has dropped from 33 years in the 1960s to a projected 12 years by 2027.

The Challenge of Adaptation
"What we're observing in Ukraine is adaptation happening at a speed that would have been unthinkable in traditional military contexts - new drone capabilities emerge, countermeasures appear within days, and those get countered within weeks."
The opening draws a powerful parallel between the rapid adaptation we're witnessing in drone warfare and the existential threats facing modern businesses. While our businesses aren't facing literal warfare, they are confronting dramatic disruption. Clayton Christensen documented this in "The Innovator's Dilemma," but what he observed in the 1970s and 80s is happening exponentially faster now, with software as the accelerant. If we can improve businesses' chances of survival even by 10-15%, we're talking about thousands of companies that could thrive instead of fail, millions of jobs preserved, and enormous value created. The central question becomes: how do you build an organization that can adapt at this speed?

Principle 1: Constant Experimentation with Tight Feedback Loops
"Everything becomes an experiment. Not in the sense of being reckless or uncommitted, but in being clear about what we're testing and what we expect to learn. I call this 'work like a scientist': learning is the goal."
Software developers have practiced this for decades through Test-Driven Development, and this TDD mindset is now becoming the ruling metaphor for managing products and entire businesses. The practice involves framing every initiative with three clear elements: the goal (what are we trying to achieve?), the action (what specific thing will we do?), and the learning (what will we measure to know if it worked?). When a client says "we need to improve our retrospectives," software-native organizations don't just implement a new format. Instead, they connect it to business value - improving the NPS score for users of a specific feature by running focused retrospectives that explicitly target user pain points, and tracking both the improvements implemented and the actual NPS impact. After two weeks, you know whether it worked. The experiment mindset means you're always learning, never stuck. This is TDD applied to organizational change, and it's powerful because every process change connects directly to customer outcomes.

Principle 2: Clear Connection to Business Value
"Software-native organizations don't measure success by tasks completed, story points delivered, or features shipped - or even cycle time or throughput. They measure success by business outcomes achieved."
While this seems obvious, most organizations still optimize for output, not outcomes. The practice uses Impact Mapping or similar outcome-focused frameworks where every initiative answers three questions: What business behavior are we trying to change? How will we measure that change? What's the minimum software needed to create that change?
A financial services client wanted to "modernize their reporting system" - in project terms, a 12-month initiative with dozens of features. Reframed through a business value lens, the goal became reducing the time analysts spend preparing monthly reports from 80 hours to 20 hours, measured by tracking actual analyst time, starting with automating just the three most time-consuming report components. The first delivery reduced the time to 50 hours - not perfect, but 30 hours saved, with clear learning about which parts of reporting actually mattered. The organization wasn't trying to fulfill requirements; they were laser-focused on the business value that actually mattered. When you're connected to business value, you can adapt. When you're committed to a feature list, you're stuck.

Principle 3: Software as Value Amplifier
"Software isn't just 'something we do' or a support function. Software is an amplifier of your business model. If your business model generates $X of value per customer through manual processes, software should help you generate $10X or more."
Before investing in software, ask whether it can amplify your business model by 10x or more - not a 10% improvement, but 10x. That's the threshold where software's unique properties (zero marginal cost, infinite scale, instant distribution) actually matter, and where the cost/value curve starts to invert. Remember: software is still the slowest and most expensive way to check whether a feature would deliver value, so you had better expect a 10x or greater return. Stripe exemplifies this principle perfectly. Before Stripe, accepting payments online required a merchant account (weeks to set up), integration with payment gateways (months of development), and PCI compliance (expensive and complex). Stripe reduced that to adding seven lines of code - not 10% easier, but 100x easier. This enabled an entire generation of internet businesses that couldn't have existed otherwise: subscription services, marketplaces, on-demand platforms. That's software as amplifier. It didn't optimize the old model; it made new models possible. If your software initiatives are about 5-10% improvements, ask yourself: is software the right medium for this problem, or should you focus where software can create genuine amplification?

Principle 4: Software as Strategic Advantage
"Software-native organizations use software for strategic advantage and competitive differentiation, not just optimization, automation, or cost reduction. This means treating software development as part of your very strategy, not as a way to implement a strategy that is separate from the software."
This concept, discussed with Tom Gilb and Simon Holzapfel on the podcast as "continuous strategy," means that instead of creating a strategy every few years and deploying it like a project, strategy and execution are continuously intertwined when it comes to software delivery. The practice involves organizing around competitive capabilities that software uniquely enables by asking: How can software 10x the value we generate right now? What can we do with software that competitors can't easily replicate? Where does software create a defensible advantage? How does our software create compounding value over time? Amazon Web Services didn't start as a product strategy but emerged from Amazon building internal capabilities to run their e-commerce platform at scale. They realized they'd built infrastructure that was extremely hard to replicate and asked: "What if we offered it to others?"
AWS became Amazon's most profitable business - not because they optimized their existing retail business, but because they turned an internal capability into a strategic platform. The software wasn't supporting the strategy - the software became the strategy. Compare this to companies that use software just for cost reduction or process optimization: they're playing defense. Software-native companies use software to play offense, creating capabilities that change the competitive landscape. Continuous strategy means your software capabilities and your business strategy evolve together, in real time, not in annual planning cycles.

Principle 5: Real-Time Observability and Adaptive Systems
"Software-native organizations use telemetry and real-time analytics not just to understand their software, but to understand their entire business and adapt dynamically. Observability practices from DevOps are actually ways of managing software delivery itself. We're bootstrapping our own operating system for software businesses."
This principle connects back to Principle 1 but takes it to the organizational level. The practice involves building systems that constantly sense what's happening and can adapt in real time: deploy with feature flags so you can turn capabilities on and off instantly, use A/B testing not just for UI tweaks but for business model experiments, instrument everything so you know how users actually behave, and build feedback loops that let the system respond automatically. Social media companies and algorithmic trading firms already operate this way. Instagram doesn't deploy a new feed algorithm and wait six months to see if it works - they're constantly testing variations, measuring engagement in real time, adapting the algorithm continuously. The system is sensing and responding every second. High-frequency trading firms make thousands of micro-adjustments per day based on market signals. Imagine applying this to all businesses: a retail company that adjusts pricing, inventory, and promotions in real time based on demand signals; a healthcare system that dynamically reallocates resources based on patient flow patterns; a logistics company whose routing algorithms adapt to traffic, weather, and delivery success rates continuously. This is the future of software-native organizations - not just fast decision-making, but systems that sense and adapt at software speed, with humans setting goals and constraints while software executes continuous optimization. We're moving from "make a decision, deploy it, wait to see results" to "deploy multiple variants, measure continuously, let the system learn." This closes the loop back to Principle 1 - everything is an experiment, but now the experiments run automatically at scale, with near-real-time signal collection and decision making.

It's Experiments All The Way Down
"We established that software has become societal infrastructure. That software is different - it's not a construction project with a fixed endpoint; it's a living capability that evolves with the business."
This five-episode series has built a complete picture: Episode 1 established that software is societal infrastructure and fundamentally different from traditional construction. Episode 2 diagnosed the problem - project management thinking treats software like building a bridge, creating cascade failures throughout organizations. Episode 3 showed that solutions already exist, with organizations like Spotify, Amazon, and Etsy practicing software-native development successfully.
Episode 4 exposed the organizational immune system - the four barriers preventing transformation: the project mindset, funding models, business/IT separation, and risk management theater. Today's episode provides the blueprint - the five principles forming the operating system for software-native organizations. This isn't theory. This is how software-native organizations already operate. The question isn't whether this works - we know it does. The question is: how do you get started?

The Next Step In Building A Software-Native Organization
"This is how transformation starts - not with grand pronouncements or massive reorganizations, but with conversations and small experiments that compound over time. Software is too important to society to keep managing it wrong."
Start this week by doing two things. First, start a conversation: pick one of these five principles - whichever resonates most with your current challenges - and share it with your team or leadership. Don't present it as "here's what we should do" but as "here's an interesting idea - what would this mean for us?" That conversation will reveal where you are, what's blocking you, and what might be possible. Second, run one small experiment: take something you're currently doing and frame it as an experiment with a clear goal, action, and learning measure. Make it small, make it fast - one week maximum, 24 hours if you can - then stop and learn. You now have the blueprint. You understand the barriers. You've seen the alternatives. The transformation is possible, and it starts with you.

Recommended Further Reading
- Tom Gilb and Simon Holzapfel episodes on continuous strategy
- Clayton Christensen, The Innovator's Dilemma
- Gojko Adzic, Impact Mapping
- Ukraine drone warfare
- Company lifespan statistics: Innosight research on S&P 500 turnover
- Stripe's impact on internet businesses
- Amazon AWS origin story
- DevOps observability practices

About Vasco Duarte
Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success. You can link with Vasco Duarte on LinkedIn.
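Principle 1's goal/action/learning framing is concrete enough to sketch in code. A minimal Python illustration - the metric, numbers, and names here are hypothetical, not from the episode:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Experiment:
    goal: str                    # what are we trying to achieve?
    action: str                  # what specific thing will we do?
    metric: Callable[[], float]  # what will we measure to know it worked?
    target: float                # threshold that counts as success

    def evaluate(self) -> bool:
        return self.metric() >= self.target

def measure_nps() -> float:
    return 42.0  # stand-in for a real analytics query

retro_experiment = Experiment(
    goal="Raise NPS for users of feature X",
    action="Run pain-point-focused retrospectives for two weeks",
    metric=measure_nps,
    target=40.0,
)

if __name__ == "__main__":
    print("Keep the change" if retro_experiment.evaluate() else "Revert and learn")
```

The shape enforces the discipline: an initiative without a measurable target simply cannot be constructed.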
In this special crossover episode with the brand-new Embedded AI Podcast, Luca and Jeff are joined by Ryan Torvik, Luca's co-host on the Embedded AI podcast, to explore the intersection of AI-powered development tools and agile embedded systems engineering. The hosts discuss practical strategies for using Large Language Models (LLMs) effectively in embedded development workflows, covering topics like context management, test-driven development with AI, and maintaining code quality standards in safety-critical systems. The conversation addresses common anti-patterns that developers encounter when first adopting LLM-assisted coding, such as "vibe coding" yourself off a cliff by letting the AI generate too much code at once, losing control of architectural decisions, and failing to maintain proper test coverage. The hosts emphasize that while LLMs can dramatically accelerate prototyping and reduce boilerplate coding, they require even more rigorous engineering discipline - not less. They discuss how traditional agile practices like small commits, continuous integration, test-driven development, and frequent context resets become even more critical when working with AI tools. For embedded systems engineers working in safety-critical domains like medical devices, automotive, and aerospace, the episode provides valuable guidance on integrating AI tools while maintaining deterministic quality processes. The hosts stress that LLMs should augment, not replace, static analysis tools and human code reviews, and that developers remain fully responsible for AI-generated code. Whether you're just starting with AI-assisted development or looking to refine your approach, this episode offers actionable insights for leveraging LLMs effectively while keeping the reins firmly in hand.
## Key Topics
* [03:45] LLM Interface Options: Web, CLI, and IDE Plugins - Choosing the Right Tool for Your Workflow
* [08:30] Prompt Engineering Fundamentals: Being Specific and Iterative with LLMs
* [12:15] Building Effective Base Prompts: Learning from Experience vs. Starting from Templates
* [16:40] Context Window Management: Avoiding Information Overload and Hallucinations
* [22:10] Understanding LLM Context: Files, Prompts, and Conversation History
* [26:50] The Nature of Hallucinations: Why LLMs Always Generate, Never Judge
* [29:20] Test-Driven Development with AI: More Critical Than Ever
* [35:45] Avoiding 'Vibe Coding' Disasters: The Importance of Small, Testable Increments
* [42:30] Requirements Engineering in the AI Era: Becoming More Specific About What You Want
* [48:15] Extreme Programming Principles Applied to LLM Development: Small Steps and Frequent Commits
* [52:40] Context Reset Strategies: When and How to Start Fresh Sessions
* [56:20] The V-Model Approach: Breaking Down Problems into Manageable LLM-Sized Chunks
* [01:01:10] AI in Safety-Critical Systems: Augmenting, Not Replacing, Deterministic Tools
* [01:06:45] Code Review in the AI Age: Maintaining Standards Despite Faster Iteration
* [01:12:30] Prototyping vs. Production Code: The Superpower and the Danger
* [01:16:50] Shifting Left with AI: Empowering Product Owners and Accelerating Feedback Loops
* [01:19:40] Bootstrapping New Technologies: From Zero to One in Minutes Instead of Weeks
* [01:23:15] Advice for Junior Engineers: Building Intuition in the Age of AI-Assisted Development
## Notable Quotes
> "All of us are new to this experience. Nobody went to school back in the 80s and has been doing this for 40 years.
We're all just running around, bumping into things and seeing what works for us." — Ryan Torvik
> "An LLM is just a token generator. You stick an input in, and it returns an output, and it has no way of judging whether this is correct or valid or useful. It's just whatever it generated. So it's up to you to give it input data that will very likely result in useful output data." — Luca Ingianni
> "Tests tell you how this is supposed to work. You can have it write the test first and then evaluate the test. Using tests helps communicate - just like you would to another person - no, it needs to function like this, it needs to have this functionality and behave in this way." — Ryan Torvik
> "I find myself being even more aggressively biased towards test-driven development. While I'm reasonably lenient about the code that the LLM writes, I am very pedantic about the tests that I'm using. I will very thoroughly review them and really tweak them until they have the level of detail that I'm interested in." — Luca Ingianni
> "It's really forcing me to be a better engineer by using the LLM. You have to go and do that system level understanding of the problem space before you actually ask the LLM to do something. This is what responsible people have been saying - this is how you do engineering." — Ryan Torvik
> "I can use LLMs to jumpstart me or bootstrap me from zero to one. Once there's something on the screen that kind of works, I can usually then apply my general programming skill, my general engineering taste to improve it. Getting from that zero to one is now not days or weeks of learning - it's 20 minutes of playing with it." — Jeff Gable
> "LLMs are fantastic at small-scale stuff. They will be wonderful at finding better alternatives for how to implement a certain function. But they are absolutely atrocious at large-scale stuff. They will gleefully mess up your architecture and not even notice because they cannot fit it into their tiny electronic brains." — Luca Ingianni
> "Don't be afraid to try it out. We're all noobs to this. This is the brave noob world of AI exploration. Be curious about it, but also be cautious about it. Don't ever take your hands off the reins. Trust your engineering intuition - even young folks that are just starting, trust your engineering intuition." — Ryan Torvik
> "As the saying goes, good judgment comes from experience. Experience comes from bad judgment. You'll find spectacular ways of messing up - that is how you become a decent engineer. LLMs do not change that. Junior engineers will still be necessary, will still be around, and they will still evolve into senior engineers eventually after they've fallen on their faces enough times." — Luca Ingianni
You can find Jeff at https://jeffgable.com. You can find Luca at https://luca.engineer. Want to join the agile Embedded Slack? Click here. Are you looking for embedded-focused trainings? Head to https://agileembedded.academy/. Ryan Torvik and Luca have started the Embedded AI podcast; check it out at https://embeddedaipodcast.com/
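The hosts' "the human owns the tests, the model proposes the code" loop can be sketched generically. In this minimal Python sketch, `generate_code` is a hypothetical stand-in for whatever LLM API you use (no real vendor API is assumed), and pytest is assumed to be installed:

```python
import subprocess
import tempfile
from pathlib import Path
from typing import Callable

# Human-written, carefully reviewed test: this is the spec the LLM must satisfy.
TEST_CODE = """
from solution import crc8

def test_crc8_known_vector():
    # 0xF4 is the published check value for plain CRC-8 (poly 0x07) of "123456789".
    assert crc8(b"123456789") == 0xF4
"""

def tdd_loop(generate_code: Callable[[str], str], max_attempts: int = 3) -> bool:
    """Ask the model for an implementation until the human-owned tests pass."""
    workdir = Path(tempfile.mkdtemp())
    (workdir / "test_solution.py").write_text(TEST_CODE)
    prompt = f"Write solution.py so these tests pass:\n{TEST_CODE}"
    for _ in range(max_attempts):
        (workdir / "solution.py").write_text(generate_code(prompt))
        result = subprocess.run(
            ["python", "-m", "pytest", "-q"], cwd=workdir,
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # tests green: accept this increment
        prompt += f"\nTests failed:\n{result.stdout}\nTry again."
    return False  # human takes over; do not widen the step
```

The key property matches the quotes above: the tests stay small, human-reviewed, and authoritative, while the generated code is disposable.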
What happens when you combine daily mini-retrospectives, Test-Driven Development in absurdly small steps, and Chess Clock Mobbing? You get a radically different take on collaboration, continuous improvement, and extreme programming - and that's exactly what we explore in this episode of the Mob Mentality Show with guests Kevin Vicencio and Alex Bird. Kevin and Alex are on a team that didn't just mob the canonical way - they experimented with variations and discovered something that seems faster, tighter, and even more collaborative in many ways. From refining how teams use retrospectives to guide daily improvements, to pioneering a new high-intensity form of teaming called "Chess Clock Mobbing," their approach is relentless in its pursuit of learning and team flow. In this conversation, we dig into:
- How daily retros and real-time feedback can evolve your team culture fast
- Why working in smaller TDD steps can paradoxically lead to faster results
- The mechanics and mindset behind Chess Clock Mobbing
- "Evil TDD Ping Pong" as a way to level up test design and shared understanding
- Building a culture of trust, safety, and continuous experimentation
- Techniques for maintaining momentum, engagement, and learning in remote-first dev teams
- The power of absurdly small experiments and the compounding effect of micro-improvements
Whether you're an Agile coach, XP practitioner, software engineer, or just curious about pushing the boundaries of collaborative development, this episode delivers deep insights, real practices, and actionable takeaways you can try with your team tomorrow.
Topics covered in this episode:
* Cyclopts: A CLI library
* The future of Python web services looks GIL-free
* Free-threaded GC
* Polite lazy imports for Python package maintainers
* Extras
* Joke
Watch on YouTube
About the show
Sponsored by us! Support our work through:
* Our courses at Talk Python Training
* The Complete pytest Course
* Patreon Supporters
Connect with the hosts
* Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
* Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
* Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list - we'll never share it.
Michael #1: Cyclopts: A CLI library
A CLI library that fixes 13 annoying issues in Typer. Much of Cyclopts was inspired by the excellent Typer library. Despite its popularity, Typer has some traits that I (and others) find less than ideal. Part of this stems from Typer's age: its first release was in late 2019, soon after Python 3.8's release. Because of this, most of its API was initially designed around assigning proxy default values to function parameters, which made the decorated command functions difficult to use outside of Typer. With the introduction of Annotated in Python 3.9, type hints could be annotated directly, allowing for the removal of these proxy defaults.
The 13:
1. Argument vs Option
2. Positional or Keyword Arguments
3. Choices
4. Default Command
5. Docstring Parsing
6. Decorator Parentheses
7. Optional Lists
8. Keyword Multiple Values
9. Flag Negation
10. Help Defaults
11. Validation
12. Union/Optional Support
13. Adding a Version Flag
Documentation
Brian #2: The future of Python web services looks GIL-free
Giovanni Barillari: "Python 3.14 was released at the beginning of the month. This release was particularly interesting to me because of the improvements on the "free-threaded" variant of the interpreter. Specifically, the two major changes when compared to the free-threaded variant of Python 3.13 are:
* Free-threaded support now reached phase II, meaning it's no longer considered experimental
* The implementation is now completed, meaning that the workarounds introduced in Python 3.13 to make code sound without the GIL are now gone, and the free-threaded implementation now uses the adaptive interpreter as the GIL-enabled variant.
These facts, plus additional optimizations, make the performance penalty now way better, moving from a 35% penalty to a 5-10% difference."
Lots of benchmark data, both ASGI and WSGI. Lots of great thoughts in the "Final Thoughts" section, including:
* "On asynchronous protocols like ASGI, despite the fact the concurrency model doesn't change that much – we shift from one event loop per process, to one event loop per thread – just the fact we no longer need to scale memory allocations just to use more CPU is a massive improvement."
* "… for everybody out there coding a web application in Python: simplifying the concurrency paradigms and the deployment process of such applications is a good thing."
* "… to me the future of Python web services looks GIL-free."
Michael #3: Free-threaded GC
The free-threaded build of Python uses a different garbage collector implementation than the default GIL-enabled build.
The default GC: In the standard CPython build, every object that supports garbage collection (like lists or dictionaries) is part of a per-interpreter, doubly-linked list. The list pointers are contained in a PyGC_Head structure.
The free-threaded GC takes a different approach. It scraps the PyGC_Head structure and the linked list entirely. Instead, it allocates these objects from a special memory heap managed by the "mimalloc" library. This allows the GC to find and iterate over all collectible objects using mimalloc's data structures, without needing to link them together manually. The free-threaded GC does NOT support "generations". By marking all objects reachable from known roots, it can identify a large set of objects that are definitely alive and exclude them from the more expensive cycle-finding part of the GC process. Overall, free-threaded GC collection is between 2 and 12 times faster than in the 3.13 version.
Brian #4: Polite lazy imports for Python package maintainers
Will McGugan commented on a LinkedIn post by Bob Belderbos regarding lazy importing: "I'm excited about this PEP. I wrote a lazy loading mechanism for Textual's widgets. Without it, the entire widget library would be imported even if you needed just one widget. Having this as a core language feature would make me very happy." https://github.com/Textualize/textual/blob/main/src/textual/widgets/__init__.py
Well, I was excited about Will's example for how to, essentially, allow users of your package to import only the part they need, when they need it. So I wrote up my thoughts and an explainer for how this works. Special thanks to Trey Hunner's Every dunder method in Python, which I referenced to understand the difference between __getattr__() and __getattribute__().
Extras
Brian: Started writing a book on Test-Driven Development. Should have an announcement in a week or so. I want to give folks access while I'm writing it, so I'll be opening it up for early access as soon as I have 2-3 chapters ready to review. Sign up for the pythontest newsletter if you'd like to be informed right away when it's ready. Or stay tuned here.
Michael:
* New course!!! Agentic AI Programming for Python
* I'll be on Vanishing Gradients as a guest talking book + AI for data scientists
* OpenAI launches ChatGPT Atlas
* https://github.com/jamesabel/ismain by James Abel
* Pets in PyCharm
Joke: You're absolutely right
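The lazy-loading trick Will describes for Textual's widgets relies on module-level `__getattr__` (PEP 562): the package resolves a public name to its submodule only on first access, so importing the package stays cheap. A minimal sketch of the pattern - the package and widget names are illustrative, not Textual's actual table:

```python
# mypackage/__init__.py
import importlib

# Map public names to the submodules that define them.
_LAZY = {
    "Button": ".button",
    "DataTable": ".data_table",
}

def __getattr__(name: str):
    # Called only when normal attribute lookup fails (PEP 562),
    # so submodules load on first use instead of at package import.
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

def __dir__():
    # Keep tab completion and dir() honest about the lazy names.
    return sorted(list(globals()) + list(_LAZY))
```

With this in place, `from mypackage import Button` triggers only the `.button` submodule import; `DataTable` stays unloaded until someone asks for it.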
AI might write your code, but can you trust it to do it well? Clare Sudbery says: not without a safety net. In this episode, she explains how test-driven development is evolving in the age of AI, and why developers need to slow down, not speed up.
AI Assisted Coding: Beyond AI Code Assistants: How Moldable Development Answers Questions AI Can't, With Tudor Girba
In this BONUS episode, we explore Moldable Development with Tudor Girba, CEO of feenk.com and creator of the Glamorous Toolkit. We dive into why developers spend over 50% of their time reading code—not because they want to, but because they lack the answers they need. Tudor shares how building contextual tools can transform software development, making systems truly understandable and enabling decisions at the speed of thought.
The Hidden System: A Telco's Three-Year Quest
"They had a system consisting of five boxes, but they could only enumerate four. If this is your level of awareness about what is reality around you, you have almost no chance of systematically affecting that reality."
Tudor opens with a striking case study from a telecommunications company that spent three years and hundreds of person-years trying to optimize a data pipeline. Despite massive effort and an executive mandate, the pipeline still took exactly one day to process data—no improvement whatsoever. When Tudor's team investigated, they asked for an architecture diagram. The team drew four boxes representing their system. But when Tudor's team started building tools to mirror this architecture back from the actual code, they discovered something shocking: there was an entire fifth system between the first and second boxes that nobody knew existed. This missing system was likely the bottleneck they'd been trying to optimize for three years.
Why Reading Code Doesn't Scale
"Developers spend more than 50% of their time reading code. The problem is that our systems are typically larger than anyone can read, and by the time you finish reading, the system has already changed many times."
The real issue isn't the time spent reading—it's that reading is the most manual, least scalable way to extract information from systems. When developers read code, they're actually trying to answer questions so they can make decisions. But a 250,000-line system would take one person-month to read at high speed, and the system changes constantly during that time. This means everything you learned yesterday becomes merely a hypothesis, not a reliable answer. The fundamental problem is that we cannot perceive anything in a software system except through tools, yet we've never made how we read code an explicit, optimizable activity.
The Context Problem: Why Generic Tools Fail
"Software is highly contextual, which means we can predict classes of problems people will have, but we cannot predict specific problems people will have."
Tudor draws a powerful parallel with testing. Nobody downloads unit tests from the web and applies them to their system—that would be absurd. Instead, we download test frameworks and build tests contextually for our specific system, encoding what's valuable about our particular business logic. Yet for almost everything else in software development, we download generic tools and expect them to work. This is why teams have tens of thousands of static analysis warnings they ignore, while a single failing test stops deployment. The test encodes contextual value; the generic warning doesn't. Moldable Development extends this principle: every question about your system should be answered by a contextual tool you build for that specific question.
Tools That Mirror Your Mental Model
"Whatever you draw on the whiteboard—that's your mental model.
But as soon as the system exists, we want the system to mirror you back that thing. We make it the job of the system to show our mental model back to us."
When someone draws an architecture diagram on a whiteboard, they're not documenting the system—they're documenting their beliefs about the system. The diagram represents wishes when drawn before the system exists, but beliefs when drawn after. Moldable Development flips this: instead of humans reading code and creating approximations, the system itself generates the visualization directly from the actual code. This eliminates the layers of belief and inference. Whether you're looking at high-level architecture, data lineage across multiple technologies, performance bottlenecks, or business domain structure, you build small tools that extract and present exactly the information you need from the system as it actually is.
The Test-Driven Development Parallel
"Testing was a way to find some kind of class of answers. But there are many other questions we have, and the question is: is there a systematic way to approach arbitrary questions?"
Tudor explains that Moldable Development applies test-driven development principles to all forms of system understanding. Just as we write tests after we understand the functionality we need, we build visualization and analysis tools after we understand the questions we need answered. Both approaches share key characteristics: they're built contextually for the specific system, created by developers during development, and composed of many small tools that collectively model the system. The difference is that TDD focuses on functional decomposition and known expectations, while Moldable Development addresses architecture, security, domain structure, performance, and any other perspective where functional tests aren't the most useful decomposition.
From Thousands of Features to Thousands of Tools
"In my development environment, I don't have features. I have thousands of tools that coexist. Development environments should be focused not on what exists out of the box, but on how quickly you can create a contextual tool."
Traditional development environments offer dozens of features—buttons, plugins, generic views. But Moldable Development environments contain thousands of micro-tools, each answering a specific question about a specific system. The key is making these tools composable and fast to create. Rather than building monolithic tools that try to handle every scenario, you build small inspectors that show one perspective on one object or concept. These inspectors chain together naturally as you drill down from high-level questions to detailed investigations. You might have one inspector showing test failures grouped by exception type, another showing PDF document comparisons, another showing cluster performance, and another showing memory usage—all coexisting and available when needed.
The Real Bottleneck To Learning A System: Time to the Next Question
"Once you do this, you will see that the interesting bottleneck is in the time to the next interesting question. This is by far the most interesting place to be spending energy."
When you commoditize access to answers through contextual tools, something remarkable happens: the bottleneck shifts from getting answers to asking better questions. Right now, because answers come so slowly through manual reading and analysis, we rarely exercise the skill of formulating good questions.
We make decisions based on gut feelings and incomplete data because we can't afford to dig deeper. But when answers arrive at the speed of thought, you can explore, follow hunches, test hypotheses, and develop genuine insight. The conversation between person and system becomes fluid, enabling decision-making based on actual evidence rather than belief.
Moldable Development in Practice: The Lifeware Case
"They are investing in software engineering as their competitive advantage. They have 150,000 tests that would take 10 days to run on a single machine, but they run them in 16 minutes distributed across AWS."
Tudor shares a powerful case study of Lifeware, a life insurance software company that was featured in Kent Beck's "Test-Driven Development by Example" in 2002 with 4,000 tests. Today they have 150,000 tests and have fully adopted Moldable Development as their core practice. Their business model is remarkable: they take data from insurance companies, throw away the old systems, and reverse-engineer new systems by TDD-ing the business—replaying history to produce pixel-identical documents. They've deployed Glamorous Toolkit as their sole development environment across 100+ developers. Their approach demonstrates that Moldable Development isn't just a research concept but a practical competitive advantage that scales to large teams and complex systems.
Why AI Doesn't Solve This Problem
"When you ask AI, you will get exactly the same kind of answers. The answer comes quickly, but you will not know whether this is accurate, whether this represents the whole thing, and you definitely do not have an explanation as to why the answer is the way it is."
In the age of AI code assistants, it might seem like language models could solve the problem of understanding systems. But Tudor explains why they can't. When you ask an AI about your architecture, you get an opinion—fast but unverifiable. Just like asking a developer to draw the architecture on a whiteboard, you receive filtered information without knowing if it's complete or accurate. Moldable Development, by contrast, extracts answers deterministically from the actual system. Software systems have almost no ambiguity in meaning—they're mathematical, not linguistic. We don't need probabilistic interpretation of source code; we need precise extraction and presentation. The tools you build give you not just answers but explanations of how those answers were derived from the actual system state.
Scaling Through Language, Not Features
"You need a new kind of development environment where the goal is to create tools much quicker. You need some sort of language in which to express development environments."
The technical challenge of Moldable Development is enabling thousands of tools to coexist productively. This requires a fundamentally different approach to development environments. Instead of adding features—buttons and menu items that quickly become overwhelming—you need a language for expressing tools and a system for composing them. Glamorous Toolkit demonstrates this through its inspector architecture, where any object can define custom views that appear contextually. These views compose naturally as you navigate through your investigation, reusing earlier perspectives while adding new ones. The environment becomes a medium for tool creation, not just a collection of pre-built features.
Making the Invisible Visible
"We cannot perceive anything in a software system except through a tool.
If that's so important, then the ability to control that shape is probably kind of important too."
Software has no inherent shape—it's just data. Every perception we have of it comes through some tool that renders it into a form we can reason about. This means tools aren't nice-to-have accessories; they're fundamental to our ability to work with software at all. The text editor showing code is a tool. The debugger showing variables is a tool. But these are generic tools built once and reused everywhere, which means they show generic perspectives. What if we could control the shape of our software as easily as we write it? What if the system could show us exactly the view we need for exactly the question we have? That's the promise of Moldable Development.
About Tudor Girba
Tudor Girba is CEO of feenk.com and creator of Moldable Development. He leads the team behind Glamorous Toolkit, a novel IDE that helps developers make sense of complex systems. His work focuses on transforming how teams understand, navigate, and modernize legacy software through custom, insightful tools. Tudor and Simon Wardley are writing a book about Moldable Development, which you can get at https://moldabledevelopment.com/ and read more about in this Medium article. You can link with Tudor Girba on LinkedIn.
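Glamorous Toolkit aside, the "one small tool per question" idea can be approximated in any stack. A throwaway Python sketch of a contextual tool that answers a single question - "which of our modules import what?" - from the code itself rather than from a whiteboard belief (the src path is hypothetical):

```python
import ast
from pathlib import Path

def import_map(src_root: str) -> dict[str, set[str]]:
    """One-question tool: which module imports which, per the actual code."""
    edges: dict[str, set[str]] = {}
    for path in Path(src_root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        deps: set[str] = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        edges[path.stem] = deps
    return edges

if __name__ == "__main__":
    for module, deps in sorted(import_map("src").items()):
        print(f"{module} -> {', '.join(sorted(deps)) or '(nothing)'}")
```

Twenty disposable lines, answering one question deterministically from the system as it actually is - the spirit, if not the machinery, of a moldable inspector.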
Are your accessibility tests missing critical issues? A new open-source framework with Selenium + Axe-core might be your fix. Can AI really make Test-Driven Development 10x faster? Autonomous testing is heating up. Forrester's Q3 2025 report reveals the big winners, risks, and disruptors testers need to know about. Find out in this episode of the Test Guild News Show for the week of Oct 5th. So, grab your favorite cup of coffee or tea, and let's do this.
Time / News / Link:
* 0:24 ZAPTESTAI https://testguild.me/ZAPTESTNEWS
* 1:03 Selenium Axe-Core https://testguild.me/72t9fx
* 1:53 ARIA Notify https://guildlive.io/s/z9i67Bww
* 3:33 Mobile No-Code + AI https://guildlive.io/s/0xt4p5EB
* 4:52 Caesr AI https://guildlive.io/s/vlcASVIH
* 5:35 TDD With AI https://guildlive.io/s/z0Qc3Cnd
* 7:06 Forrester Report https://guildlive.io/s/POq47b9W
* 8:47 DevTools (MCP) https://guildlive.io/s/f9ssW2In
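For the Selenium + Axe-core item, an accessibility check of that general shape can be sketched with Mozilla's axe-selenium-python package. Treat the exact API as an assumption to verify against that project's docs, and note this is not the new framework covered in the story:

```python
# pip install selenium axe-selenium-python
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    axe = Axe(driver)
    axe.inject()          # load the axe-core script into the page
    results = axe.run()   # run the accessibility audit in the browser
    violations = results["violations"]
    for v in violations:
        print(f"{v['id']}: {v['description']} ({len(v['nodes'])} nodes)")
    assert not violations, f"{len(violations)} accessibility violations found"
finally:
    driver.quit()
```

The appeal of pairing axe-core with Selenium is that accessibility becomes a failing assertion in the same suite as your functional checks, rather than a separate manual audit.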
Struggling with technical debt and code quality? Learn how a technical coach can help your team level up. In this episode, Emily Bache, a Samman technical coach, shares her proven method for building better engineering teams through structured learning and collaborative coding. We explore ensemble programming, learning hours, and why AI makes fundamental engineering practices more important than ever.
Key topics discussed:
* The role of a Technical Coach and the Samman Method explained
* How AI amplifies good engineering practices instead of replacing them
* How to use ensemble programming to achieve single-piece flow
* Running effective ensemble sessions and avoiding common failure modes
* Why learning is part of the work, not only a side activity
* Why pull requests should not be the primary tool for mentoring junior developers
* The dangerous trend of "vibe coding" with AI tools
Timestamps:
(00:00) Trailer & Intro
(02:22) Career Turning Points
(03:23) Being Part of Modern Engineering YouTube Channel
(04:27) The Role of a Technical Coach
(05:42) The Impact of AI on Technical Coaching
(08:20) Software Engineering is a Learning Process
(09:55) Optimizing Learning With Samman Method
(11:40) The Samman Method: Ensemble (Mob Programming)
(14:59) The Main Benefit of Ensemble: Single Piece Flow
(17:26) How to Do Ensemble and Avoid Common Failure Modes
(20:27) The Types of Coding to Ensemble On
(22:12) The Importance of Trust, Communication, and Kindness
(23:52) Common Things Development Teams Are Struggling With
(25:37) Prompt Engineering
(27:16) The Samman Method: Learning Hours
(29:08) Learning is Part of the Work
(31:32) The Practice of Learning as a Team
(34:39) The Constraint When Learning from Pull Requests
(36:30) Putting Aside Time for Learning Hours
(39:14) Becoming a Technical Coach
(41:23) How to Measure the Effectiveness of Technical Coaching
(43:52) Danger of AI Assisted Coding
(46:59) The (Still) Important Skills in the AI Era
(49:56) Why We Should Not Refactor Through AI
(52:41) The Samman Method & Technical Coaching Resources
(53:29) 3 Tech Lead Wisdom
(54:56) Finding Mentors for Career Progression
Emily Bache's Bio
Emily Bache is an independent consultant, YouTuber, and Technical Coach. She works with developers, training and coaching effective agile practices like Refactoring and Test-Driven Development. Emily has worked with software development for 25 years, written two books, and teaches courses on platforms including Pluralsight and O'Reilly. A frequent conference speaker, Emily has been invited to keynote at prestigious developer events including EuroPython, Craft, and ACCU. Emily founded the Samman Technical Coaching Society in order to promote technical excellence and support coaches everywhere.
Follow Emily:
* LinkedIn – linkedin.com/in/emilybache
* X – x.com/emilybache
* Mastodon – sw-development-is.social/web/@emilybache
* GitHub – github.com/emilybache
* Website – emilybache.com
* Samman Coaching – sammancoaching.org
* YouTube – youtube.com/@EmilyBache-tech-coach
* Modern Software Engineering – youtube.com/@ModernSoftwareEngineeringYT
Like this episode? Show notes & transcript: techleadjournal.dev/episodes/230. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.
Every enterprise is legit rushing to build AI agents. But there are no instructions. So, what do you do? How do you make sure it works? How do you track reliability and traceability? We dive in and find out.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
* Google Gemini's Veo 3 Video Creation Tool
* Trust & Reliability in AI Agents
* Building Reliable AI Agents Guide
* Agentic AI for Mission-Critical Tasks
* Micro Agentic System Architecture Discussion
* Nondeterministic Software Challenges for Enterprises
* Galileo's Agent Leaderboard Overview
* Multi-Agent Systems: Future Protocols
Timestamps:
00:00 "Building Reliable Agentic AI"
05:23 The Future of Autonomous AI Agents
08:43 Chatbots vs. Agents: Key Differences
10:48 "Galileo Drives Enterprise AI Adoption"
13:24 Utilizing AI in Regulated Industries
18:10 Test-Driven Development for Reliable Agents
22:07 Evolving AI Models and Tools
24:05 "Multi-Agent Systems Revolution"
27:40 Ensuring Reliability in Single Agents
Keywords: Google Gemini, Agentic AI, reliable AI agents, mission-critical tasks, large language models, AI reliability platform, AI implementation, microservices, micro agents, ChuckGPT, AI observability, enterprise applications, nondeterministic software, multi-agentic systems, AI trust, AI authentication, AI communication, AI production, test-driven development, agent EVALS, Hugging Face space, tool calls, expert protocol, MCP protocol, Google A2A protocol, multi-agent systems, agent reliability, real-time prevention, CICD aspect, mission-critical agents, nondeterministic world, reliable software, Galileo, agent leaderboard, AI planning, AI execution, observability feedback, API calls, tool selection quality.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Ready for ROI on GenAI? Go to youreverydayai.com/partner
This interview was recorded for GOTO Unscripted. https://gotopia.tech
Read the full transcription of this interview here.
Nat Pryce - Co-Author of "Growing Object-Oriented Software, Guided by Tests" & "Java to Kotlin"
Duncan McGregor - Co-Author of "Java to Kotlin" & Independent Consultant
RESOURCES
Nat
https://mastodon.social/@natpryce
https://github.com/npryce
https://x.com/natpryce
https://www.linkedin.com/in/natpryce
http://www.natpryce.com
Duncan
https://twitter.com/duncanmcgh
https://www.linkedin.com/in/duncan-mcgregor-a3038b6
https://github.com/dmcg
http://www.oneeyedmen.com
https://java-to-kotlin.dev
Links
https://www.meetup.com/extreme-tuesday-club-xtc
https://guava.dev/releases/21.0/api/docs/com/google/common/base/Function.html
DESCRIPTION
This conversation between Duncan McGregor and Nat Pryce explores the legacy of Nat's co-authored book "Growing Object-Oriented Software, Guided by Tests" (GOOS) and how software development practices have evolved in the past 15 years. They discuss the origins of test-driven development (TDD) within London's Extreme Tuesday Club, the shift from object-oriented to functional programming paradigms, and how changing technology has influenced development approaches. Key topics include outside-in vs bottom-up testing strategies, mock objects, the rise of microservices, and whether modern development practices have actually improved productivity. The conversation provides valuable historical perspective.
Bluesky / Twitter / Instagram / LinkedIn / Facebook
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Josh and James discuss the implications of Test-Driven Development (TDD) in the context of the rapid advancements in AI technology. They explore how AI tools are changing the landscape of software development, the challenges of maintaining quality in fast-paced environments, and the importance of balancing speed with safety. Their conversation also touches on the future of coding, the training of new developers, and the evolving role of testing in ensuring robust software solutions.
Takeaways:
* TDD is gaining renewed importance with the rise of AI.
* AI tools can enhance rapid prototyping but come with risks.
* Maintaining quality in software is crucial as teams move quickly.
* The balance between speed and safety is essential in development.
* Understanding system design and good architecture is foundational for developers.
* AI can assist in writing tests and fixing bugs effectively.
* The complexity of production apps increases with user volume.
* New tools are emerging to support error tracking and testing.
* Training the next generation of developers is vital in an AI-driven landscape.
* Investing in TDD and BDD can set teams apart in software development.
Why do some people call TDD the Holy Grail of backend development? In reality, the approach stirs up far more controversy - some even claim it is the biggest killer of productivity. Today we discuss what Test-Driven Development can do for your process, and also how to apply the approach effectively.
We are joined by Eno Reyes and Matan Grinberg, the co-founders of Factory.ai. They are building droids for autonomous software engineering, handling everything from code generation to incident response for production outages. After raising a $15M Series A from Sequoia, they just released their product in GA! https://factory.ai/ https://x.com/latentspacepod
Chapters
00:00:00 Introductions
00:00:35 Meeting at Langchain Hackathon
00:04:02 Building Factory despite early model limitations
00:06:56 What is Factory AI?
00:08:55 Delegation vs Collaboration in AI Development Tools
00:10:06 Naming Origins of 'Factory' and 'Droids'
00:12:17 Defining Droids: Agent vs Workflow
00:14:34 Live Demo
00:17:37 Enterprise Context and Tool Integration in Droids
00:20:26 Prompting, Clarification, and Agent Communication
00:22:28 Project Understanding and Proactive Context Gathering
00:24:10 Why SWE-Bench Is Dead
00:28:47 Model Fine-tuning and Generalization Challenges
00:31:07 Why Factory is Browser-Based, Not IDE-Based
00:33:51 Test-Driven Development and Agent Verification
00:36:17 Retrieval vs Large Context Windows for Cost Efficiency
00:38:02 Enterprise Metrics: Code Churn and ROI
00:40:48 Executing Large Refactors and Migrations with Droids
00:45:25 Model Speed, Parallelism, and Delegation Bottlenecks
00:50:11 Observability Challenges and Semantic Telemetry
00:53:44 Hiring
00:55:19 Factory's design and branding approach
00:58:34 Closing Thoughts and Future of AI-Native Development
The promise of Test-Driven Development (or TDD) remains unfulfilled. Like many other forms of aspirational development, the practice has fallen victim to countless buzzword cycles. What if the answer is already in our toolbox? This week, host Andrew Zigler sits down with Animesh Mishra, Senior Solutions Engineer at Diffblue, to unpack the gap between TDD's theoretical appeal and its practical challenges. Animesh draws from his extensive experience to explain how deterministic AI can address the key challenges of building trust in AI for testing. These aren't the LLMs of today, but foundational machine learning models that can evaluate all possible branches of a piece of code to write test coverage for it. Imagine writing two years' worth of tests for a legacy codebase… in two hours… with no errors!
If you enjoyed this conversation about the gaps between theory and execution in engineering culture, be sure to check out last week's chat with David Mytton about shift-left adoption by engineering teams.
Check out:
* Translating DevEx to the Board
* Beyond the DORA Frameworks
* Introducing AI-Powered Code Review with gitStream
Follow the hosts:
* Follow Ben
* Follow Andrew
Follow today's guest(s):
* www.diffblue.com
* X: diffbluehq
* LinkedIn: Diffblue
* Animesh Mishra
Support the show:
* Subscribe to our Substack
* Leave us a review
* Subscribe on YouTube
* Follow us on Twitter or LinkedIn
Offers:
* Learn about Continuous Merge with gitStream
* Get your DORA Metrics free forever
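Diffblue's tooling targets Java, but "a test per branch" is easy to picture in miniature. A hand-written Python illustration of what branch-complete unit tests look like for a tiny function (this is not Diffblue output, just the shape of the goal):

```python
def classify(temp_c: float) -> str:
    if temp_c < 0:
        return "freezing"
    elif temp_c < 25:
        return "mild"
    return "hot"

# One test per branch, so every path through the function is exercised.
def test_freezing_branch():
    assert classify(-5) == "freezing"

def test_mild_branch():
    assert classify(10) == "mild"

def test_hot_branch():
    assert classify(30) == "hot"

def test_boundaries():
    # Branch conditions at their edges: 0 is not < 0; 25 is not < 25.
    assert classify(0) == "mild"
    assert classify(25) == "hot"
```

Writing these by hand for three branches is trivial; the pitch in the episode is doing it exhaustively, including the boundary cases, across a legacy codebase with thousands of branches.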
In episode 16 of How It's Tested, Eden speaks with Jon Jagger, Director of Software at Kosli. The conversation dives into Jon's journey of creating Cyber-Dojo, his insights on test-driven development (TDD), and how software testing practices have evolved over the years. They also discuss Jon's current role at Kosli, the philosophy behind effective testing, and how regulated industries like banking can benefit from modern compliance practices.
What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
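For a taste of how BDD layers on top of TDD, here is the classic Gherkin-plus-step-definitions shape, sketched with the behave library (a common Python BDD tool; the scenario itself is invented):

```python
# features/withdraw.feature (Gherkin, readable by non-programmers):
#
#   Feature: Cash withdrawal
#     Scenario: Withdraw within balance
#       Given an account with balance 100
#       When I withdraw 30
#       Then the balance is 70
#
# features/steps/withdraw_steps.py:
from behave import given, when, then

@given("an account with balance {amount:d}")
def step_account(context, amount):
    context.balance = amount

@when("I withdraw {amount:d}")
def step_withdraw(context, amount):
    context.balance -= amount

@then("the balance is {amount:d}")
def step_check(context, amount):
    assert context.balance == amount
```

The division of labor is the point: the feature file carries the requirement in stakeholder language, while the step definitions (and any TDD-style unit tests beneath them) carry the implementation detail.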
Exploring Rust for Embedded Systems with Philip Markgraf
In this episode of the Agile Embedded Podcast, hosts Jeff Gable and Luca Ingianni are joined by Philip Markgraf, an experienced software developer and technical leader, to discuss the use of Rust in embedded systems. Philip shares his background in C/C++ development, his journey with Rust, and the advantages he discovered while using it in a large development project. The conversation touches on memory safety, efficient resource management, the benefits of Rust's type system, and the supportive Rust community. They also explore the practical considerations for adopting Rust, including its tooling, ecosystem, and applicability to Agile development. The episode concludes with Philip offering resources for learning Rust and connecting with its community.
00:00 Introduction and Guest Welcome
00:26 Philip's Journey with Rust
01:01 The Evolution of Programming Languages
02:27 Evaluating Programming Languages for Embedded Systems
06:13 Adopting Rust for a Green Energy Project
08:57 Benefits of Using Rust
11:24 Rust's Memory Management and Borrow Checker
15:50 Comparing Rust and C/C++
19:32 Industry Trends and Future of Rust
22:30 Rust in Cloud Computing and Embedded Systems
23:11 Vendor-Supplied Driver Support and ARM Processors
24:09 Open Source Hardware Abstraction Libraries
25:52 Advantages of Rust's Memory Model
29:32 Test-Driven Development in Rust
30:35 Refactoring and Tooling in Rust
31:14 Simplicity and Coding Standards in Rust
32:14 Error Messages and Linting Tools
33:32 Sustainable Pace and Developer Satisfaction
36:15 Adoption and Transition to Rust
39:37 Hiring Rust Developers
42:23 Conclusion and Resources
Resources
* Phil's LinkedIn
* The Rust Language
* Rust chat rooms (at the Awesome Embedded Rust Resources List)
* The Ferrocene functional-safety qualified Rust compiler
You can find Jeff at https://jeffgable.com. You can find Luca at https://luca.engineer. Want to join the agile Embedded Slack? Click here
Kent Beck is an original signer of the Agile Manifesto, author of the Extreme Programming book series, rediscoverer of Test-Driven Development, and an inspiring Keynote Speaker. I read his TDD book 20 years ago. Topics of Discussion: [3:46] What led Kent to extreme programming? [7:52] What critical practices have stood the test of time? [10:58] The role of software design in Agile Development. [13:11] The inspiration behind Tidy First? [16:16] Why software design is both a critical skill and an exercise in human relationships. [22:05] What is “normalizing symmetry”? [25:04] Empirical design. [28:09] Design changes tend to be reversible. [30:41] Experimentation with the GPT phase of AI on publications. [35:13] Advice for young developers and programmers. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! KentBeck.com Tidy First? Test-Driven Development Extreme Programming Explained Implementation Patterns Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
Carter Morgan and Nathan Toups read and discuss the first half of "Working Effectively with Legacy Code" by Michael Feathers. Join them as they reflect on dependency inversion, the importance of interfaces, and continue their never-ending debate on the pros and cons of Test-Driven Development! (The audio gets a little de-synced in the last three minutes. Carter isn't talking over Nathan on purpose!) Chapter markers: 00:00 Intro 04:51 Thoughts on the book 10:54 Defining Legacy Code 21:53 Quick Break: Pull Requests 22:38 How to change software 44:30 Quick Break: CI/CD 45:15 Testing Legacy Code 1:15:10 Quick Break: Linting 1:16:01 Closing Thoughts
Testing is not just testing - a deep dive with Sebastian Bergmann. Many software developers know about unit tests. Some write unit tests during development. Few truly practice test-driven development. But unit testing is only where the whole topic of testing begins. What about static testing, non-functional testing, white-box testing, end-to-end testing, dynamic testing, or integration testing? And have you ever heard of mutation testing? Quite a lot of buzzwords. And we still haven't answered the questions of what good tests actually are, how many tests are enough, how AI can help us write better tests, or whether testing is a modern invention or has played a role since the dawn of the programming age. In this episode, Sebastian Bergmann gives us a full tour of the topic of testing. Bonus: the Amiga scene lives on. Quick feedback on the episode:
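Since mutation testing may be the least familiar buzzword in that list, here is a tiny invented Python illustration (not from the episode): a mutation tool deliberately flips an operator in the code under test, and if the test suite still passes, the surviving "mutant" exposes a weak test.

    # Code under test.
    def is_adult(age):
        return age >= 18

    # A mutation tool might flip ">=" to ">". This test would still
    # pass against that mutant, so the mutant survives:
    def test_is_adult_weak():
        assert is_adult(30)

    # The boundary case kills the mutant: under the ">" mutation,
    # is_adult(18) returns False and this test fails.
    def test_is_adult_boundary():
        assert is_adult(18)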
Go 1.22.4 & 1.21.11 coming Tuesday, June 4. Community events: Golang Atlanta meetup, June 13; Cup o' Go Meetup in Amsterdam, June 19; Golang Tilburg meetup, June 20. Proposal accepted and implemented: new iterator functions in the maps package coming in 1.23. Reddit: What software shouldn't you write in Go? Blog: Blazingly Fast Shadow Stacks for Go by Felix Geisendörfer. Blog: Abusing Go's infrastructure by Pedro Vilaça. Ad break. Episode 15, interview with Adelina Simion about her book, Test-Driven Development in Go. Interview with Riccardo Pinosio: Hugging Face, hugot on GitHub, ONNX, Knights Analytics.
In this episode, we dive into Codium, an AI-powered coding platform designed to assist developers throughout the software development lifecycle, especially in testing, code review, and documentation. Dedy Kredo, one of Codium's co-founders, explains the unique features and benefits of the platform, comparing it to other tools like GitHub Copilot. The discussion also touches on Codium's adaptability for test-driven development and its flexible deployment options, highlighting the importance of security and configuration. Additionally, the significance of the Intel Ignite startup program and the impact of AI hype on Codium's rapid growth are discussed. Listeners will gain insights into Codium's open-core model and open-source projects, including the Alpha Codium research project. 00:00 Introduction 00:13 What is Codium? 01:35 Comparison with Other AI Coding Tools 03:01 Test-Driven Development and Codium 05:40 Customization and Configuration 08:17 Deployment Options and Security 11:11 Intel Ignite Program Experience 13:45 Impact of AI Hype on Business 17:02 AI-Assisted Development and Semi-Automation 17:43 Improving Code Quality and Productivity 18:33 Challenges and Opportunities in AI for Software Development 20:27 Adopting AI Tools in Development Teams 24:07 Open Source Projects and Community Engagement 28:11 Conclusion and Future Prospects Guest: Dedy Kredo is the Co-Founder and Chief Product Officer of CodiumAI, leading the product and engineering teams to empower developers to build software faster and more accurately through the use of artificial and human intelligence. Before founding CodiumAI, he served as VP of Customer Facing Data Science at Explorium, where he built and led a talented data science team and played a key role in the company's growth from seed to series C. Previously, he was the founder of an online marketing startup, growing it from a bootstrapped venture to millions in revenue. Before that, he spent seven years in Colorado and California as a product line manager at VMware's management business unit. During this time, he worked closely with Fortune 500 companies and successfully launched several new products to market.
Kent Beck, Chief Scientist at Mechanical Orchard, and inventor of Extreme Programming and Test-Driven Development, joins SE Radio host Giovanni Asproni for a conversation on software design based on his latest book "Tidy First?". The episode starts with exploring the reasons for writing the book, and introducing the concepts of tidying, cohesion, and coupling. It continues with a conversation about software design, and the impact of tidyings. Then Kent and Giovanni discuss how to balance design and code quality decisions with cost, value delivered, and other important aspects. The episode ends with some considerations on the impact of Artificial Intelligence on the software developer's job. Brought to you by IEEE Software and IEEE Computer Society.
Adam presents TDD as skill zero, the one that unlocks all the others. Want more?
Today we are talking about Test Driven Development, why it's important, and how it improves development with guest Alexey Korepov. We'll also cover Test Helpers as our module of the week. For show notes visit: www.talkingDrupal.com/446 Topics What does the term Test Driven Development (TDD) mean Does Drupal make use of TDD What makes TDD different from other methods of development Do you have to change your way of thinking What are some good resources to learn TDD Do you have any pointers for teams looking to get started Are certain kinds of projects better suited to TDD How have dev teams adapted to TDD Any advice on environment setup Any special tools Resources Open telemetry QA Engineer Kent Beck Test Driven Development: By Example Needs tests tag Local unit tests PHPUnit Guests Alexey Korepov - korepov.pro Murz Hosts Nic Laflin - nLighteneddevelopment.com nicxvan Martin Anderson-Clutz - mandclu Matt Glaman - mglaman.dev mglaman MOTW Correspondent Martin Anderson-Clutz - mandclu Brief description: Have you ever wanted an API that could dramatically simplify the process of writing Drupal unit tests? There's a module for that. Module name/project name: Test Helpers Brief history How old: created in Sep 2022 by today's guest, Alexey Korepov Versions available: 1.3.0, compatible with versions of Drupal 9.4 or newer, right up to Drupal 11 Maintainership Actively maintained, latest release less than 3 months ago Security coverage Test coverage, would be ironic if it didn't API documentation is available, linked from the project page Number of open issues: 2 open issues, which are actually feature requests Usage stats: 5 sites officially, but modules or sites can leverage Test Helpers without enabling it, and this usage is recommended, so the number is actually higher Module features and usage Provides a new container that automated tests can leverage to perform common tasks with much less code. For example, you can create a user or a node with a single line of code. You can also mock more complex operations like an entityQuery or loadMultiple call, again with a single line of code. Traditionally, writing unit tests is more complicated because by design they run without fully bootstrapping Drupal. That means that your test needs to mock functions or services in the code you're testing, which can result in unit tests being much longer than the code they're testing. Test Helpers also allows your tests to leverage existing mocks and stubs for popular services. The project page also links to the recording and slides for a talk Alexey gave about Test Helpers at DrupalCon Pittsburgh last year, if you want to do a deeper dive.
In this podcast transcript, Rob and Michael delve into the pivotal topic of defining requirements in software development. They emphasize the significance of clear and detailed requirements, underscoring the potential pitfalls of vague or incomplete requirements. Throughout the conversation, they provide insights, anecdotes, and practical strategies for navigating the complexities of requirement gathering and management. Let's dive into the key points discussed by Rob and Michael.

Defining Requirements

The Importance of Clear Communication
Rob and Michael stress the importance of clear communication in understanding and defining project requirements. They highlight the dangers of assumptions and ambiguity, advocating for a thorough exploration of the client's needs and expectations. Drawing from their experience, they emphasize the need for developers to engage in detailed discussions with clients to ensure alignment on project goals and outcomes.

Understanding the End Goal
A key topic we discuss is the necessity of understanding a project's end goal before delving into its requirements. Rob and Michael illustrate the importance of clarifying objectives and envisioning the desired outcome using the tree swing example. This requires us to ask probing questions and seek clarity on client expectations. By doing so, developers can ensure that the final product meets the intended purpose.

Agile Approach to Requirement Management
The conversation touches upon the agile approach to requirement management, emphasizing the iterative and adaptable nature of the process. Rob and Michael advocate for regular review and refinement of project requirements, especially in dynamic environments where priorities and circumstances may change over time. They underscore the value of maintaining a flexible backlog and continuously reassessing the relevance and feasibility of pending tasks.

Test-Driven Development and Quality Assurance
The discussion expands to encompass the role of test-driven development (TDD) and quality assurance (QA) in requirement validation. Rob and Michael highlight the importance of thinking critically about user interactions and anticipated outcomes when refining project requirements. They advocate for a proactive approach to testing and validation, leveraging QA principles to uncover potential issues and ensure the robustness of the final product.

In conclusion, Rob and Michael emphasize the ongoing nature of requirement management and the importance of continuous improvement. They encourage developers to adopt a proactive mindset, actively engaging with clients and stakeholders to refine project requirements iteratively. By prioritizing clear communication, understanding the end goal, and embracing agile practices, developers can navigate the challenges of requirement gathering and deliver successful outcomes for their clients.

Final Thoughts on Defining Requirements
As Rob and Michael wrap up their discussion, they invite listeners to engage with their podcast and provide feedback or topic suggestions at info@develpreneur.com. They reiterate their commitment to delivering valuable insights and practical advice for developers, underscoring the collaborative nature of their community. With a focus on continuous learning and improvement, they invite listeners to join them on their journey of building better developers. By incorporating these key points and insights, developers can enhance their approach to requirement management and contribute to the success of their projects. Whether adopting agile methodologies, leveraging TDD principles, or prioritizing clear communication, a proactive and iterative approach to requirement definition is essential for delivering high-quality software solutions.

Additional Resources for Defining Requirements: Setting Realistic Expectations In Development, Creating Your Product Requirements, Changing Requirements – Welcome Them For Competitive Advantage, Behind the Scenes Podcast Video
Test Driven Development Demo with PyTest

TDD (discussed in hpr4075):
- Write a new test and run it. It should fail.
- Write the minimal code that will pass the test.
- Optionally, refactor the code while ensuring the tests continue to pass.

PyTest:
- A framework for writing software tests with Python.
- Normally used to test Python projects, but it can test any software that Python can launch and read output from.
- If you can write Python, you can write tests in PyTest.
- python assert - check that something is true.
- Test Discovery: files named test*, functions named test*.

Demo Project:
- A trivial app as a demo: print a summary of the latest HPR episode (Title, Host, Date, Audio File).
- How do we get the latest show data? The RSS feed, a feed parser, and the feed URL.

The pytest setup. The Python script we want to test will be named hpr_info.py. The test will be in a file named test_hpr_info.py.

test_hpr_info.py:

    import hpr_info

Run pytest:

    ModuleNotFoundError: No module named 'hpr_info'

We have written our first failing test. The minimum code to get pytest to pass is to create an empty file:

    touch hpr_info.py

Run pytest again:

    pytest
    ============================= test session starts ==============================
    platform linux -- Python 3.11.8, pytest-7.4.4, pluggy-1.4.0
    rootdir: /tmp/Demo
    collected 0 items

What just happened:
- We created a file named test_hpr_info.py with a single line to import hpr_info.
- We ran pytest and it failed because hpr_info.py did not exist.
- We created hpr_info.py and pytest ran without an error.

This means we confirmed:
- Pytest found the file named test_hpr_info.py and tried to execute its tests.
- The import line is looking for a file named hpr_info.py.

Python Assert. In Python, assert tests if a statement is true, for example: assert 1 == 1. In pytest, we can use assert to check that a function returns a specific value: assert module.function() == "Desired Output". Without a comparison operator, we can also use assert to check that something exists without specifying a specific value: assert dictionary.key

Adding a Test. import hpr_info will allow us to test functions inside hpr_info.py. We can reference functions inside hpr_info.py by prepending the name with hpr_info., for example hpr_info.HPR_FEED. The first step in finding the latest HPR episode is fetching a copy of the feed. Let's add a test to make sure the HPR feed is defined:

    import hpr_info

    def test_hpr_feed_url():
        assert hpr_info.HPR_FEED == "https://hackerpublicradio.org/hpr_ogg_rss.php"

Let's run pytest again, and we get the error AttributeError: module 'hpr_info' has no attribute 'HPR_FEED'. So let's add just enough code to hpr_info.py to get the test to pass:

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

Run pytest again and we get 1 passed, indicating that pytest found 1 test, which passed. Hooray, we are doing TDD!

Next Test - Parsing the feed. Let's plan a function that pulls the HPR feed and returns the feed data. We can test that the result of fetching the feed is an HTTP 200:

    def test_get_show_data():
        show_data = hpr_info.get_show_data()
        assert show_data.status == 200

Now when we run pytest we get 1 failed, 1 passed, and we can see the error AttributeError: module 'hpr_info' has no attribute 'get_show_data'. Let's write the code to get the new test to pass. We will use the feedparser Python module to make it easier to parse the RSS feed.

After we add the import and the new function, hpr_info.py looks like this:

    import feedparser

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

    def get_show_data():
        showdata = feedparser.parse(HPR_FEED)
        return showdata

Let's run pytest again. When I have more than one test, I like to add the -v flag so I can see each test as it runs:

    test_hpr_info.py::test_hpr_feed_url PASSED [ 50%]
    test_hpr_info.py::test_get_show_data PASSED [100%]

Next Test - Get the most recent episode from the feed. Now that we have the feed, let's test getting the first episode. feedparser entries are dictionaries, so let's test what the function returns to make sure it looks like an RSS feed entry:

    def test_get_latest_entry():
        latest_entry = hpr_info.get_latest_entry()
        assert latest_entry["title"]
        assert latest_entry["published"]

After we verify that the test fails, we can write the code that returns the newest entry data in hpr_info.py, and pytest -v will show 3 passing tests:

    def get_latest_entry():
        showdata = get_show_data()
        return showdata["entries"][0]

Final Test. Let's test a function to see if it returns the values we want to print. We don't test for specific values, just that the data exists:

    def test_get_entry_data():
        entry_data = hpr_info.get_entry_data(hpr_info.get_latest_entry())
        assert entry_data["title"]
        assert entry_data["host"]
        assert entry_data["published"]
        assert entry_data["file"]

And then the code to get the test to pass:

    def get_entry_data(entry):
        for link in entry["links"]:
            if link.get("rel") == "enclosure":
                enclosure = link.get("href")
        return {
            "title": entry["title"],
            "host": entry["authors"][0]["name"],
            "published": entry["published"],
            "file": enclosure,
        }

Finish the HPR info script. Now that we have tested that we can get all the info we want from the most recent episode, let's add the last bit of code to hpr_info.py to print the episode info:

    if __name__ == "__main__":
        most_recent_show = get_entry_data(get_latest_entry())
        print()
        print("Most Recent HPR Episode")
        for x in most_recent_show:
            print(f"{x}: {most_recent_show.get(x)}")

if __name__ == "__main__": ensures the code inside this block will only run when the script is called directly, and not when it is imported by test_hpr_info.py.

Summary:
- TDD is a programming method where you write tests prior to writing code.
- TDD forces me to write smaller functions and more modular code.
- Link to HPR info script and tests - TODO
- Additional tests to add: check that the date is the most recent weekday (see the sketch after this entry), check that the host is listed on the correspondents page, check others.
- Project Files - https://gitlab.com/norrist/hpr-pytest-demo
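One of those additional tests might look like the following sketch: hypothetical code, not from the episode, that approximates the weekday check and assumes the feed entry carries feedparser's published_parsed field (a time.struct_time).

    import hpr_info

    def test_latest_entry_published_on_weekday():
        entry = hpr_info.get_latest_entry()
        # feedparser exposes a parsed time.struct_time alongside the
        # raw "published" string; tm_wday is 0 (Monday) .. 6 (Sunday).
        published = entry["published_parsed"]
        assert published.tm_wday < 5  # Monday through Friday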
The Elixir Wizards Podcast is back with Season 12 Office Hours, where we talk with the internal SmartLogic team about the stages of the software development lifecycle. For the season premiere, "Testing 1, 2, 3," Joel Meador and Charles Suggs join us to discuss the nuances of software testing. In this episode, we discuss everything from testing philosophies to test driven development (TDD), integration, and end-user testing. Our guests share real-world experiences that highlight the benefits of thorough testing, challenges like test maintenance, and problem-solving for complex production environments. Key topics discussed in this episode: How to find a balance that's cost-effective and practical while testing Balancing test coverage and development speed The importance of clear test plans and goals So many tests: Unit testing, integration testing, acceptance testing, penetration testing, automated vs. manual testing Agile vs. Waterfall methodologies Writing readable and maintainable tests Testing edge cases and unexpected scenarios Testing as a form of documentation and communication Advice for developers looking to improve testing practices Continuous integration and deployment Links mentioned: https://smartlogic.io/ Watch this episode on YouTube! youtu.be/unx5AIvSdc Bob Martin “Clean Code” videos - “Uncle Bob”: http://cleancoder.com/ JUnit 5 Testing for Java and the JVM https://junit.org/junit5/ ExUnit Testing for Elixir https://hexdocs.pm/exunit/ExUnit.html Code-Level Testing of Smalltalk Applications https://www.cs.ubc.ca/~murphy/stworkshop/28-7.html Agile Manifesto https://agilemanifesto.org/ Old Man Yells at Cloud https://i.kym-cdn.com/entries/icons/original/000/019/304/old.jpg TDD: Test Driven Development https://www.agilealliance.org/glossary/tdd/ Perl Programming Language https://www.perl.org/ Protractor Test Framework for Angular and AngularJS protractortest.org/#/ Waterfall Project Management https://business.adobe.com/blog/basics/waterfall CodeSync Leveling up at Bleacher Report A cautionary tale - PETER HASTIE https://www.youtube.com/watch?v=P4SzZCwB8B4 Mix ecto.dump https://hexdocs.pm/ectosql/Mix.Tasks.Ecto.Dump.html Apache JMeter Load Testing in Java https://jmeter.apache.org/ Pentest Tools Collection - Penetration Testing https://github.com/arch3rPro/PentestTools The Road to 2 Million Websocket Connections in Phoenix https://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections Donate to Miami Indians of Indiana https://www.miamiindians.org/take-action Joel Meador on Tumblr https://joelmeador.tumblr.com/ Special Guests: Charles Suggs and Joel Meador.
Look we all know there are things to talk about but we recorded this last week before any of that stuff happened!
Kent Beck is an original signer of the Agile Manifesto, author of the Extreme Programming book series, rediscoverer of Test-Driven Development, and an inspiring Keynote Speaker. I read his TDD book 20 years ago. Topics of Discussion: [4:06] What led Kent into extreme programming, and realizing that technical mastery alone is not enough for project success. [6:24] The significance of extreme programming. [9:15] The Agile Manifesto. [10:46] The importance of taking responsibility seriously. [14:06] What was the inspiration behind Tidy First? [16:27] Why software design is an important skill. [17:31] The human aspect dominates in design. [19:40] You can make large changes in small safe steps. [23:09] Normalizing symmetry. [30:17] Preserving flexibility in design through empirical and reversible changes rather than speculative or reactive design. [31:51] Kent's experimentation with the GPT phase of AI on publications. [32:11] Rent-A-Kent to get better answers around software development. [37:19] Advice for young programmers. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! Rent-A-Kent Tidy First? by Kent Beck Test Driven Development, by Kent Beck Extreme Programming Explained, by Kent Beck with Cynthia Andres Implementation Patterns, by Kent Beck Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
In 2002, Kent Beck released a book called "Test Driven Development by Example". In December of 2023, Kent wrote an article called "Canon TDD". With Kent's permission, this episode contains the full content of the article. Brian's commentary is saved for a followup episode. Links: Canon TDD, Test Driven Development by Example. The Complete pytest Course: Level up your testing skills and save time during coding and maintenance. Check out courses.pythontest.com
I joined Julie and Andrew from Ruby For All to talk about Test Driven Development, attending conferences, and using TDD as a thinking tool. This episode was recorded at RubyConf in San Diego. Show Notes [Ruby For All] - https://www.rubyforall.com/ Sponsors Honeybadger (https://www.honeybadger.io/) As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!
Test-driven development (TDD), or in Spanish "desarrollo guiado por pruebas", is a programming approach used during software development in which unit tests are written before the code. --- Support this podcast: https://podcasters.spotify.com/pod/show/fernando-her85/support
Ralph Hempel spoke with us about the development of Lego Mindstorms from hacking the initial interface to running Debian Linux as well as programming Mindstorms in Python. Happy 25th birthday to Lego Mindstorms! Pybricks is a MicroPython based coding environment that works across all Lego PoweredUp hubs and on the latest Mindstorms elements. The creators are David Lechner and Laurens Valk. Ralph was the first person to boot a full Debian Linux distro on the brick, see EV3Dev, a Debian Linux for Lego Mindstorms EV3. BrickLink was originally a site for third party resellers of new and used Lego sets and elements. The site was purchased by the Lego Group a few years ago. It's still a great place to buy individual parts - for example a 4 port PoweredUp hub to run the new PyBricks on :-) ReBrickable is a site dedicated to taking off-the-shelf Lego sets, and creating something new with the set. In particular see the MOCs Designed by LUCAMOCS, fantastic Technic vehicles as well as interesting designs for vehicle subsystems. Yoshihito ISOGAWA - YouTube is an absolute genius at coming up with practical applications of new LEGO Elements. Ralph recommends his books as “awesome to read”. LEGO uses 18 Cucumbers to build real Log House Ralph highly recommends Test Driven Development for Embedded C by James Grenning (who has been on the show: 270: Broccoli is Good Too, 109: Resurrection of Extreme Programming, and 30: Eventually Lightning Strikes). Origami Simulator and Elecia's origami generating python code on github Transcript Nordic Semiconductor empowers wireless innovation, by providing hardware, software, tools and services that allow developers to create the IoT products of tomorrow. Learn more about Nordic Semiconductor at nordicsemi.com, check out the DevAcademy at academy.nordicsemi.com and interact with the Nordic Devzone community at devzone.nordicsemi.com.
On today's episode, Elixir Wizards Owen Bickford and Dan Ivovich compare notes on building web applications with Elixir and the Phoenix Framework versus Ruby on Rails. They discuss the history of both frameworks, key differences in architecture and approach, and deciding which programming language to use when starting a project. Both Phoenix and Rails are robust frameworks that enable developers to build high-quality web apps—Phoenix leverages functional programming in Elixir and Erlang's networking for real-time communication. Rails follows object-oriented principles and has a vast ecosystem of plug-ins. For data-heavy CRUD apps, Phoenix's immutable data pipelines provide some advantages. Developers can build great web apps with either Phoenix or Rails. Phoenix may have a slight edge for new projects based on its functional approach, built-in real-time features like LiveView, and ability to scale efficiently. But, choosing the right tech stack depends heavily on the app's specific requirements and the team's existing skills. Topics discussed in this episode: History and evolution of Phoenix Framework and Ruby on Rails Default project structure and code organization preferences in each framework Comparing object-oriented vs functional programming paradigms CRUD app development and interaction with databases Live reloading capabilities in Phoenix LiveView vs Rails Turbolinks Leveraging WebSockets for real-time UI updates Testing frameworks like RSpec, Cucumber, Wallaby, and Capybara Dependency management and size of standard libraries Scalability and distribution across nodes Readability and approachability of object-oriented code Immutability and data pipelines in functional programming Types, specs, and static analysis with Dialyzer Monkey patching in Ruby vs extensible core language in Elixir Factors to consider when choosing between frameworks Experience training new developers on Phoenix and Rails Community influences on coding styles Real-world project examples and refactoring approaches Deployment and dev ops differences Popularity and adoption curves of both frameworks Ongoing research into improving Phoenix and Rails Links Mentioned in this Episode: SmartLogic.io (https://smartlogic.io/) Dan's LinkedIn (https://www.linkedin.com/in/divovich/) Owen's LinkedIn (https://www.linkedin.com/in/owen-bickford-8b6b1523a/) Ruby https://www.ruby-lang.org/en/ Rails https://rubyonrails.org/ Sams Teach Yourself Ruby in 21 Days (https://www.overdrive.com/media/56304/sams-teach-yourself-ruby-in-21-days) Learn Ruby in 7 Days (https://www.thriftbooks.com/w/learn-ruby-in-7-days---color-print---ruby-tutorial-for-guaranteed-quick-learning-ruby-guide-with-many-practical-examples-this-ruby-programming-book--to-build-real-life-software-projects/18539364/#edition=19727339&idiq=25678249) Build Your Own Ruby on Rails Web Applications (https://www.thriftbooks.com/w/build-your-own-ruby-on-rails-web-applications_patrick-lenz/725256/item/2315989/?utm_source=google&utm_medium=cpc&utm_campaign=low_vol_backlist_standard_shopping_customer_acquisition&utm_adgroup=&utm_term=&utm_content=593118743925&gad_source=1&gclid=CjwKCAiA1MCrBhAoEiwAC2d64aQyFawuU3znN0VFgGyjR0I-0vrXlseIvht0QPOqx4DjKjdpgjCMZhoC6PcQAvD_BwE#idiq=2315989&edition=3380836) Django https://github.com/django Sidekiq https://github.com/sidekiq Kafka https://kafka.apache.org/ Phoenix Framework https://www.phoenixframework.org/ Phoenix LiveView https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html#content Flask https://flask.palletsprojects.com/en/3.0.x/ 
WebSockets API https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API WebSocket connection for Phoenix https://github.com/phoenixframework/websock Morph Dom https://github.com/patrick-steele-idem/morphdom Turbolinks https://github.com/turbolinks Ecto https://github.com/elixir-ecto Capybara Testing Framework https://teamcapybara.github.io/capybara/ Wallaby Testing Framework https://wallabyjs.com/ Cucumber Testing Framework https://cucumber.io/ RSpec https://rspec.info/
John's Board game has 6 days remaining on Kickstarter! I absolutely insist that you buy it. Colossi on Kickstarter John's Twitter Catacombian Games
Test Driven Development. Red, Green, Refactor. Do we have to do the refactor part? Does the refactor at the end include tests? Or can I refactor the tests at any time? Why is refactor at the end? This episode talks through these questions with an example. Sponsored by PyCharm Pro: use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm. First 10 to sign up this month get a free month of AI Assistant. See how easy it is to run pytest from PyCharm at pythontest.com/pycharm. The Complete pytest Course: for the fastest way to learn pytest, go to courses.pythontest.com, whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
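To make the cycle concrete, here is a minimal red/green/refactor sketch in pytest (an invented example, not from the episode): the test comes first and fails, a quick first version makes it pass, and the refactor step then cleans the code up while the test keeps guarding the behavior.

    # Red: write the failing test first.
    def test_initials():
        assert initials("Ada Lovelace") == "A.L."

    # Green: the quickest code that passes, however clumsy.
    #   def initials(name):
    #       first, last = name.split(" ")
    #       return first[0] + "." + last[0] + "."

    # Refactor: same behavior, clearer code; the test stays green.
    def initials(name):
        return ".".join(part[0] for part in name.split()) + "."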
Daniel and Caleb wax nostalgic about the various eras of Laravel, their long and eventful friendship, Laracon talk nerves, and a tentative plan for WIRECON.
On this episode, Wisen Tanasa joins me to talk Test Driven Development. We discuss why TDD is intuitive, translating specifications into tests, the balance between design and execution, developing a walking skeleton, the value of learning design principles and UX, minimizing the need to use willpower with positive feedback loops, and understanding what TDD is. Growing Object-Oriented Software Guided by Tests by Steve Freeman and Nat Pryce. The Non-Designer's Design Book by Robin Williams. Wisen Tanasa on Twitter. Wisen Tanasa on LinkedIn. Wisen Tanasa's Newsletter Quantum Steps.
Thank you to this week's sponsor, Koyeb! Go 1.21.2 & 1.20.9 released. Upgrade yesterday!
Lorraine Chambers: Using Experiments To Drive Agile Change, Lessons from a Test Automation Initiative Read the full Show Notes and search through the world's largest audio library on Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. This story starts with an agile transformation featuring a shift-left initiative. The team faced challenges in implementing test automation due to unclear policies and time allocation. Recognizing the challenges faced by the teams, Lorraine engaged with managers and leaders, advocating to give teams the support they needed. Through that, it was possible to help the teams with guidance on Test-Driven Development and support in using an internal testing tool. When it comes to helping teams adopt new practices, Lorraine advises identifying policy and decision-makers, gathering relevant data, and proposing time-limited experiments for major changes, culminating in retrospective evaluations. As Scrum Masters we work with change continuously! Do you have your own change framework that provides the guidance and cues you need when working with change? The Lean Change Management framework is a fully defined, lean-startup inspired change framework that can be used as the backbone of any change process! You can buy Lean Change Management the book at Amazon. Also available in French, Spanish, German and Portuguese. About Lorraine Chambers Lorraine's vision of excellence is summed up in the words of philosopher Lao Tzu -- "A leader is best when people barely know he exists ... " She's held several roles in the Fintech industry, including Product Owner and Quality Assurance. She's a native New Yorker who loves travel, music and museums. You can link with Lorraine Chambers on LinkedIn and connect with Lorraine Chambers on Instagram.
Whistling, a brick is about 48 cents, Skateboarding, Linear, Command Palettes, Junior Devs
It's been years since our last Laracon episode. It's good to be back
Today we're joined by guest co-host, Adelina Simion! Adelina works at Form3, is a co-organizer of Women Who Go London and London Gophers, and is the author of Test-Driven Development in Go.
This episode was recorded 2 weeks ago. For that reason, I don't remember in detail what we talked about.
Daniel and Caleb talk about how you could like, process audio with like, Laravel collections. Wild.
In this ep, the fellas compare their respective YouTube algos and Caleb learns why the hell Daniel is so amped about event sourcing. It's a good one.
Markus Oberlehner, software architect, speaker, and open source contributor, comes onto the show to talk about how to better write tests with test-driven development. Links https://twitter.com/MaOberlehner https://markus.oberlehner.net https://markus.oberlehner.net/blog http://twitch.tv/webdevexplorer https://github.com/maoberlehner https://goodvuetests.com Tell us what you think of PodRocket We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Markus Oberlehner.