Podcast appearances and mentions of Michael Feathers

  • 52 podcasts
  • 75 episodes
  • 49m average duration
  • 1 episode every other week
  • Latest episode: Oct 14, 2024

POPULARITY

(Popularity chart, 2017-2024)


Best podcasts about Michael Feathers

Latest podcast episodes about Michael Feathers

Tech Lead Journal
#195 - Working Effectively with Legacy Code and AI Coding Assistant - Michael Feathers

Tech Lead Journal

Play Episode Listen Later Oct 14, 2024 56:16


“Legacy code is code without tests. If you have code, and it has lots of tests, it's relatively easy to change. But if you don't have the tests, you're really in serious trouble.” Do you dread working with legacy code? Michael Feathers, renowned software expert and author of the classic “Working Effectively with Legacy Code,” joins me to discuss the challenges and strategies for working with legacy code, a topic that remains highly relevant even after 20 years! Michael explains why he defines legacy code as “code without tests,” emphasizing the crucial role of automated tests in code maintainability, rather than simply defining it as old, inherited code. He also provides insights on the psychological challenges of working with legacy code and stresses the importance of approaching it with curiosity and a sense of adventure. The conversation also explores the evolving world of AI assistants in software development, drawing from Michael's forthcoming book, “AI-Assisted Programming”. He shares how AI can assist developers in various tasks, such as explaining code, identifying potential issues, generating tests, and exploring new possibilities. Listen to this episode to explore the intersection of legacy code, AI, and the future of software development!

Listen out for:
- Career Journey - [00:01:24]
- “Working Effectively with Legacy Code” Book - [00:02:05]
- Definition of Legacy Code - [00:04:55]
- The Importance of Automated Tests - [00:06:39]
- Understanding Legacy Code - [00:09:47]
- Mindset for Working with Legacy Code - [00:11:15]
- Rewrite vs Fixing Legacy Code - [00:13:50]
- Microservice for Legacy Code - [00:15:36]
- Approach to Dealing with Legacy Code - [00:17:33]
- Seams - [00:20:03]
- Strangler Fig - [00:21:42]
- Understanding Refactoring - [00:22:48]
- Testing Pyramid - [00:24:28]
- Code Nobody Wants to Touch - [00:26:10]
- AI for Understanding Legacy Code - [00:27:53]
- AI Churning More Legacy Code - [00:30:06]
- “AI Assisted Programming” Book - [00:32:47]
- Prompt Engineering - [00:34:16]
- Doing in Small Steps - [00:35:09]
- Best Use Case for AI - [00:37:29]
- Developer's Fear of AI - [00:39:16]
- SudoLang - [00:40:59]
- AI as Test Assistant - [00:43:42]
- Context Window - [00:45:19]
- Waywords - [00:47:14]
- Managing AI Sessions - [00:48:53]
- Using Different AI Tools - [00:50:30]
- 3 Tech Lead Wisdom - [00:52:28]

Michael Feathers's Bio: Michael Feathers is the Founder and Director of R7K Research & Conveyance, a company specializing in software and organization design. Over the past 20 years he has consulted with hundreds of organizations, supporting them with general software design issues, process change, and code revitalization. A frequent presenter at national and international conferences, Michael is also the author of the book Working Effectively with Legacy Code.

Follow Michael: Twitter – @mfeathers | LinkedIn – linkedin.com/in/michaelfeathers | Substack – substack.com/@michaelfeathers
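
A minimal sketch of the seams-and-characterization-tests approach the episode circles around (the function names and tax logic below are hypothetical, invented purely for illustration, and not taken from the book or the episode): introduce a seam so the legacy behavior can be pinned down with a test before any change is made.

```python
# Hypothetical legacy function: the hard-wired call to default_tax_rate is the
# kind of hidden dependency that makes code awkward to test.

def default_tax_rate(region: str) -> float:
    # Stand-in for a slow or external dependency (e.g., a tax service call).
    return {"EU": 0.21, "US": 0.07}.get(region, 0.0)

def total_price(amount: float, region: str, tax_rate_fn=default_tax_rate) -> float:
    # The keyword argument is the seam: production code keeps its old behavior,
    # while tests can substitute a known rate without touching this function.
    return round(amount * (1 + tax_rate_fn(region)), 2)

# Characterization tests record what the code does today, so a later
# refactoring can be checked against the current behavior. Run with pytest.
def test_total_price_characterization():
    assert total_price(100.0, "EU") == 121.0
    assert total_price(100.0, "ZZ", tax_rate_fn=lambda region: 0.0) == 100.0
```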

The Mob Mentality Show
Is All CI/CD Pipeline Code Instant Legacy Code?

The Mob Mentality Show

Play Episode Listen Later Oct 8, 2024 15:28


In this Mob Mentality Show episode, Chris Lucian and Austin Chadwick dive into the complexities of modern CI/CD (Continuous Integration / Continuous Delivery) pipeline code and IaC (Infrastructure as Code), exploring why these critical components of software delivery often exhibit the same problematic attributes as classic Legacy Code. Drawing inspiration from Michael Feathers' seminal book Working Effectively with Legacy Code, they analyze the paradox of cutting-edge DevOps practices turning into technical debt almost as soon as they're written.

Episode Highlights:
- CI/CD Pipeline Code and Legacy Code Parallels: Why does so much CI/CD and IaC code resemble legacy code? Despite being crucial for continuous delivery and automation, CI/CD pipelines can become fragile, difficult to change, and filled with technical debt if not handled carefully. Austin and Chris discuss why this phenomenon is so common and what makes the codebases for CI/CD pipelines especially prone to these issues.
- "Edit and Pray" vs. TDD Confidence: Do your CI/CD changes feel like a roll of the dice? Chris and Austin compare how the lack of test-driven development (TDD) practices in CI/CD code leads to "edit and pray" scenarios. They discuss the confidence that TDD brings to traditional application development and how applying similar principles could reduce fragility in CI/CD code.
- The Pitfalls of YAML in IaC: Is the problem inherent to YAML? The hosts explore whether the complexity of YAML syntax and configurations is the root cause of the brittleness often found in IaC. They provide real-world examples of IaC configurations that suffer from high cyclomatic complexity, making them feel more like full-blown applications rather than simple configuration files.
- Fear of Change in CI/CD and IaC: Why are developers often afraid to modify CI/CD pipeline code or IaC? Chris and Austin highlight the psychological aspects of fragile infrastructure, where fear of unintended consequences and lack of fast feedback loops result in slower iterations and more bugs. They explore why these codebases are often re-written from scratch instead of extended and safely enhanced.
- Reducing Fragility through Experiments: The episode features a recent experiment where CI/CD pipeline code was developed in Python using TDD and separation of concerns (see the sketch after this list). This case study reveals the pros and cons of less YAML and a shift towards more code-based "configurations." Could this approach be a solution to reducing brittleness in IaC and pipelines?
- A World Without Brittle Pipelines?: Imagine a world without fragile pipelines and brittle configuration files. Chris and Austin discuss strategies to move towards more resilient infrastructure and how teams can focus on improving feedback loops, reducing complexity, and enabling safer, faster CI/CD iterations.

Join Chris and Austin as they explore these and other crucial topics that are impacting DevOps teams around the world. Whether you're struggling with high bug rates in your pipelines, slow feedback loops, or simply want to better understand how to manage the complexity of modern infrastructure, this episode is for you! Video and Show Notes: https://youtu.be/3Cs-j055b9g
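
The Python-with-TDD experiment mentioned in the highlights lends itself to a small illustration. The following is a hedged sketch under assumed conventions (the function, its deployment rules, and the file paths are invented for this example, not taken from the episode): pipeline decision logic is pulled out of YAML into a plain function so it can be unit tested before it ever runs in CI.

```python
# Hypothetical pipeline logic extracted from YAML into testable Python.

def should_deploy(branch: str, tests_passed: bool, changed_paths: list[str]) -> bool:
    # Deploy only from main, only when tests pass, and skip docs-only changes.
    if branch != "main" or not tests_passed:
        return False
    return any(not path.startswith("docs/") for path in changed_paths)

# Unit tests (run with pytest) give fast feedback on the pipeline's rules,
# replacing the "edit and pray" loop of pushing YAML changes to see what happens.
def test_should_deploy_rules():
    assert should_deploy("main", True, ["src/app.py"])
    assert not should_deploy("main", True, ["docs/readme.md"])
    assert not should_deploy("feature/x", True, ["src/app.py"])
    assert not should_deploy("main", False, ["src/app.py"])
```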

Book Overflow
Michael Feathers Reflects on "Working Effectively with Legacy Code"

Book Overflow

Play Episode Listen Later Aug 1, 2024 58:22


In this special episode of Book Overflow, Michael Feathers joins Carter Morgan and Nathan Toups to reflect on his book "Working Effectively with Legacy Code." Join them as they discuss the pros and cons of TDD, the dangers of AI hallucination, and why Michael became a software engineer!

Book Overflow
"Working Effectively with Legacy Code" by Michael Feathers (Part 2)

Book Overflow

Play Episode Listen Later Jul 22, 2024 78:52


In this episode of Book Overflow, Carter Morgan and Nathan Toups discuss the second half of "Working Effectively with Legacy Code" by Michael Feathers. Join them as they discuss how to keep up a good attitude while working on legacy code, how to get started when you're intimidated, and some of the legacy and greenfield projects they've worked on in their careers!

Book Overflow is a podcast for software engineers, by software engineers, dedicated to improving our craft by reading the best technical books in the world. Join Carter Morgan and Nathan Toups as they read and discuss a new technical book each week! The full book schedule and links to every major podcast player can be found at https://bookoverflow.io (X: https://x.com/bookoverflowpod)

Book Overflow
"Working Effectively with Legacy Code" by Michael Feathers (Part 1)

Book Overflow

Play Episode Listen Later Jul 15, 2024 82:02


Carter Morgan and Nathan Toups read and discuss the first half of "Working Effectively with Legacy Code" by Michael Feathers. Join them as they reflect on dependency inversion, the importance of interfaces, and continue their never-ending debate on the pros and cons of Test-Driven Development! (The audio gets a little de-synced in the last three minutes. Carter isn't talking over Nathan on purpose!)

Chapter markers:
00:00 Intro
04:51 Thoughts on the book
10:54 Defining Legacy Code
21:53 Quick Break: Pull Requests
22:38 How to change software
44:30 Quick Break: CI/CD
45:15 Testing Legacy Code
1:15:10 Quick Break: Linting
1:16:01 Closing Thoughts
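
As a rough companion to the dependency inversion and interfaces discussion, here is a minimal Python sketch (the Notifier/OrderService names are hypothetical, chosen only to illustrate the idea, and not taken from the book): the high-level service depends on an abstraction rather than a concrete sender, which also makes it easy to substitute a fake in tests.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstraction that both high-level and low-level code depend on."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, recipient: str, message: str) -> None:
        print(f"emailing {recipient}: {message}")  # stand-in for real email code

class OrderService:
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier  # dependency is injected, not constructed here

    def place_order(self, customer: str) -> None:
        self.notifier.send(customer, "Your order was placed.")

class FakeNotifier(Notifier):
    """Test double that records calls instead of sending anything."""
    def __init__(self) -> None:
        self.sent = []
    def send(self, recipient: str, message: str) -> None:
        self.sent.append((recipient, message))

def test_place_order_notifies_customer():
    fake = FakeNotifier()
    OrderService(fake).place_order("sam@example.com")
    assert fake.sent == [("sam@example.com", "Your order was placed.")]
```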

The Engineering Room with Dave Farley
Legacy Code, OOP vs Functional Programming & MORE | Michael Feathers In The Engineering Room Ep. 10

The Engineering Room with Dave Farley

Play Episode Listen Later Jan 31, 2024 74:24


When Michael Feathers talks, it's usually worth listening. Michael is thoughtful about software and software design; for example, he is the person who coined the acronym SOLID for Robert C. Martin's principles of software design. Michael is also the author of a book that is on the “must read” list of nearly every real-world programmer: “Working Effectively with Legacy Code”. In this chat, Michael Feathers describes this book as "scare tactics for TDD": when you know how hard it is to write tests for existing code, you'll write tests from the beginning! Michael and Dave talk broadly about automated testing, software architecture, and design principles for quality code, and Michael claims that “OO, when it's done right, looks a lot like FP”. If you want to learn Continuous Delivery and DevOps skills, check out Dave Farley's courses ➡️ https://bit.ly/DFTraining

Semaphore Uncut
Michael Feathers On Facilitating Onboarding and Scaling in Software Development

Semaphore Uncut

Play Episode Listen Later Jul 11, 2023 22:24


The ability to adapt, collaborate, and continuously improve has become paramount in keeping pace with ever-changing technologies, customer demands, and market trends. In this episode, we discover how Michael Feathers, Chief Architect at Globant and renowned software expert, addresses the challenges of onboarding teams to complex systems and scaling software development. In his upcoming book, "Patterns of Systems Renewal," Feathers delves into the process of knowledge acquisition, code comprehension, and system expansion, tackling some outstanding problems faced by the industry. Listen to the full episode or read the transcript on the Semaphore blog. Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends.

GOTO - Today, Tomorrow and the Future
Working Effectively with Legacy Code • Michael Feathers & Christian Clausen

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later May 19, 2023 46:19 Transcription Available


This interview was recorded for the GOTO Book Club. Read the full transcription of the interview at gotopia.tech/bookclub.

Michael Feathers - Author of “Working Effectively with Legacy Code” & Chief Architect at Globant
Christian Clausen - Author of "Five Lines of Code", Founder of mist-cloud & Technical Agile Coach

RESOURCES
github.com/features/copilot
xp123.com/articles/procedural-and-declarative-tests
owasp.org/www-community/Fuzzing
youtu.be/4cVZvoFGJTU
investopedia.com/terms/p/paretoprinciple.asp
codescene.com
sonarsource.com/products/sonarqube

DESCRIPTION
Legacy code has been one of the problems that developers worldwide have been trying to tackle for a long time. But what is legacy code, and how can you learn from writing tests that give you more insights into the system and the code? Christian Clausen, author of “Five Lines of Code”, talks to Michael Feathers, author of “Working Effectively With Legacy Code”, about their shared passion for testing, refactoring, and solving real-life problems with the help of clean code.

RECOMMENDED BOOKS
Michael Feathers • Working Effectively with Legacy Code
Christian Clausen • Five Lines of Code
Kent Beck • Test Driven Development
Martin Fowler • Refactoring
Adam Tornhill • Your Code as a Crime Scene
Matthew Skelton & Manuel Pais • Team Topologies
Eric Evans • Domain-Driven Design

Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily

The Digital Human
Salvage

The Digital Human

Play Episode Listen Later Mar 13, 2023 29:22


Online and offline, our world is a hugely complex tangle of modern creations and the legacy of the past. As we build upon the shoulders of times gone by, we are in a constant process of assessing what is still useful, what needs to be adapted, and what no longer serves us. Aleks looks at the process of salvaging value from the world around us, looking at the pleasure and pain of sifting through the past, the pressures to preserve, how value can evolve over time, the allure of creating from scratch in the face of complex legacy systems and structures, and how treasure is often in the eye of the beholder.

Michael Feathers is a software architect and author of Working Effectively with Legacy Code. Over the years, he has advised many different companies on the strategic reuse and modernisation of their legacy code and systems. He is currently the Chief Architect for Globant, a global organisation helping companies transform their businesses.

Dr James Hunter is a maritime archaeologist and curator at the Australian National Maritime Museum. He is also an avid diver. James has excavated sixteenth-century Spanish galleons, wrecks from the US Civil War, and many vessels sunk in the World Wars.

Kate Macdonald is the director of Handheld Press, which republishes texts from the 1920s, 30s and 40s. She has a particular interest in uncovering works that explore lives lived by women, LGBTQ+ people, and people with physical impairments.

Founder of the urban planning consultancy Zvidsky Agency in Ukraine, Alexander Shevchenko has a background in civil engineering and spatial and urban planning. Since 2022, he has set up the non-governmental organisation Restart Ukraine, which supports Ukrainian municipalities with recovery from the impact of the 2014 and 2022 conflicts and with tackling urban regeneration fit for modern society's needs.

Code for Thought
ByteSized: Testing your Python code

Code for Thought

Play Episode Listen Later Dec 15, 2022 21:07


This last episode of ByteSized RSE before the end of 2022 is about testing your Python code. Testing is an essential part of software development, and a lot of what we cover in this episode applies to any programming and scripting language. For Python, the two big frameworks in use are unittest and pytest. Unittest is built into Python, whereas pytest is a module you need to install separately.

https://docs.python.org/3/library/unittest.html - the built-in unit testing framework of Python
https://docs.python.org/3/library/unittest.mock.html - mock testing in the unittest framework
https://docs.python.org/3/library/unittest.html#class-and-module-fixtures - fixtures for classes and modules
https://docs.pytest.org/en/7.2.x/ - the popular pytest framework
https://docs.pytest.org/en/7.1.x/how-to/monkeypatch.html - mocking can be done with monkeypatch in pytest
https://docs.pytest.org/en/7.2.x/reference/fixtures.html - fixtures in pytest

Books mentioned:
Working Effectively with Legacy Code, Michael Feathers, ISBN: 9780131177055, Pearson, 2004
Refactoring: Improving the Design of Existing Code, Martin Fowler, ISBN: 9780134757681, 2nd edition, Addison-Wesley Professional

Byte-sized RSE is presented in collaboration with the UNIVERSE-HPC project.
https://www.imperial.ac.uk/computational-methods/rse/events/byte-sized-rse/ - ByteSized RSE at Imperial College

Support the show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon: https://www.patreon.com/codeforthought
Get in touch:
Email: code4thought@proton.me
UK RSE Slack (ukrse.slack.com): @code4thought or @piddie
US RSE Slack (usrse.slack.com): @Peter Schmidt
Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org
LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal profile)
LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought profile)
This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/
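
To make the fixture and monkeypatch links above concrete, here is a small, hypothetical pytest example (the functions and the DATA_DIR variable are invented for illustration): a fixture supplies shared test data, and monkeypatch temporarily rewrites the environment so code that reads configuration can be tested in isolation.

```python
import os
import pytest

def greeting(name: str) -> str:
    return f"Hello, {name}!"

def data_dir() -> str:
    # Reads configuration from the environment; awkward to test without mocking.
    return os.environ.get("DATA_DIR", "/tmp/data")

@pytest.fixture
def sample_names():
    # Fixture: shared test data injected into any test that requests it.
    return ["Ada", "Grace"]

def test_greeting(sample_names):
    assert [greeting(n) for n in sample_names] == ["Hello, Ada!", "Hello, Grace!"]

def test_data_dir_uses_environment(monkeypatch):
    # monkeypatch: change the environment for this test only; pytest undoes it afterwards.
    monkeypatch.setenv("DATA_DIR", "/srv/research")
    assert data_dir() == "/srv/research"
```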

CTO Studio
Trunk-Based Development And Feature Flags With EJ & TJ

CTO Studio

Play Episode Listen Later Oct 19, 2022 46:55


What must exist for a community to be vibrant and healthy? In this week's show, EJ Allen and TJ Taylor, CTO and Staff Engineer at Mobilize, answer this question. They join Etienne de Bruin to dig into trunk-based development, feature flags, and how community and personal connection drive business. Some ideas you'll hear them explore are:
- Community is the tide that raises all boats. Building communities of trust and connecting with people can unlock potential for everyone.
- What it means to be part of a thriving community is the same across professional and personal networks. The key components of creating a vibrant network include trust, empathy, and unlocking potential.
- In software development, you must be able to take risks and be vulnerable with your team. This means that you essentially eliminate the consequences of making a mistake, allowing your team to experience psychological safety.
- Building habits is one of the ways to create pits of success. That translates into looking at the habits of the people around you and figuring out how to leverage those habits to get the desired behavior. Habits are the compound interest of self-improvement.
- Think about how you're adding value to your customer. At the end of the day, your work is not necessarily as important as the work that your team is delivering to the customer.
- "Being able to get 1% better a day makes you 37 times better in a year. Try not to worry so much about what can I get done in a day or what can I get done this week… Instead, just focus on how can I deliver the smallest amount of value as consistently as possible?"
- You need every leg of the stool to be successful. That requires trust, empathy, and connection in your team.

Resources:
- EJ Allen | LinkedIn
- TJ Taylor | LinkedIn
- Mobilize
- Refactoring by Martin Fowler
- Working Effectively with Legacy Code by Michael Feathers

Beyond Coding
Social systems in Tech Teams with Michael Feathers

Beyond Coding

Play Episode Listen Later May 4, 2022 45:40


I invited Michael Feathers on to discuss what makes a great and effective team in tech. We cover lots of the social systems you'll see, as well as the impact that remote working has had on those. More of the topics we cover in this episode, in order

Greater Than Code
271: EventStorming with Paul Rayner

Greater Than Code

Play Episode Listen Later Feb 16, 2022 58:24


00:58 - Paul's Superpower: Participating in Scary Things

02:19 - EventStorming (https://www.eventstorming.com/)
* Optimized For Collaboration
* Visualizing Processes
* Working Together
* Sticky (Post-it) Notes (https://www.post-it.com/3M/en_US/post-it/products/~/Post-it-Products/Notes/?N=4327+5927575+3294529207+3294857497&rt=r3)

08:35 - Regulation: Avoiding Overspecifics
* “The Happy Path”
* Timeboxing
* Parking Lot (https://project-management.fandom.com/wiki/Parking_lot)
* Inside Pixar (https://www.imdb.com/title/tt13302848/#:~:text=This%20documentary%20series%20of%20personal,culture%20of%20Pixar%20Animation%20Studios.)
* Democratization
* Known Unknowns

15:32 - Facilitation and Knowledge Sharing
* Iteration and Refinement
* Knowledge Distillation / Knowledge Crunching
* Clarifying Terminology: Semantics is Meaning
* Embracing & Exposing Fuzziness (Complexities)

24:20 - Key Events
* Narrative Shift
* Domain-Driven Design (https://en.wikipedia.org/wiki/Domain-driven_design)
* Shift in Metaphor

34:22 - Collaboration & Teamwork
* Perspective
* Mitigating Ambiguity

39:29 - Remote EventStorming and Facilitation
* Miro (https://miro.com/)
* MURAL (https://www.mural.co/)

47:38 - EventStorming vs Event Sourcing (https://martinfowler.com/eaaDev/EventSourcing.html)
* Sacrificing Rigor For Collaboration

51:14 - Resources
* The EventStorming Handbook (https://leanpub.com/eventstorming_handbook)
* Paul's Upcoming Workshops (https://www.virtualgenius.com/events)
* @thepaulrayner (https://twitter.com/thepaulrayner)

Reflections:
Mandy: EventStorming and its adjacency to Technical Writing.
Damien: You can do this on a small and iterative scale.
Jess: Shared understanding.
Paul: Being aware of the limitations of ideas you can hold in your head. With visualization, you can hold it in more easily and meaningfully.

This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode). To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well.

Transcript:

MANDY: Welcome to Episode 271 of Greater Than Code. My name is Mandy Moore and I'm here today with a guest, but returning panelist. I'm happy to see Jessica Kerr. JESSICA: Thanks, Mandy. It's great to see you. I'm also excited to be here today with Damien Burke! DAMIEN: And I am excited to be here with both of you and our guest today, Paul Rayner. Paul Rayner is one of the leading practitioners of EventStorming and domain-driven design. He's the author of The EventStorming Handbook, co-author of Behavior-Driven Development with Cucumber, and the founder and chair of the Explore DDD conference. Welcome to the show, Paul. PAUL: Thanks, Damien. Great to be here. DAMIEN: Great to have you. And so you know, you are prepared, you are ready for our first and most famous question here on Greater Than Code? PAUL: I don't know if I'm ready, or prepared, but I can answer it, I think. [laughter] DAMIEN: I know you have prepared, so I don't know if you are prepared. PAUL: Right. DAMIEN: Either way, here it comes. [chuckles] What is your superpower and how did you acquire it? PAUL: Okay. So a couple of weeks ago, there's a lake near my house, and the neighbors organized a polar plunge.
They cut a big hole in the ice and everyone lines up and you basically take turns jumping into the water and then swimming to the other side and climbing out the ladder. So my superpower is participating in a polar plunge and I acquired that by participating with my neighbors. There was barbecue, there was a hot tub, and stuff like that there, too. So it was very, very cool. It's maybe not a superpower, though because there were little kids doing this also. So it's not like it was only me doing it. JESSICA: I'll argue that your superpower is participating in scary things because you're also on this podcast today! PAUL: [chuckles] Yeah, there we go. DAMIEN: Yeah, that is very scary. Nobody had to be fished out of the water? No hospital, hypothermia, any of that? PAUL: No, there was none of that. It was actually a really good time. I mean, being in Denver, blue skies, it was actually quite a nice day to jump into frozen. MANDY: So Paul, you're here today to talk about EventStorming. I want to know what your definition of that is, what it is, and why it's a cool topic to be talking about on Greater Than Code. PAUL: Okay. Well, there's a few things there. So firstly, what is EventStorming? I've been consulting, working with teams for a long time, coaching them and a big part of what I try and do is to try and bridge the gap between what the engineers, the developers, the technical people are trying to build in terms of the software, and what the actual problem is they're trying to solve. EventStorming is a technique for just mapping out a process using sticky notes where you're trying to describe the story of what it is that you're building, how that fits into the business process, and use the sticky notes to layer in variety of information and do it in a collaborative kind of way. So it's really about trying to bridge that communication gap and uncover assumptions that people might have, expose complexity and risk through the process, and with the goal of the software that you write actually being something that solves the real problem that you're trying to solve. I think it's a good topic for Greater Than Code based on what I understand about the podcast, because it certainly impacts the code that you write, touches on that, and connects with the design. But it's really optimized for collaboration, it's optimized for people with different perspectives being able to work together and approach it as visualizing processes that people create, and then working together to be able to do that. So there's a lot of techniques out there that are very much optimized from a developer perspective—UML diagrams, flow charts, and things like that. But EventStorming really, it sacrifices some of that rigor to try and draw people in and provide a structured conversation. I think with the podcast where you're trying to move beyond just the code and dig into the people aspects of this a lot more, I think it really touches on that in a meaningful way. JESSICA: You mentioned that with a bunch of stickies, a bunch of different people, and their perspectives, EventStorming layers in different kinds of information. PAUL: Mm hm. JESSICA: Like what? PAUL: Yeah. 
So the way that usually approach it is, let's say, we're modeling, visualizing some kind of process like somebody registering for a certain thing, or even somebody, maybe a more common example, purchasing something online and let's say, that we have the development team that's responsible for implementing how somebody might return a product to a merchant, something like that. The way it would work is you describe that process as events where each sticky note represents something that happened in the story of returning a product and then you can layer on questions. So if people have questions, use a different colored sticky note for highlighting things that people might be unsure of, what assumptions they might be making, differences in terminology, exposing those types of unknowns and then once you've sort of laid out that timeline, you can then layer in things like key events, what you might call emergent structures. So as you look at that timeline, what might be some events that are more important than others? JESSICA: Can you make that concrete for me? Give me an example of some events in the return process and then…? PAUL: Yeah. So let's say, the customer receives a product that they want to return. You could have an event like customer receive product and then an event that is customer reported need for return. And then you would have a shift in actor, like a shift in the person doing the work where maybe the merchant has to then merchant sent return package to customer. So we're mapping out each one of these as an event in the process and then the customer receives, or maybe it's a shipping label. The customer receives the shipping label and then they put the items in the package with the shipping label and they return it. And then there would be a bunch of events that the merchant would have to take care of. So the merchant would have to receive that package and then probably have to update the system to record that it's been returned. And then, I imagine there would be processing another order, or something like that. A key event in there might be something like sending out the shipping label and the customer receiving the shipping label because that's a point where the responsibility transfers from the merchant, who is preparing the shipping label and dispatching that, to the customer that's actually receiving it and then having to do something. That's just one, I guess, small example of you can use that to divide that story up into what you might think of as chapters where there's different responsibilities and changes in the narrative. Part of that maybe layering in sticky notes that represent who's doing the work. Like who's the actor, whether it's the merchant, or the customer, and then layering in other information, like the systems that are involved in that such as maybe there's email as a system, maybe there's the actual e-commerce platform, a payment gateway, these kinds of things could be reflected and so on, like there's – [overtalk] JESSICA: Probably integration with the shipper. PAUL: Integration with the shipper, right. So potentially, if you're designing this, you would have some kind of event to go out to the shipper to then know to actually pick up the package and that type of thing. And then once the package is actually delivered back to the merchant, then there would be some kind of event letting the merchant know. It's very hard to describe because I'm trying to picture this in my mind, which is an inherently visual thing. 
It's probably not that interesting to hear me describing something that's usually done on some kind of either mirror board, like some kind of electronic space, or on a piece of butcher's paper, or – [overtalk] DAMIEN: Something with a lot of sticky notes. PAUL: Something with a lot of sticky notes, right. DAMIEN: Which, I believe for our American listeners, sticky notes are the little square pieces of brightly colored paper with self-adhesive strip on the back. PAUL: Yeah. The stickies. DAMIEN: Stickies. [chuckles] I have a question about this process. I've been involved in very similar processes and it sounds incredibly useful. But as you describe it, one of the concerns I have is how do you avoid getting over specific, or over described? Like you can describe systems until you're talking about the particles in the sun, how do you know when to stop? PAUL: So I think there's a couple of things. Number one is at the start of whatever kind of this activity, this EventStorming is laying out what's the goal? What are we trying to accomplish in terms of the process? With returns, for example, it would be maybe from this event to this event, we're trying to map out what that process looks like and you start with what you might call the happy path. What does it look like when everything goes well? And then you can use pink stickies to represent alternate paths, or things going wrong and capture those. If they're not tied back to this goal, then you can say, “Okay, I think we've got enough level of detail here.” The other thing is time boxing is saying, “Okay, well, we've only got half an hour, or we've only got an hour so let's see how much we can do in that time period,” and then at the end of that, if you still have a lot of questions, then you can – or you feel like, “Oh, we need to dig into some of these areas more.” Then you could schedule a follow up session to dig into that a little bit more. So it's a combination of the people that are participating in this deciding how much level of detail they want to go down to. What I find is it typically is something that as you're going through the activity, you start to see. “Oh, maybe this is too far down in the weeds versus this is the right level.” As a facilitator, I don't typically prescribe that ahead of time, because it's much easier to add sticky notes and then talk about them than it is to have a conversation when there's nothing visualized. I like to visualize it first and lay it out and then it's very easy to say, “Oh, well, this looks like too much detail. So we'll just put a placeholder for that and not worry about out it right now.” It's a little bit of the facilitation technique of having a parking lot where you can say, “Okay, this is a good topic, but maybe we don't need to get down in that right now. Maybe let's refocus back on what it is that we're trying to accomplish.” JESSICA: So there's some regulation that happens naturally during the meeting, during interactions and you can have that regulation in the context of the visual representation, which is the EventStorming, the long row of stickies from one event to the other. PAUL: Right, the timeline that you're building up. So it's a little bit in my mind, I watched last year, I think it was on Netflix. There was a documentary about Pixar and how they do their storyboarding process for their movies and it is exactly that. They storyboard out the movie and iterate over that again and again and again telling that story. 
What's powerful about that is it's a visual medium so you have someone that is sketching out the main beats of the story and then they're talking it through. Not to say that EventStorming is at that level of rigor, but it has that kind of feel to it of we're laying out these events to tell the story and then we're talking through the story and seeing what we've missed and where we need to add more detail, maybe where we've added too much detail. And then like you said, Jess, there's a certain amount of self-regulation in there in terms of, do we have enough time to go down into this? Is this important right now? JESSICA: And I imagine that when I have questions that go further into detail than we were able to go in the meeting, if I've been in that EventStorming session, I know who to ask. PAUL: That's the idea, yeah. So the pink stickies that we said represent questions, what I like about those is, well, several things. Number one, it democratizes the idea that it's okay to ask questions, which I think is a really powerful technique. I think there's a tendency in meetings for some people to hold back and other people to do all the talking. We've all experienced that. What this tries to do is to democratize that and actually make it not only okay and not only accepted, but encourage that you're expected to ask questions and you're expected to put these sticky notes on here when there's things that you don't understand. JESSICA: Putting the questions on a sticky note, along with the events, the actors, and the things that we do know go on sticky notes, the questions also go on sticky notes. All of these are contributions. PAUL: Exactly. They value contributions and what I love about that is that even people that are new to this process, it's a way for them to ask questions in a way that is kind of friendly to them. I've seen this work really well, for example, with onboarding new team members and also, it encourages the idea that we have different areas of expertise. So in any given process, or any business story, whatever you want to characterize it as, some people are going to know more about some parts of it than others. What typically happens is nobody knows the whole story, but when we work together, we can actually build up an approximation of that whole story and help each other fill in the gaps. So you may have the person that's more on the business, or the product side explaining some terminology. You can capture those explanations on sticky notes as a glossary that you're building up as you go. You can have engineers asking questions about the sequence of events in terms of well, does this one come before that one? And then the other thing that's nice about the questions is it actually as you're going, it's mapping out your ignorance and I see that as a positive thing. JESSICA: The known unknowns. PAUL: Known unknowns. It takes unknown unknowns, which the kind of elephant in the room, and at least gets them up as known unknowns that you can then have a conversation around. Because there's often this situation of a question that somebody's afraid to ask and maybe they're new to the team, or maybe they're just not comfortable asking that type of question. But it gives you actually a map of that ignorance so you can kind of see oh, there's this whole area here that just has a bunch of pink stickies. So that's probably not an area we're ready to work on and we should prioritize. 
Actually, if this is an area that we need to be working on soon, we should prioritize getting answers to these questions by maybe we need to do a proof of concept, or some UX work, or maybe some kind of prototyping around this area, or like you said, Jess, maybe the person that knows the answers to these questions is just not in this session right now and so, we need to follow up with them, get whatever answers we need, and then come back and revisit things. JESSICA: So you identify areas of risk. PAUL: Yes. Areas of risk, both from a product perspective and also from a technical perspective as well. DAMIEN: So what does it take to have one of these events, or to facilitate one of these events? How do you know when you're ready and you can do it? PAUL: So I've done EventStorming [chuckles] as a conference activity in a hallway with sticky notes and we say, “Okay, let's as a little bit of an icebreaker here –” I usually you do the story of Cinderella. “Let's pick the Disney story of Cinderella and we'll just EventStorm this out. Just everyone, here are some orange sticky notes and a Sharpie, just write down some things that you remember happening in that story,” and then everyone writes a few. We post it up on the hallway wall and then we sequence them as a timeline and then we can basically build up that story in about 5, or 10 minutes from scratch. With a business process, it's not that different. It's like, okay, we're going to do returns, or something like that and if people are already familiar with the technique, then just give them a minute, or so to think of some things that they know that would happen in that process. And then they do that individually and then we just post them up on the timeline and then sequence them as a group and it can happen really quickly. And then everything from there is refinement. Iteration and refinement over what you've put up as that initial skeleton. DAMIEN: Do you ever find that a team comes back a week, or a day, or a month later and goes, “Oh, there is this big gap in our narrative because nobody in this room understood the warehouse needed to be reordered in order to send this thing down”? PAUL: Oh, for sure. Sometimes it's big gaps. Sometimes it's a huge cluster of pink sticky notes that represents an area where there's just a lot of risk and unknowns that the team maybe hasn't thought about all that much. Like you said, it could be there's this third-party thing that it wasn't until everyone got in a room and kind of started to map it out, that they realized that there was this gap in their knowledge. JESSICA: Yeah. Although, you could completely miss it if there's nobody from the warehouse in the room and nobody has any idea that you need to tell the warehouse to expect this return. PAUL: Right and so, part of that is putting a little bit of thought into who would need to be part of this and in a certain way, playing devil's advocate in terms of what don't we know, what haven't we thought of. So it encourages that sense of curiosity with this and it's a little bit different from – Some of the listeners maybe have experienced user story mapping and other techniques like that. Those tend to be focused on understanding a process, but they're very much geared towards okay, how do we then figure out how we're going to code up this feature and how do we slice it up into stories and prioritize that. 
So it's similar in terms of sticky notes, but the emphasis in EventStorming is more on understanding together, the problem that we're trying to address from a business perspective. JESSICA: Knowledge pulling. PAUL: Yeah. Knowledge pulling, knowledge distillation, those types of idea years, and that kind of mindset. So not just jumping straight to code, but trying to get a little bit of a shared understanding of what all is the thing that we're trying to actually work on here. JESSICA: Eric Evans calls it knowledge crunching. PAUL: Yes, Eric called it knowledge crunching. DAMIEN: I love that phrase, that shared understanding. That's what we, as product teams, are generating is a shared understanding both, captured in our documentation, in our code, and before that, I guess on large sheets of butcher paper. [laughs] PAUL: Well, and it could be a quick exercise of okay, we're going to be working on some new feature and let's just spend 15 minutes just mapping it out to get a sense of, are we on the same page with this? JESSICA: Right, because sometimes it's not even about we think we need to know something, it's do we know enough? Let's find out. PAUL: Right. JESSICA: And is that knowledge shared among us? PAUL: Right, and maybe exposing, like it could be as simple as slightly different terminology, or slightly different understanding of terminology between people that can have a big impact in terms of that. I was teaching a workshop last night where we were talking about this, where somebody had written the event. So there was a repair process that a third-party repair company would handle and then the event that closed that process off, they called case closed. So then the question becomes well, what does case closed mean? Because the word case – [overtalk] JESSICA: [laughs] It's like what's the definition of done? PAUL: Right, exactly. [laughter] Because that word case didn't show up anywhere earlier in the process. So is this like a new concept? Because the thing that kicks off the process is repair purchase order created and at the end of the process, it's said case closed. So then the question becomes well, is case closed really, is that a new concept that we actually need to implement here? Or is this another way of saying that we are getting a copy of that repair purchase order back that and it's been updated with details about what the repair involved? Or maybe it's something like repair purchase order closed. So it's kind of forcing us to clarify terminology, which may seem a little bit pedantic, but that's what's going to end up in the code. If you can get some of those things exposed a little earlier before you actually jump to code and get people on the same page and surface any sort of differences in terminology and misunderstandings, I think that can be super helpful for everyone. JESSICA: Yeah. Some people say it's just semantics. Semantics' meaning, its only meaning, this is only about out what this step actually means because when you put it in the code, the code is crystal clear. It is going to do exactly what it does and whether that clarity matches the shared understanding that we think we have oh, that's the difference between a bug and a working system. DAMIEN: [laughs] That's beautiful. It's only meaning. [laughs] JESSICA: Right? Yeah. But this is what makes programming hard is that pedanticness. The computer is the ultimate pedant. DAMIEN: Pedant. You're going to be pedantic about it. [laughter] PAUL: I see what you did there. 
[laughter] DAMIEN: And that is the occupation, right? That is what we do is look at and create systems and then make them precise. JESSICA: Yeah. DAMIEN: In a way that actually well, is precise. [laughs] JESSICA: Right, and the power of our human language is that it's not precise, that it allows for ambiguity, and therefore, a much broader range of meaning. But as developers, it's our job to be precise. We have to be precise to the computers. It helps tremendously to be precise with each other. DAMIEN: Yeah, and I think that's actually the power of human cognition is that it's not precise. We are very, very fuzzy machines and anyone who tries to pretend otherwise will be greatly disappointed. Ask me how I know. [laughter] PAUL: Well, and I think what I'm trying to do with something like EventStorming is to embrace the fuzziness, is to say that that's actually an asset and we want to embrace that and expose that fuzziness, that messiness. Because the processes we have and work with are often inherently complex. We are trying to provide some visual representation of that so we can actually get our head around, or our minds around the language complexities, the meanings, and drive in a little bit to that meaning. JESSICA: So when the sticky notes pile on top of each other, that's a feature. PAUL: It is. Going back to that example I was just talking about, let's say, there's a bunch of, like we do the initial part of this for a minute, or so where people are creating sticky notes and let's say, we end up with four, or five sticky notes written by different people on top of each other that end up on the timeline that all say pretty much the same thing with slight variations. JESSICA: Let's say, case closed, request closed. PAUL: Case closed, repair purchase order closed, repair purchase order updated, repair purchase order sent. So from a meaning perspective, I look at that and I say, “That's gold in terms of information,” because that's showing us that there's a richness here. Firstly, that's a very memorable thing that's happening in the timeline – [overtalk] JESSICA: Oh and it has multiple things. PAUL: That maybe means it's a key event. Right, and then what is the meaning? Are these the same things? Are they different things? Maybe we don't have enough time in that session to dig into that, but if we're going to implement something around that, or work with something around that, then we're going to at some point need some clarity around the language, the terminology, and what these concepts mean. Also, the sequence as well, because it might be that there's actually multiple events being expressed there that need to be teased apart. DAMIEN: You used this phrase a couple times, “key event,” and since you've used it a couple times, I think it might be key. [laughter] Can you tell us a little bit about what a key event is? What makes something a key event? PAUL: Yeah, the example I like to use is from the Cinderella story. So if you think about the story of Cinderella, one of the things, when people are doing that as an icebreaker, they always end up being multiple copies of the event that usually is something like shoe lost, or slipper lost, or glass slipper lost. There's something about that event that makes it memorable, firstly and then there's something about that event that makes it pivotal in the story. For those that are not familiar with the story [chuckles]—I am because I've EventStormed this thing maybe a hundred times—but there's this part. 
Another key event is the fairy godmother showing up and doing the magic at the start and she actually describes a business policy. She says, “The magic is going to run out at midnight,” and like all business policies, it's vague [laughter] and it's unclear as to what it means because – [overtalk] JESSICA: The carriage disappears, the dress disappears, but not the slipper that fell off. PAUL: Exactly. There's this exception that for some bizarre reason, to move the plot forward, the slipper stays. But then the definition of midnight is very hazy because what she's actually describing, in software terms, is a long running process of the clock banging 12 times, which is what midnight means is the time between the first and the twelfth and during that time, the magic is slowly unraveling. JESSICA: So midnight is a duration, not an instant. PAUL: Exactly. Yes, it's a process, not an event. So coming back to the question that Damien asked about key events. That slipper being lost is a key event in that story, I think because it actually is a shift in narrative. Up until that point in the story, it's the story of Cinderella and then after that, once the slipper is lost, it becomes the story of the prince looking for Cinderella. And then at the end, you get the day tomorrow, the stuff that happens with that slipper at the end of the story. Another key event would be like the fairy godmother showing up and doing the magic. DAMIEN: [chuckles] It seems like these are necessary events, right? If the slipper is not lost, if the fairy godmother doesn't do magic, you don't have the story of Cinderella. PAUL: Right. These are narrative turns, right? DAMIEN: Yeah. PAUL: These are points of the story shifts and so, key events can sometimes be a narrative shift where it's driving the story forward in a business process. Something like, let's say, you're working on an e-commerce system, like order submitted is a key event because you are adding items to a shopping cart and then at some point, you make a decision to submit the order and then at that point, it transitions from order being a draft thing that is in a state of flux to it actually becomes essentially immutable and gets passed over to fulfilment. So there's a shift in responsibility and actor between these two as well just like between Cinderella and the prince. JESSICA: A shift in who is driving the story forward. PAUL: Right. Yeah. So it's who is driving the story forward. So these key events often function as a shift in actor, a shift in who's driving the story forward, or who has responsibility. They also often indicate a handoff because of that from one group to another in an organization. Something like a sales process that terminates in contract signed. That key event is also the goal of the sales process. The goal is to get to contract signed and then once that happens, there's usually a transition to say, an onboarding group that actually onboards the new customer in the case of a sales process for a new customer, or in e-commerce, it would be the fulfillment part, the warehousing part that Jess was talking about earlier. That's actually responsible for the fulfillment piece, which is they take that order, they create a package, they put all the items in the package, create the shipping label, and ship it out to the customer. JESSICA: And in domain-driven design, you talked about the shift from order being a fluid thing that's changing as people add stuff to their cart to order being immutable. 
The word order has different meanings for the web site where you're buying stuff and the fulfillment system, there's a shift in that term. PAUL: Right, and that often happens around a key event, or a pivotal event is that there's a shift from one, you might think of it as context, or language over to another. So preorder submission, it's functioning as a draft order, but what it's actually typically called is a shopping cart and a shopping cart is not the same as an order. It's a great metaphor because there is no physical cart, but we all know what that means as a metaphor. A shopping cart is a completely different metaphor from an order, but we're able to understand that thread of continuity between I have this interactive process of taking items, or products, putting them in the shopping cart, or out again. And then at some point that shopping cart, which is functioning as a draft order, actually it becomes an order that has been submitted and then it gets – [overtalk] DAMIEN: Yeah, the metaphor doesn't really work until that transition. You have a shopping cart and then you click purchase and now what? [laughs] You're not going to the register and ringing it up, that doesn't make any sense. [chuckles] The metaphor kind of has to end there. JESSICA: You're not leaving the cart in the corral in the parking lot. [laughter] PAUL: Well, I think what they're trying to do is when you think about going through the purchase process at a store, you take your items up in the shopping cart and then at that point, you transition into a financial transaction that has to occur that then if you were at a big box electronic store, or something, eventually, you would make the payment. You would submit payment. That would be the key events and that payment is accepted and then you receive a receipt, which is kind of the in-person version of a record of your order that you've made because you have to bring the receipt back. DAMIEN: It sort of works if the thing you're putting in the shopping cart are those little cards. When they don't want to put things on the shelf, they have a card, you pick it up, and you take it to register. They ring it up, they give you a receipt, and hopefully, the thing shows up in the mail someday, or someone goes to the warehouse and goes gets it. PAUL: We've all done that. [chuckles] Sometimes it shows up. Sometimes it doesn't. JESSICA: That's an interesting point that at key events, there can be a shift in metaphor. PAUL: Yes. Often, there is. So for example, I mentioned earlier, a sales process ending in a contract and then once the contract is signed, the team – let's say, you're signing on a new customer, for a SaaS service, or something like that. Once they've signed the contract, the conversation isn't really about the contract anymore. It's about what do we need to do to onboard this customer. Up until that point, the emphasis is maybe on payment, legal disclosures, and things like that. But then the focus shifts after the contract is signed to more of an operational focus of how do we get the data in, how do we set up their accounts correctly, that type of thing. JESSICA: The contract is an input to that process. PAUL: Yes. JESSICA: Whereas, it was the output, the big goal of the sales process. PAUL: Yes, exactly. So these key events also function from a systems perspective, when you think about moving this to code that event then becomes almost like a message potentially. 
Could be implemented as say, a message that's being passed from the sales system through to the onboarding system, or something like that. So it functions as the integration point between those two, where the language has to be translated from one context to another. JESSICA: And it's an integration point we can define carefully so that makes it a strong boundary and a good place to divide the system. DAMIEN: Nice. PAUL: Right. So that's where it starts to connect to some of the things that people really care about these days in terms of system decomposition and things like that. Because you can start thinking about based on a process view of this, based on a behavior view of this, if we treat these key events as potential emergent boundaries in a process, like we've been describing, that we discover through mapping out the process, then that can give us some clues as to hmm maybe these boundaries don't exist in the system right now, but they could. These could be places where we start to tease things apart. JESSICA: Right. Where you start breaking out separate services and then when you get down to the user story level, the user stories expect a consistent language within themselves. You're not going to go from cart to return purchase in a case. PAUL: [laughs] Right. JESSICA: In a single user story. User stories are smaller scope and work within a single language. PAUL: Right and so, I think the connection there in my mind is user stories have to be written in some kind of language, within some language context and mapping out the process can help you understand where you are in that context and then also understand, like if you think about a process that maybe has a sales part of the process and then an onboarding part, it'll often be the case that there's different development teams that are focusing on different parts of that process. So it provides a way of them seeing what their integration point is and what might need to happen across that integration point. If they were to either integrate to different systems, or if they're trying to tease apart an existing system. To use Michael Feathers' term, what might be a “seam” that we could put in here that would allow us to start teasing these things apart. And doing it with the knowledge of the product people that are part of the visualization, too is that this isn't something typically that engineers do exclusively from a technical perspective. The idea with EventStorming is you are also bringing in other perspectives like product, business, stakeholders, and anyone that might have more of that business perspective in terms of what the goals of the process are and what the steps are in the process. MID-ROLL: And now a quick word from our sponsor. I hear people say the VPNs have a reputation for slowing down your internet speed, but not with NordVPN, because it's the fastest VPN in the world. I don't have to sacrifice internet speed for better security. With NordVPN, my internet traffic is routed through a secure encrypted tunnel, which protects my data and privacy. I can also have it on up to six devices like my laptop, phone, TV, iPad—all my devices are protected. Grab your exclusive NordVPN deal by going to nordvpn.com/gtc, or use the code GTC to get a huge discount on your NordVPN plan plus one additional month for free. Plus, a bonus gift! It's completely risk-free with Nord's 30-day money back guarantee.
JESSICA: As a developer, it's so important to understand what those goals are, because that lets us make good decisions when we're down in the weeds and getting super precise. PAUL: Right, I think so. I think often, I see teams that are implementing stories, but not really understanding the why behind that, in terms of maybe they get here's the functionality I'm delivering and how that fits into the system. But like I talked about before, when you're driving a process towards a key event, that becomes the goal of that subprocess. So the question then becomes how does the functionality that I'm going to implement that's described in this user story actually move people towards that goal and maybe there's a better way of implementing it to actually get them there. DAMIEN: Yeah, it's always important to keep that in mind, because there's always going to be ambiguity until you have a running system, or ran system, honestly. JESSICA: Yeah! DAMIEN: There's always going to be ambiguity, which it is our job as people writing code to manage, and we need to know. Nobody's going to tell us exactly what's going to happen because that's our job. PAUL: Right. JESSICA: It's like if the developer had a user story that Cinderella's slipper fell off, but they didn't realize that the goal of that was that the prince picked it up, then they might be like, "Oh, slipper broke. That's fine." PAUL: Yeah. JESSICA: It's off the foot. Check the box. PAUL: Let's create a glass slipper factory implementer object [laughter] so that we can just create more of those. JESSICA: Oh, yeah. What, you wanted a method slip off in one piece? You didn't say that. I've created crush! PAUL: Right. [laughter] Yeah. So I think sometimes there's this potential to get lost in the weeds of the everyday development work that is happening and I like to tie it back to what is the actual story that we're supporting. And then sometimes what people think of as exception cases, like an example might be going back to that merchant return example is what if they issue the shipping label, but the buyer never receives it. We may say, "Well, that's never going to happen," or "That's unlikely." But visualizing that case, you may say, "That's actually a strong possibility. How do we handle that case and bake that into the design so that it actually reflects what we're trying to do?" JESSICA: And then you make an event that just triggers two weeks later that says, "Check whether customer received label." PAUL: Yes, exactly. One thing you can do as well is like – so that's one possibility of solving it. The idea of what EventStorming can let you do is say, "Well, that's one way of doing it. Are there any other options in terms of how we could handle this, let's visualize." With any exception case, or something, you could say, "Well, let's try solving this a few different ways. Just quickly come up with some different ideas and then we can pull the best of those ideas into that." So the idea when you're modeling is to say, "Okay, well, there's probably more than one way to address this. So maybe let's get a few ideas on the table and then pick the best out of these." JESSICA: Or address it at multiple levels. PAUL: Yes. JESSICA: A fallback for the entire process is: customer contacts support again. PAUL: Right, and that may be the simple answer in that kind of case. 
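(A rough illustrative sketch, not from the conversation: Jessica's "check two weeks later" idea is essentially a scheduled follow-up event. The names are made up, and the in-memory scheduler stands in for whatever durable mechanism a real system would use.)

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: after a return label is issued, schedule a later check
// asking whether the buyer ever received it.
class ReturnLabelFollowUp {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    void onReturnLabelIssued(String returnId) {
        scheduler.schedule(() -> checkLabelReceived(returnId), 14, TimeUnit.DAYS);
    }

    private void checkLabelReceived(String returnId) {
        // In a real system this would query state and, if the label never arrived,
        // raise an event that routes the case to support.
        System.out.println("Checking whether the buyer received the label for return " + returnId);
    }
}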
What we're trying to do, though, is to visualize that case as an option and then talk about it, have a structured conversation around it, say, "Well, how would we handle that?" Which I think from a product management perspective is a key thing to do – to engage the engineers in saying, "Well, what are some different ways that we could handle this and solve this?" If you have people that have responsibility primarily for testing, then having them weigh in on, well, how would we test this? What kind of test cases might we need to handle for this? So it's getting – [overtalk] JESSICA: How will we know it worked? PAUL: Different perspectives and opinions on the table earlier rather than later. JESSICA: And it's cheap. It's cheap, people. It's a couple hours and a lot of post-its. You can even buy the generic post-its. We went to Office Depot yesterday, it's $10 for 5 little Post-it pads, [laughter] or 25 Office Depot brand post-it pads. They don't have to stay on the wall very long; the cheap ones will work. PAUL: [laughs] So those all work and then it depends if you have shares in 3M, I guess, with you. [laughter] Or Office Depot, depending which road you want to go down. [laughter] JESSICA: Or if you really care about that shade of pale purple, which I do. PAUL: Right. I mean, what's been fascinating to me in the last 2 years, with switching to remote work, is that 95% of the EventStorming I do these days is on a collaborative whiteboard tool like Miro, or MURAL, which I don't know why those two product names are almost exactly the same. But then it's even cheaper because you can sign up for a free account, invite a few people, and then just start adding sticky notes to some virtual whiteboard and do it from home. There's a bunch of things that you can do on a tool like that with copy pasting, moving groups of sticky notes around, rearranging things, and ordering things much – [overtalk] JESSICA: And you never run out of wall. PAUL: Yeah. The idea with the butcher's paper in a physical workshop, in-person workshop is you're trying to create a sense of unending modeling space that you can use. That you get for free when you use an online collaborative whiteboarding tool. It's just there out of – [overtalk] JESSICA: And you can zoom in. PAUL: And you zoom in and out. Yeah. There's a – [overtalk] JESSICA: Stickies on your stickies on your stickies. [laughter] I'm not necessarily recommending that, but you can do it. PAUL: Right. The group I was working with last night, they'd actually gone to town using Miro emojis. They had something bad happen in the project and they've got the horror emoji [laughter] and then they've got all kinds of, and then copy pasting images off the internet for things. JESSICA: Nice. PAUL: So yeah, can make it even more fun. JESSICA: Okay. So it's less physical, but in a lot of ways it can be more expressive. PAUL: I think so. More expressive and just as engaging and it can break down the geographical barriers. I've done sessions where we've had people simultaneously spread, on multiple occasions, across the US and Europe in the same session, all participating in real-time. If you're doing it remote, I like to keep it short. So maybe we do like a 2-hour session with a 10- or 15-minute break in the middle, because you're trying to manage people's energy and keep them focused and it's hard to do that when you just keep going. 
MANDY: I kind of want to talk a little bit about facilitation and how you facilitate these kinds of workshops and what you do to engage people and keep them interested. PAUL: Yeah. So I think that it depends a little bit on the level of detail we're working at. If it's at the level of a few team members trying to figure out a feature, then it can be very informal. Not a lot of facilitation required. Let's just write down what the goal is and then go through the process of brainstorming a few stickies, laying it out, and then sequencing it as a timeline, adding questions. It doesn't require a heavy facilitation hand. I think the key thing is just making sure that people are writing down their questions and that it's time boxed. So quitting while people are still interested and then [laughter] at the end, before you finish, having a little bit of a conversation around what might the next steps be. Like what did we learn? You could do a couple of minutes of retrospective, add a sticky note for something you learned in this session, and then what do you see as our next steps and then move on from there with whatever action items come out of that. So that one doesn't require, I think, a lot of facilitation and people can get up and running with that pretty quickly. I also facilitate workshops that are a lot more involved where it's at the other end of the spectrum, where it's a big picture workshop where we're mapping out maybe an entire value stream for an organization. We may have a dozen, 20 people involved in a session like that representing different departments, different organizational silos and in that case, it requires a lot more planning, a lot more thinking through what the goal of the workshop is, who would you need to invite? Because there's a lot more detail involved and a lot more people involved, that could be four, or five multi-hour sessions spread over multiple days to be able to map out an entire value stream from soup to nuts. And then usually the goal of something like that is some kind of system modernization effort, or maybe spinning up a new project, or decomposing a legacy system, or even understanding what a legacy system does, or process improvement that will result inevitably in some software development in certain places. I did a workshop like that, I think last August and out of that, we identified a major bottleneck in the process that everyone in the workshop – I think it was just a bunch of pink stickies in one area – that got called the hot mess. [laughter] It was one area and what was happening was there were several major business concerns that were all coupled together in this system. They actually ended up spinning up a development team to focus on teasing apart the hot mess to figure out how do we decompose that down? JESSICA: Yes. PAUL: As far as I know, that effort was still ongoing as of December. I'm assuming that's still running because it was prioritized as we need to be able to decompose this part of this system to be able to grow and scale to where we want to get to. JESSICA: Yeah. That's a major business risk that they've got. They at least got clarity about where it is. PAUL: Right. Yeah, and what we did from there is I coached the developers through that process over several months. So we actually EventStormed it out at a much lower level. 
Once we figured out what the hot mess was, we said, let's map it out, and then they combined that with some flow charting and a bunch of other more engineering-oriented visualization techniques, state machines, things like that to try and get a handle on what was going on. DAMIEN: We'll get UML in there eventually, right? PAUL: Eventually. [laughter] You can't do software development without some kind of state machine, sequence diagram. JESSICA: And it's approximating UML. You can't do it. You can't do it. [laughter] You will either use it, or you will derive a pidgin form of it. PAUL: Right. Well, I still use it for state diagrams and sequence diagrams when I'm down at that technical level. What I find is that there's a certain level of rigor that UML requires for a sequence diagram, or something like that, that seems to get in the way of collaboration. So EventStorming sacrifices some of that rigor to be able to draw in everyone and have a low bar of entry to having people participate. DAMIEN: That's a huge insight. Why do you think that is? Is it the inability to hold that much information at a high level of rigor, or just people not used to working at that sort of precision and rigor? PAUL: I think that when I'm working with people that are not hands-on coders, they are in the everyday, like say, product managers, or stakeholders, to use those terms. They're in the everyday details of how the business process works and they tend to think of that process more as a series of steps that they're going through in a very specific kind of way. Like, I'm shipping a certain product, or supporting the shipping, or returning, of certain types of products, those kinds of things. Whereas, as developers, we tend to think of it more in terms of the abstractions of the system and what we're trying to implement in the code. So the idea of being able to tell the story of a process in terms of the events that happen is a very natural thing, I find, for people from a business perspective to do because that's how they tend to think about it. Whereas, I think as programmers, we're often taught not so much to think about behavior as a sequence of things happening, but more as structure – we've been taught to design in terms of structures and relationships rather than flow. JESSICA: Yet that's changing with event sourcing. PAUL: I think so. EventStorming and event sourcing become a very natural complement for each other and even event-driven architecture, or any event-driven messaging, whatever it happens to be. The gap between modeling using EventStorming and then designing some kind of event-driven distributed system, or even not distributed, but still event-driven, is much more natural than trying to do something like an entity relationship diagram and then get from that to some kind of meaningful understanding of what's the story of how these functions and features are going to work. JESSICA: On the topic of sacrificing rigor for collaboration, I think you have to sacrifice rigor to work across contexts because you will find contradictions between them. The language does have different meaning before and after the order is submitted and you have to allow for that in the collaboration. It's not that you're not going to have the rigor. It's more that you're postponing it, you're scoping it separately. This meeting is about the higher level and you need completeness over consistency. DAMIEN: Yeah. 
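(A rough illustrative sketch, not from the conversation: the event sourcing point Jessica and Paul touch on above, thinking of behavior as a sequence of events rather than a static structure, can look roughly like the hypothetical Java below. The event names are invented and do not come from the episode.)

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an order whose state is derived from the events that have
// happened to it, which lines up naturally with an EventStorming timeline.
sealed interface OrderEvent permits ItemAddedToCart, OrderSubmitted, PaymentAccepted {}
record ItemAddedToCart(String sku) implements OrderEvent {}
record OrderSubmitted() implements OrderEvent {}
record PaymentAccepted(String receiptId) implements OrderEvent {}

class Order {
    private final List<OrderEvent> history = new ArrayList<>();
    private boolean submitted;

    void apply(OrderEvent event) {
        history.add(event);
        if (event instanceof OrderSubmitted) {
            submitted = true; // the key event: the cart has become an order
        }
    }

    boolean isSubmitted() {
        return submitted;
    }
}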
I feel like almost you have to sacrifice rigor to be effective in most roles and in that way, sacrifice is even the wrong word. Most of the things that we do as human beings do not allow for the sort of rigor of the things that we do as software engineers and things that computers do. JESSICA: Yeah. DAMIEN: And it's just, the world doesn't work that way. PAUL: Right. Well, it's the focus in EventStorming on exploration, discovery, and emergent ideas, versus rigor, which is not so much about exploring and discovery, but about converging on certain things. So when someone says pedant and the other person says pedant, or vice versa, that tends to shut down the conversation because now you are trying to converge on some agreed upon term versus saying, "Well, let's explore a bunch of different ways this could be expressed and temporarily defer trying to converge on one." JESSICA: Later in Slack, we'll vote. PAUL: Yes. JESSICA: Okay. So standardize later. PAUL: Yes. Standardize, converge later, and for now, let's kind of hold that at arm's length so that we can uncover and discover different perspectives on this in terms of how the story works, and then add rigor when we go to code, and then you may discover things in code where there are implicit concepts that you then need to take back to the modeling to try and figure out, well, how do we express this? Coming up with some kind of term in the code and being able to go from there. JESSICA: Right. Some sort of potential return because it hasn't happened yet. PAUL: Exactly. So maybe it's a potential, maybe it's some other kind of potential return, like pending return, maybe we don't call it a return at all. JESSICA: Or disliked item because we could – or unsatisfactory item because we could intercept that and try to like, "Hey, how about we send you the screws that were missing?" PAUL: Right. Yeah, maybe the answer is not a return at all. JESSICA: Yeah. PAUL: But maybe the case is that the customer says they want to return it, but you actually find a way to get them to buy more stuff by sending them something else that they would be happy with. So the idea is we're trying to promote discovery thinking when we are talking about how to understand certain problems and how to solve them rather than closing off options too soon. MANDY: So, Paul, I know you do give these workshops. Is there anything? Where can people find you? How can people learn more? How can people hire you to facilitate a workshop and get in touch with you? PAUL: Okay. Well, in terms of resources, Damien had mentioned at the beginning, I have an eBook up on Leanpub, The EventStorming Handbook, so if people are interested in learning more, they can get that. And then I do workshop facilitation and training through my company, Virtual Genius. They can go to virtualgenius.com and look at what training is available. It's all online these days, so they can participate from anywhere. We have some public workshops coming up in the coming months. And then they can find me, I'm @ThePaulRayner on Twitter, just to differentiate me from all the indefinite articles that are out there. [laughter] MANDY: Sounds good. Well, let's head into reflections. I can start. 
I just was thinking while we were talking about this episode, about how closely this ties into my background in professional writing, technical writing to be exact, and just how you have this process to lay out exactly what steps need to be taken and to differentiate when people say the same things and thinking about, "Well, they're saying the same things, but the words matter," and to get pedantic, that can be a good thing, especially when you are writing technical documents and how-tos. I remember still, my first job being a technical writer and looking at people in a machine shop where it was like, first, you do this, then you do this, then you do this and to me, I was like, "This is so boring." But it makes sense and it matters. So this has been a really good way for me to think about it as a newbie just likening it to technical writing. JESSICA: Yeah. Technical writing has to tell that story. DAMIEN: I'm going to be reflecting on how this has been such a great conversation and I feel like I have a lot of familiarity with at least a very similar process. I brought up all my fears that come from them, which is like, what if we don't have the right person in the room? What if there's something we didn't discover? And you said something about how you can do this in 5 minutes and how you can do this in 15 minutes and I realized, "Oh, this process doesn't have to be the 6-hour things that I've participated in and facilitated in. It can also be done smaller and more iteratively and I can bring this sort of same process and thought process into more of the daily work." So that's super helpful for me. JESSICA: I want to reflect on a phrase that Paul said and then Damien emphasized, which is shared understanding. It's what we're trying to get to in EventStorming across teams and across functions. I think it's also like what we're constantly trying to get to as humans. We value shared understanding so much because we're trapped in our heads and my experience in my head is never going to be the same as your experience in your head. But at some point, we share the same physical world. So if we can get that visual representation, if we can be talking together about something in that visual world, we can pass ideas back and forth more meaningfully. We can achieve this shared understanding. We can build something together. And that feels so good. I think that that constant building of shared understanding is a lot of what it means to be human and I get really excited when I get to do that at work. PAUL: I think I would just add to that as well: being human, I'm very much aware of limitations in terms of how many ideas I can hold in my head at any one time. I know the times where I've been in the experience that Mandy describes where someone's giving me a list of steps to follow and things like that, inevitably I'm like, "Well, I remember like the first two, maybe three," and then everything after that is kind of Charlie Brown. Wah, wah, wah. [laughter] I don't remember anything they said from that point on. But when I can visualize something, then I can take it in one go. I can see it and we're building it together. So for me, it's a little bit of a mind hack in terms of getting over the limitations of how many things I can keep in my mind at one time. Also, like you said, Jess, getting those things out of my mind and out of other people's minds into a shared space where we can actually collaborate on them together, I think that's really important to be able to do that in a meaningful way. 
MANDY: Well, thank you so much for coming on the show today, Paul. We really enjoyed this discussion. And if you, as listeners, would like to continue this conversation, please head over to Patreon.com/greaterthancode. We have a Slack channel. You can pledge and donate to sponsor us as little as a dollar and you can come in, hang out, talk with us about these episodes. If not, give me a DM on Twitter and let me know, and I'll let you in anyway because [laughter] that's what we do here at Greater Than Code. PAUL: Because Mandy's awesome. MANDY: [laughs] Thank you, Paul. With that, thank you everyone for listening and we'll see you again next week. Special Guest: Paul Rayner.

Rails with Jason
119 - Refactoring Techniques and Working with Large Codebases with Dana Kashubeck

Rails with Jason

Play Episode Listen Later Nov 9, 2021 49:44


In this episode, Dana Kashubeck and I discuss working in a rapidly growing environment, deciding when to refactor, the benefits of organizational knowledge, and how to effectively share knowledge as opposed to simply giving answers. Working Effectively with Legacy Code by Michael Feathers

The Mob Mentality Show
Seeing Sociotechnical Systems with Michael Feathers

The Mob Mentality Show

Play Episode Listen Later May 31, 2021 51:57


How do you end up with legacy code? How come things don't go as planned? Is it as simple as one bad process/practice? Bad tech? Bad people? Conway's Law describes how team structure influences software structure. But, does software structure also influence team structure? Is there a bidirectional feedback loop between people and code? Join Chris and Austin as they discuss with Michael Feathers about how we can learn to better see our sociotechnical systems and factor that into our decision-making. In addition we talk about the dynamics of team/org sizing and specialization. Lastly we talk about the impact of Mob Programming on a system and the role of an architect. Video and show notes: https://youtu.be/AC4ZvN6riPE  

The Agile Embedded Podcast
Why Testing Sucks

The Agile Embedded Podcast

Play Episode Listen Later May 11, 2021 37:29


You can find Jeff at https://jeffgable.com.You can find Luca at https://luca.engineer. "Working Effectively with Legacy Code" by Michael Feathers

Software Engineering Unlocked
Legacy code and what to do with it - with Michael Feathers

Software Engineering Unlocked

Play Episode Listen Later Mar 30, 2021 49:44


Today’s episode is sponsored by Botany.io – Botany is a virtual coach for software engineers that unblocks essential teamwork and levels up careers!
Links:
Michael on Twitter
Michael’s essay on how systems are organisms
Website
Colin Breck – Quality Views
Alan Kay at OOPSLA 1997 – The computer revolution hasn’t happened yet
Subscribe on iTunes, Spotify, Google, Deezer, or via RSS. 

The InfoQ Podcast
Michael Feathers: Looking Back at Working Effectively with Legacy Code

The InfoQ Podcast

Play Episode Listen Later Mar 15, 2021 29:57


Several years ago, today's guest Michael Feathers published a book called Working Effectively with Legacy Code. This book introduced ways of wrangling large codebases. In the book, Feathers discussed leveraging unit tests to introduce not only a validation of correctness but also documentation on a system's operation, ways to decouple/modularize monolithic code, and 24 different techniques to introduce change safely. Today on the podcast, Wes Reisz and Michael Feathers go back and review the book. The two spend some time reviewing key concepts from the book and then discuss how the techniques can be applied today. The two wrap with a discussion on what might change in a new version of the book.
Read a transcript of this interview: https://bit.ly/3qREZrL
Subscribe to our newsletters:
- The InfoQ weekly newsletter: bit.ly/24x3IVq
- The Software Architects’ Newsletter [monthly]: www.infoq.com/software-architects-newsletter/
Upcoming Virtual Events - https://events.infoq.com/
InfoQ Live: https://live.infoq.com/ - March 16, 2021 - April 13, 2021 - June 22, 2021 - July 20, 2021
QCon Plus: https://plus.qconferences.com/ - May 17-28, 2021
Follow InfoQ:
- Twitter: twitter.com/InfoQ
- LinkedIn: www.linkedin.com/company/infoq
- Facebook: bit.ly/2jmlyG8
- Instagram: @infoqdotcom
- Youtube: www.youtube.com/infoq
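(The summary above mentions using unit tests as documentation of what a legacy system actually does before changing it. The classic technique from the book is the characterization test; below is a minimal, hypothetical JUnit 5 sketch, where the class under test and its magic numbers are invented stand-ins for real legacy code.)

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// A stand-in for legacy code with undocumented behavior (hypothetical).
class ShippingFeeCalculator {
    int feeInCents(double weightKg) {
        return weightKg > 30 ? 1250 : 700; // nobody remembers why 30, or why 1250
    }
}

// A characterization test pins down what the code currently does, whether or not
// that behavior was ever specified, so later refactoring has a safety net.
class ShippingFeeCalculatorCharacterizationTest {
    @Test
    void recordsCurrentBehaviorForAnOverweightParcel() {
        ShippingFeeCalculator calculator = new ShippingFeeCalculator();
        // Run the legacy code, observe the output, then write the observed value
        // into the assertion. The test now documents today's behavior.
        assertEquals(1250, calculator.feeInCents(32.5));
    }
}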

Salesforce Way
82. Working effectively with legacy code | Michael Feathers

Salesforce Way

Play Episode Listen Later Dec 3, 2020


Michael Feathers is a well-recognized programmer in the IT industry, the author of the famous programming book Working Effectively with Legacy Code, and the founder of R7K Research & Conveyance. Michael talked about the content of his book, his definition of legacy code, how to work with legacy code, why tests are important, and more.
Links:
Michael’s Twitter
Michael’s Blog
Working Effectively with Legacy Code
The deep synergy between testability and good design
R7K Research & Conveyance
Video Teaser
The YouTube Video URL
The post 82. Working effectively with legacy code | Michael Feathers appeared first on SalesforceWay.

Better Software Design
10. On Refactoring the Arkency Way, with Andrzej Krzywda

Better Software Design

Play Episode Listen Later Aug 10, 2020 72:02


Additional materials:
Refactoring: Improving the Design of Existing Code, Martin Fowler with Kent Beck – a classic of the genre
Working Effectively with Legacy Code, Michael Feathers – the second classic, worth reading and keeping on your bookshelf
Fearless Refactoring: Rails Controllers, Andrzej Krzywda – Andrzej's own book on refactoring Rails controllers, mentioned in the episode
Martin Fowler's catalog of refactorings
TrunkBasedDevelopment.com – a treasure trove of knowledge on the Trunk Based Development approach, with both use cases for the technique and useful patterns that solve common problems
Our Instagram profiles:
Andrzej Krzywda's profile
Mariusz Gil's profile
During Andrzej's visit to the studio we also recorded something extra! Feel free to follow my YouTube channel.

Scala Love
Scala Valentines #2

Scala Love

Play Episode Listen Later Jun 21, 2020 28:51


00:31 Foreach vs Traverse
02:36 ZIO documentation
04:41 What Functional Programming Is, What it Isn't, and Why it Matters by Noel Welsh
06:37 Functional Code is Honest Code by Michael Feathers
07:20 Wrapping Java code with Scala
09:58 Lift typeclass
11:18 47 Degrees Academy
13:23 the Moving from Scala 2 to Scala 3 course by Lunatech
14:16 Spark 3 release
15:24 Dotty + Spark =
16:10 Starting with Scala 3 macros: a short tutorial by Adam Warski
18:11 PicoCLI
19:24 Decline
20:00 Scopt
22:34 JetBrains survey
26:12 Top salaries

The Podlets - A Cloud Native Podcast
Application Transformation with Chris Umbel and Shaun Anderson (Ep 19)

The Podlets - A Cloud Native Podcast

Play Episode Listen Later Mar 2, 2020 45:43


Today on the show we are very lucky to be joined by Chris Umbel and Shaun Anderson from Pivotal to talk about app transformation and modernization! Our guests help companies to update their systems and move into more up-to-date setups through the Swift methodology and our conversation focusses on this journey from legacy code to a more manageable solution. We lay the groundwork for the conversation, defining a few of the key terms and concerns that arise for typical clients and then Shaun and Chris share a bit about their approach to moving things forward. From there, we move into the Swift methodology and how it plays out on a project before considering the benefits of further modernization that can occur after the initial project. Chris and Shaun share their thoughts on measuring success, advantages of their system and how to avoid rolling back towards legacy code. For all this and more, join us on The Podlets Podcast, today!
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues
Hosts: Carlisia Campos, Josh Rosso, Duffie Cooley, Olive Power
Key Points From This Episode:
A quick introduction to our two guests and their roles at Pivotal.
Differentiating between application modernization and application transformation.
Defining legacy and the important characteristics of technical debt and pain.
The two-pronged approach at Pivotal; focusing on apps and the platform.
The process of helping companies through app transformation and what it looks like.
Overlap between the Java and .NET worlds; lessons to be applied to both.
Breaking down the Swift methodology and how it is being used in app transformation.
Incremental releases and slow modernization to avoid rolling back to legacy systems.
The advantages that the Swift methodology offers a new team.
Possibilities of further modernization and transformation after a successful implementation.
Measuring success in modernization projects in an organization using initial objectives.
Quotes:
“App transformation, to me, is the bucket of things that you need to do to move your product down the line.” — Shaun Anderson [0:04:54]
“The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way.” — Shaun Anderson [0:17:26]
“Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application.” — Chris Umbel [0:24:16]
“I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. 
This goes from software to even process changes in their organizations.” — Chris Umbel [0:30:58]
Links Mentioned in Today’s Episode:
Chris Umbel — https://github.com/chrisumbel
Shaun Anderson — https://www.crunchbase.com/person/shaun-anderson
Pivotal — https://pivotal.io/
VMware — https://www.vmware.com/
Michael Feathers — https://michaelfeathers.silvrback.com/
Steeltoe — https://steeltoe.io/
Alberto Brandolini — https://leanpub.com/u/ziobrando
Swiftbird — https://www.swiftbird.us/
EventStorming — https://www.eventstorming.com/book/
Stephen Hawking — http://www.hawking.org.uk/
Istio — https://istio.io/
Stateful and Stateless Workload Episode — https://thepodlets.io/episodes/009-stateful-and-stateless/
Pivotal Presentation on Application Transformation: https://content.pivotal.io/slides/application-transformation-workshop
Transcript: EPISODE 19 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.0] CC: Hi, everybody. Welcome back to The Podlets. Today, we have an exciting show. It's myself on, Carlisia Campos. We have our usual guest hosts, Duffie Cooley, Olive Power and Josh Rosso. We also have two special guests, Chris Umbel. Did I say that right, Chris? [0:01:03.3] CU: Close enough. [0:01:03.9] CC: I should have checked before. [0:01:05.7] CU: Umbel is good. [0:01:07.1] CC: Umbel. Yeah. I'm not even the native English speaker, so you have to bear with me. Shaun Anderson. Hi. [0:01:15.6] SA: You said my name perfectly. Thank you. 
This episode is going to be a while to release, but Pivotal was just acquired by VMware. Here we are. [0:03:10.2] SA: It's good to be here. [0:03:11.4] CC: All right. Somebody, one of you, may be let's say Chris, because you've brought this up, how this application organization differs from application transformation? Because I think we need to lay the ground and lay the definitions before we can go off and talk about things and sound experts and make sure that everybody can follow us. [0:03:33.9] CU: Sure. I think you might even get different definitions, even from within our own practice. I'll at least lay it out as I see it. I think it's probably consistent with how Shaun's going to see it as well, but it's what we tell customers anyway. At the end of the day, there are – app transformation is the larger [inaudible] bucket. That's going to include, say just the re-hosting of applications, taking applications from point A to some new point B, without necessarily improving the state of the application itself. We'd say that that's not necessarily an exercise in paying down technical debt, it's just making some change to an application or its environment. Then on the modernization side, that's when things start to get potentially a little more architectural. That's when the focus becomes paying down technical debt and really improving the application itself, usually from an architectural point of view and things start to look maybe a little bit more like rewrites at that point. [0:04:31.8] DC: Would you say that the transformation is more in-line with re-platforming, those of you that might think about it? [0:04:36.8] CU: We'd say that app transformation might include re-platforming and also the modernization. What do you think of that, Shaun? [0:04:43.0] SA: I would say transformation is not just the re-platforming, re-hosting and modernization, but also the practice to figure out which should happen as well. There's a little bit more meta in there. Typically, app transformation to me is the bucket of things that you need to do to move your product down the line. [0:05:04.2] CC: Very cool. I have two questions before we start really digging to the show, is still to lay the ground for everyone. My next question will be are we talking about modernizing and transforming apps, so they go to the clouds? Or is there a certain cut-off that we start thinking, “Oh, we need to – things get done differently for them to be called native.” Is there a differentiation, or is this one is the same as the other, like the process will be the same either way? [0:05:38.6] CU: Yeah, there's definitely a distinction. The re-platforming bucket, that re-hosting bucket of things is where your target state, at least for us coming out of Pivotal, we had definitely a product focus, where we're probably only going to be doing work if it intersects with our product, right? We're going to be doing both re-platforming targeted, say typically at a cloud environment, usually Cloud Foundry or something to that effect. Then modernization, while we're usually doing that with customers who have been running our platform, there's nothing to say that you necessarily need a cloud, or any cloud to do modernization. We tend to based on who we work for, but you could say that those disciplines and practices really are agnostic to where things run. [0:06:26.7] CC: Sorry, I was muted. I wanted to ask Shaun if you wanted to add to that. Do you have the same view? [0:06:33.1] SA: Yeah. I have the same view. 
I think part of what makes our process unique that way is we're not necessarily trying to target a platform for deployment, when we're going through the modernization part anyway. We're really looking at how can we design this application to be the best application it can be. It just so happens that that tends to be more 12-factor compliant that is very cloud compatible, but it's not necessarily the way that we start trying to aim for a particular platform. [0:07:02.8] CC: All right. If everybody allows me, after this next question, I'll let other hosts speak too. Sorry for monopolizing, but I'm so excited about this topic. Again, in the spirit of understanding what we're talking about, what do you define as legacy? Because that's what we're talking about, right? We’re definitely talking about a move up, move forwards. We're not talking about regression and we're not talking about scaling down. We're talking about moving up to a modern technology stack. That means, that implies we're talking about something that's legacy. What is legacy? Is it contextual? Do we have a hard definition? Is there a best practice to follow? Is there something public people can look at? Okay, if my app, or system fits this recipe then it’s considered legacy, like a diagnosis that has a consensus. [0:07:58.0] CU: I can certainly tell you how you can't necessarily define legacy. One of the ways is by the year that it was written. You can certainly say that there are certainly shops who are writing legacy code today. They're still writing legacy code. As soon as they're done with a project, it's instantly legacy. There's people that are trying to define, like another Michael Feathers definition, which is I think any application that doesn't have tests, I don't know that that fits what – our practice necessarily sees legacy as. Basically, anything that's occurred a significant amount of technical debt, regardless of when the application was written or conceived fits into that legacy bucket. Really, our work isn't necessarily as concerned about whether something's legacy or not as much as is there pain that we can solve with our practice? Like I said, we've modernized things that were in for all intents and purposes, quite modern in terms of the year they were written. [0:08:53.3] SA: Yeah. I would double down on the pain. Legacy to us often is something that was written as a prototype a year ago. Now it's ready to prove itself. It's going to be scaled up, but it wasn't built with scale in mind, or something like that. Even though it may be the latest technology, it just wasn't built for the load, for example. Sometimes legacy can be – the pain is we have applications on a mainframe and we can't find Cobol developers and we're leasing a giant mainframe and it's costing a lot of money, right? There's different flavors of pain. It also could be something as simple as a data center move. Something like that, where we've got all of our applications running on Iron and we need to go to a virtual data center somewhere, whether it's cloud or on-prem. Each one of those to us is legacy. It's all about the pain. [0:09:47.4] CU: I think is miserable as that might sound, that's really where it starts and is listening to that pain and hearing directly from customers what that pain is. Sounds terrible when you think about it that you're always in search of pain, but that isn't indeed what we do and try to alleviate that in some way. 
That pain is what dictates the solution that you come up with, because there are certain kinds of pain that aren't going to be solved with say, modernization approach, a more a platformed approach even. You have to listen and make sure that you're applying the right medicine to the right pain. [0:10:24.7] OP: Seems like an interesting thing bringing what you said, Chris, and then what you said earlier, Shaun. Shaun you had mentioned the target platform doesn't necessarily matter, at least upfront. Then Chris, you had implied bringing the right thing in to solve the pain, or to help remedy the pain to some degree. I think what's interesting may be about the perspectives for those on this call and you too is a lot of times our entry points are a lot more focused with infrastructure and platform teams, where they have these objectives to solve, like cost and ability to scale and so on and so forth. It seems like your entry point, at least historically is maybe a little bit more focused on finding pain points on more of the app side of the house. I'm wondering if that's a fair assessment, or if you could speak to how you find opportunities and what you're really targeting. [0:11:10.6] SA: I would say that's a fair assessment from the perspective of our services team. We're mainly app-focused, but it's almost there's a two-pronged approach, where there's platform pain and application pain. What we've seen is often solving one without the other is not a great solution, right? I think that's where it's challenging, because there's so much to know, right? It's hard to find one team or one person who can point out the pain on both sides. It just depends on often, how the customer approaches us. If they are saying something like, “We’re a credit card company and we're getting our butts kicked by this other company, because they can do biometrics and we can't yet, because of the limitations of our application.” Then we would approach it from the app-first perspective. If it's another pain point, where our operations, day two operations is really suffering, we can't scale, where we have issues that the platform is really good at solving, then we may start there. It always tends to merge together in the end. [0:12:16.4] CU: You might be surprised how much variety there is in terms of the drivers for people coming to us. There are a lot of cases where the work came to us by way of the platform work that we've done. It started with our sister team who focuses on the platform side of things. They solve the infrastructure problems ahead of us and then we close things out on the application side. We if our account teams and our organization is really listening to each individual customer that you'll find that there – that the pain is drastically different, right? There are some cases where the driver is cost and that's an easy one to understand. There are also drivers that are usually like a date, such as this data center goes dark on this date and I have to do something about it. If I'm not out of that data center, then my apps no longer run. The solution to that is very different than the solution you would have to, "Look, my application is difficult for me to maintain. It takes me forever to ship features. Help me with that." There's two very different solutions to those problems, but each of which are things that come our way. It's just that former probably comes in by way of our platform team. [0:13:31.1] DC: Yeah, that’s an interesting space to operate in in the application transformation and stuff. 
I've seen entities within some of the larger companies that represent this field as well. Sometimes that's called production engineering or there are a few other examples of this that I'm aware of. I'm curious how you see that happening within larger companies. Do you find that there is a particular size entity that is actually striving to do this work with the tools that they have internally, or do you find that typically, most companies just need something like an application transformation, so you can come in and help them figure this part of it out? [0:14:09.9] SA: We've seen a wide variety, I think. One of them is maybe a company really has a commitment to get to the cloud and they get a platform and then they start putting some simple apps up, just to learn how to do it. Then they get stuck with, "Okay. Now how do we, with trust, get some workloads that are running our business on it?" They will often bring us in at that point, because they haven't done it before. Experimenting with something that valuable to them usually means that they slow down. There's other times where we've come in to modernize applications, whether it's a particular business unit for example, that may have been trying to get off the mainframe for the last two years. They're smart people, but they get stuck again, because they haven't figured out how to do it. What often happens – and Chris can talk about some examples of this – is once we help them figure out how to modernize, or the recipes to follow to start getting their systems systematically onto the platform and modernized, they tend to like forming a competency area around it, where they'll start to staff it with the people who are really interested and they take over where we started from. [0:15:27.9] CU: There might be a little bit of bias to that response, in that typically, in order to even get in the door with us, you're probably a Fortune 100, or at least a 500, or government, or something to that effect. We're going to be seeing people that one, have a mainframe to begin with. Two, would have, say, capacity to fund a dedicated transformation team, or to build a unit around that. You could say that the smaller an organization gets, maybe the easier it is to just have the entire organization just write software the modern way to begin with. At least at the large side, we do tend to see people try to build a – they'll use different names for it. Try to have a dedicated center of excellence or practice around modernization. Our hope is to help them build that and hopefully, put them in a position that that can eventually disappear, because eventually, you should no longer need that as a separate discipline. [0:16:26.0] JR: I think that's an interesting point. For me, I argue that you do need it going forward, because of the cognitive overhead between understanding how your application is going to thrive on today's complex infrastructure models and understanding how to write code that works. I think that one person that has all of that in their head all the time is a little too much, a little too far to go sometimes. [0:16:52.0] CU: That's probably true. When you consider the size of the portfolios and the size of the backlog for modernization that people have, I mean, people are going to be busy on that for a very long time anyway. It's either — even if it is finite, it still has a very long life span at a minimum. [0:17:10.7] SA: At a certain point, it becomes like painting the Golden Gate Bridge. 
As soon as you finish, you have to start again, because of technology changes, or business needs and that kind of thing. It's probably a very dynamic organization, but there's a lot of overlap. The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way. It may be that people are busy modernizing applications off of WebLogic, or WebSphere, and it takes two years or more to get that completed for this enterprise. It was 20, 50 different projects. To them, it was brand-new each time, which is cool actually to come into that. [0:17:56.3] JR: I'm curious, and I'd definitely love to hear it from Olive. I have one more question before I pass it out and I think we'd love to hear your thoughts on all of this. The question I have is when you're going through your day-to-day working on .NET and Java applications and helping people figure out how to go about modernizing them, what we've talked about so far is that represents some of the deeper architectural issues and stuff. You've already mentioned 12-factor apps and being able to move, or thinking about the way that you frame the application as far as inputs of those things that it takes to configure, or to think with the lifecycle of those things. Are there some other common patterns that you see across the two practices, Java and .NET, that you think are just concrete examples of stuff that people should take away maybe from this episode, that they could look at their app – and they're trying to get ahead of the game a little bit? [0:18:46.3] SA: I would say a big part of the commonality that Chris and I both work on a lot is we have a methodology called the SWIFT methodology that we use to help discover how the applications really want to behave, define a notional architecture that is, again, agnostic of the implementation details. We'll often come in with the same process and I don't need to be a .NET expert in a .NET shop to figure out how the system really wants to be designed, how you want to break things into microservices and then the implementation becomes where those details are. Chris and I both collaborate on a lot of that work. It makes you feel a little bit better about the output when you know that the technology isn't as important. You get to actually pick which technology fits the solution best, as opposed to starting with the technology and letting a solution form around it, if that makes sense. [0:19:42.4] CU: Yeah. I'd say the interesting thing is just how difficult it is while we're going through the SWIFT process with customers, to get them to not get terribly attached to the nouns of the technology and the solution. They've usually gone in where it's not just a matter of the language, but they have something picked in their head already for data storage, for messaging, etc., and they're deeply attached to some of these decisions, deeply and emotionally attached to them. Fundamentally, when we're designing a notional architecture as we call it, really you should be making decisions on what nouns you're going to pick based on that architecture to use the tools that fit that. That's generally a bit of a process the customers have to go through. It's difficult for them to do that, because the more technical their stakeholders tend to be, often the more attached they are to the individual technology choices and breaking that is the principal role for us. 
[0:20:37.4] OP: Is there any help, or any investment, or any coordination with those vendors, or the purveyors of the technologies that perhaps legacy applications are built on, or indeed the platforms they're running on, is there any help on that side from those vendors to help with application transformation, or making those applications better? Or do organizations have to rely on a completely independent team, like you guys, to come in and help them with that? Do you understand my point? Is there any internal – like you mentioned WebLogic, WebSphere, do the purveyors of those platforms try and drive the transformation from within there? Or is it that organizations who are running those apps have to rely on independent companies like you, or like us, to help them with that? [0:21:26.2] SA: I think some of it depends on what the goal of the modernization is. If it's something like, we no longer want to pay Oracle licensing fees, then of course, obviously the WebLogic teams aren't going to be happy to help. That's not always the case. Sometimes it's a case where we may have a lot of WebLogic. It's working fine, but we just don't like where it's deployed and we'd like to containerize it, move it to Kubernetes or something like that. In that case, they're more willing to help. At least in my experience, I've found that the technology vendors are rightfully focused just on upgrading things from their perspective and they want to own the world, right? WebLogic will say, "Hey, we can do everything. We have clustering. We have messaging. We've got good access to data stores." It's hard to find a technology vendor that has that broader vision, or the discipline to not try to fit their solutions into the problem, when maybe they're not the best fit. [0:22:30.8] CU: I think it's a broad generalization, but specifically on the Java side it seems that at least with app server vendors, the status quo is usually serving them quite well. Quite often, we're adversary – a bit of an adversarial relationship with them on occasion. I could certainly say that within the .NET space, we've worked relatively collaboratively with Microsoft on things like Steeltoe, which is – I wouldn't say it's a Spring Boot analog, but at least a microservice library that helps people achieve 12-factor cloud nativeness. That's something where I guess Microsoft represents both the legacy side, but also the future side, and we're part of a solution together there. [0:23:19.4] SA: Actually, that's a good point because the other way that we're seeing vendors be involved is in creating operators on the Kubernetes side, or Cloud Foundry tiles, something that makes it easy for their system to still be used in the new world. That's definitely helpful as well. [0:23:38.1] CC: Yeah, that's interesting. [0:23:39.7] JR: Recently, myself and people on my team went through a training from both Shaun and Chris, interestingly enough in Colorado, about this thing called the SWIFT methodology. I know it's a really important methodology to how you approach some of the application transformation-like engagements. Could you two give us a high-level overview of what that methodology is? [0:24:02.3] SA: I want to hear Chris go through it, since I always answer that question first. [0:24:09.0] CU: Sure. I figured since you were the inventor, you might want to go with it, Shaun, but I'll give it a stab anyway. 
The one thing that you'll hear Shaun say all the time that I think is pretty apt, which is we're trying to understand how the application wants to behave. This is a very analog process, especially at the beginning. It's one where we get people who can speak about the business problem behind an application and the business processes behind an application. We get them into a room, a relatively large room typically with a bunch of wall space and we go through a series of exercises with them, where we tease that business process apart. We start with a relatively lightweight version of Alberto Brandolini's EventStorming method, where we map out with the subject matter experts what all of the business events that occur in a system are. That is a non-technical exercise, a completely non-technical exercise. As a matter of fact, all of this uses sticky notes and arts and crafts. After we've gone through that process, we transition into the Boris diagram, which is an exercise of Shaun's design that we take the domains that we've, or at least service candidates that we've extrapolated from that event storming and start to draw out a notional architecture. Like an 80% idea of what we think the architecture is going to look like. We're going to do this for slices of – thin slices of that business problem. At that point, it starts to become something that a software developer might be interested in. We have an exercise called Snappy that generally occurs concurrently, which translates that message flow, Boris diagram thing into something that's at least a little bit closer to what a developer could act upon. Again, these are sticky note and analog exercises that generally go on for about a week or so, things that we do interactively with customers to try to get a purely non-technical way, at least at first, so that we can understand that problem and tell you what an architecture is that you can then act on. We try to position this to the customer as: you already have all of the answers here. What we're going to do as facilitators of these is try to pull those out of your head. You just don't know how to get to the truth, but you already know that truth and we're going to design this architecture together. How did I do, Shaun? [0:26:44.7] SA: I couldn't have said it better myself. I would say one of the interesting things about this process is the reason why it was developed the way it was is because in the world of technology and especially engineers, I've definitely seen that you have two modes of thought when you come from the business world to the technical world. Often, engineers will approach a problem in a very different way – a very focused, blindered way – than business folks. Ultimately, what we try to think of is that the purpose for the software is to enable the business to run well. In order to do that, you really need to understand at least at a high-level, what the heck is the business doing? Surprisingly and almost consistently, the engineering team doing the work is separated from the business team enough that it's like playing the telephone game, right? Where the business folks say, "Well, I told them to do this." The technical team is like, "Oh, awesome. Well then, we're going to use all this amazing technology and build something that really doesn't support you." This process really brings everybody together to discover how the system really wants to behave. Also as a side effect, you get everybody agreeing that yes, that is the way it's supposed to be. 
It's exciting to see teams come together that have actually never even worked together. You see the light bulbs go on and say, "Oh, that's why you do that." The end result is that in a week, we can go from nobody really knowing each other, or quite understanding the system as a whole, to having a backlog of work that we can prioritize based on the learnings that we have, and feeling pretty comfortable that the end result is going to be pretty close to how we want to get there. Then the biggest challenge is defining how we get from point A to point B. That's part of the layering of the Swift method: knowing when to ask those questions. [0:28:43.0] JR: A micro follow-up and then I'll keep my mouth shut for a little bit. Is there a place that people could go online to read about this methodology, or just get some ideas of what you just described? [0:28:52.7] SA: Yeah. You can go to swiftbird.us. That has a high-level overview of the more public-facing side of how the methodology works. Then there's also internal resources that are constantly being developed as well. That's where I would start. [0:29:10.9] CC: That sounds really neat. As always, we are going to have links in the show notes for all of this. I checked out the website for the EventStorming book. There is a resources page there that has a list of a bunch of presentations. Sounds very interesting. I wanted to ask Chris and Shaun, have you ever seen, or heard of, a case where a company went through the transformation or modernization process and then rolled back to their legacy system for any reason? [0:29:49.2] SA: That's actually a really good question. It implies that often, the way people think about modernization would be more of a big bang approach, right? Where at a certain point in time, we switch to the new system. If it doesn't work, then we roll back. Part of what we try to do is have incremental releases, where we're actually putting small slices into production, where you're not rolling back a whole system from modern back to legacy. It's more that you have a week's worth of work that's going into production for one of the thin slices, like Chris mentioned. If that doesn't work, where there's something that is unexpected about it, then you're rolling back just a small chunk. You're not really jumping off a cliff for modernization. You're really taking baby steps. If it's two steps forward and one step back, you're still making a lot of really good progress. You're also gaining confidence as you go that in the end, in two years, you're going to have a completely shiny new modern system and you're comfortable with it, because you're getting there an inch at a time, as opposed to taking a big leap. [0:30:58.8] CU: I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. This goes from software to even process changes in their organizations. They've become so used to that that it often doesn't even cross their mind that it's possible to do something incrementally. We really do oftentimes have to spend time getting buy-in from them on that approach. You'd be surprised that even in industries that you'd think would be fantastic at managing risk, when you look at how they actually deal with deployment of software and the rolling out of software, they're oftentimes taking approaches that maximize their risk. There's no better way to maximize risk than doing a big bang. 
Yeah, as Shaun mentioned, the specifics of Swift are to find a way for you to understand where to start and get a roadmap for how to carve out incremental slices, so that you can strangle a large monolithic system slowly over time. That's something that's pretty powerful. Once someone gets bought in on that, they absolutely see the value, because they're minimizing risk. They're making small changes. They're easy to roll back one at a time. You might see people who would stop somewhere along the way, and we wouldn't necessarily say that that's a problem, right? Just like not every app needs to be modernized, maybe there are portions of systems that could stay where they are. Is that a bad thing? I wouldn't necessarily say that it is. Maybe that's the best way for that organization. [0:32:35.9] DC: We've bumped into this idea now a couple of different times and I think that both Chris and Shaun have brought this up. It's a little prelude to a show that we are planning on doing. One of the operable quotes from that show is the greatest enemy of knowledge is not ignorance, it is the illusion of knowledge. It's a quote by Stephen Hawking. It speaks exactly to that, right? When you come to a problem with a solution already in your mind, it is frequently difficult to understand the problem on its merits, right? It's really interesting seeing that crop up again in this show. [0:33:08.6] CU: I think oftentimes the advantage of a very discovery-oriented method such as Swift is that it allows you to start from scratch with a problem set, with people you maybe aren't familiar with, where you don't have some of that baggage and can ask the dumb questions to get to some of the real answers. Another phrase that I know Shaun likes to use is that our role as facilitators of this method is to ask dumb questions. I mean, you just can't put enough value on that, right? The only way that you're going to break that established thinking is by asking questions at the root. [0:33:43.7] OP: One question, actually there was something recently that happened in the Kubernetes community, which I thought was pretty interesting and I'd like to get your thoughts on it, which is that Istio, which is a project that operates as a service mesh, I'm sure you all are familiar with it, has recently decided to unmodernize itself in a way. It was originally developed as a set of microservices. They have had no end of difficulty in optimizing the different interactions between those services and the nodes. Then recently, they decided to change that. This might be a good example of when to monolith, versus when to microservice. I'm curious what your thoughts are on that, or if you have familiarity with it. [0:34:23.0] CU: I'm not going to necessarily speak too much to this. Time will tell whether the monolithing that they're doing at the moment is appropriate or not. Quite often, the starting point for us isn't necessarily a monolith. What it is is a proposed architecture coming from a customer that they're proud of: this is my microservice design. You'll see a simple system with maybe hundreds of nano-services. The surprise that they have is that the recommendation from us coming out of our Swift sessions is that actually, you're overthinking this. We're going to take that idea that you have anyway and maybe shrink it down to tens of services, or just a handful of services. 
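As a rough illustration of what carving out incremental slices and strangling a monolith can look like in code, here is a minimal TypeScript sketch (the routes, URLs, and service names are assumptions for illustration, not from the episode) of a routing facade that sends one carved-out slice to a new service while everything else still hits the legacy system; rolling the slice back is then a one-line change.

```typescript
// Hypothetical strangler-style routing facade: one thin slice goes to the new
// service, everything else continues to hit the legacy monolith.
const routes: Record<string, string> = {
  "/orders": "https://orders.new.example.com", // the carved-out slice
};

const LEGACY_BASE = "https://legacy.example.com"; // assumed legacy endpoint

function resolveBackend(path: string): string {
  // First matching prefix wins; otherwise fall through to the monolith.
  for (const [prefix, target] of Object.entries(routes)) {
    if (path.startsWith(prefix)) {
      return target + path;
    }
  }
  return LEGACY_BASE + path;
}

// Rolling the slice back is just deleting the entry from `routes`.
console.log(resolveBackend("/orders/42"));  // -> new service
console.log(resolveBackend("/billing/7"));  // -> legacy monolith
```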
I think one of the mistakes that people make within enterprises, or with microservices at the moment, is to say, "Well, that's not a microservice. It's too big." Well, what size dictates a microservice, right? Oftentimes, we are, at least conceptually, taking and combining services based on the customer's architecture; that's very common. [0:35:28.3] SA: Monoliths aren't necessarily bad. I mean, people use them almost as a pejorative, "Oh, you have a monolith." In our world it's like, well, monoliths are bad when they're bad. If they're not bad, then that's great. The corollary to that is micro-servicing for the sake of micro-servicing isn't necessarily a good thing either. When we go through the Boris exercise, really what we're doing is showing how domains, or capabilities, relate to each other. That happens to map really well, in our opinion, to first-cut microservices, right? You may have an order service, or a customer service that manages some of that. Just because we map capabilities and how they relate to each other, it doesn't mean the implementation can't be a single monolith, componentized inside, right? That's part of what we try really hard to do: avoid the religion of monolith versus microservices, or even having to spend a lot of time trying to define what a microservice is to you. It's really more of, well, a system wants to behave this way. Now, surprise, you just did domain-driven design and mapped out some good 12-factor compliant microservices, should you choose to build it that way, but there are other constraints that always apply at that point. [0:36:47.1] OP: Is there more traction in organizations implementing this methodology on a net new business, rather than currently running businesses or applications? Are there situations you have seen where a new project, or new functionality within a business, starts to drive and implement this methodology and then it creeps through to the other lines of business within the organization, because that first one was successful? [0:37:14.8] CU: I'd say that based on the nature of who our customers are as an app transformation practice, based on who those customers are and what their problems are, we're generally used to having a starting point of a process, or software that exists already. There's nothing at all to mandate that it has to be that way. As a matter of fact, with folks from our labs organization, we've used these methods in what you could probably call greener fields. At the end of the day, when you have a process, or even a candidate process, something that doesn't exist yet, as long as you can get those ideas onto sticky notes and onto a wall, this is a very valid way of turning ideas into an architecture and an architecture into software. [0:37:59.4] SA: We've seen that happen in practice a couple of times, where maybe a piece of the methodology was used, like EventStorming, just to get a feel for how the business wants to behave. Then to rapidly try something out in maybe more of an evolutionary architecture approach, an MVP approach of let's just build something from a user perspective, just to solve this problem, and then try it out. If it starts to catch hold, then iterate back and drill into it a little bit more and say, "All right. Now we know this is going to work." We're modernizing something that may be two weeks old just because, hooray, we proved it's valuable. 
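To illustrate the point about a single monolith that is componentized inside, here is a minimal TypeScript sketch (the module and type names are hypothetical) where domain components talk to each other only through interfaces, so the same boundaries could later be split into separate services without changing the callers.

```typescript
// A hypothetical componentized monolith: domain modules behind interfaces.
// The boundaries mirror the capability map, but everything ships as one deployable.
interface InventoryComponent {
  reserve(sku: string, quantity: number): boolean;
}

class InMemoryInventory implements InventoryComponent {
  private stock = new Map<string, number>([["BOOK-001", 10]]);

  reserve(sku: string, quantity: number): boolean {
    const available = this.stock.get(sku) ?? 0;
    if (available < quantity) return false;
    this.stock.set(sku, available - quantity);
    return true;
  }
}

class OrderModule {
  // Depends only on the interface, so Inventory could later become its own service.
  constructor(private inventory: InventoryComponent) {}

  place(sku: string, quantity: number): string {
    if (!this.inventory.reserve(sku, quantity)) {
      throw new Error(`Insufficient stock for ${sku}`);
    }
    return `order-${Date.now()}`;
  }
}

const orders = new OrderModule(new InMemoryInventory());
console.log(orders.place("BOOK-001", 2));
```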
We didn't necessarily have to spend as much upfront time on designing that as we would on a system that's already proven itself to be of business value. [0:38:49.2] OP: This might be a bit of a broad question, but what defines success for projects like this? I mean, we mentioned earlier about cost, and maybe some of the drivers are to move off certain mainframes and things like that. If you're undergoing an application transformation, it seems to me like it's an ongoing thing. How do enterprises try to evaluate that return on investment? How does it relate to success criteria? I mean, faster release times, etc., potentially might be one, but how is that typically evaluated, with somebody internally saying, "Look, we are running a successful project"? [0:39:24.4] SA: I think part of what we try to do upfront is identify what the objectives are for a particular engagement. Often, those objectives start out with one thing, right? It's too costly to keep paying Oracle or IBM for WebLogic or WebSphere. As we go through and talk through what types of things we can solve, those objectives get added to, right? It may be that the first thing, our primary objective, is we need to start moving workloads off of the mainframe, or workloads off of WebLogic, or WebSphere, or something like that. There are other objectives that are part of this too, which can include things as interesting as developer happiness, right? They have a large team of 150 developers that are really just getting sick of doing the same old thing and want to have new technology. That's actually a success criterion maybe down the road a little bit, but it's more of a nice-to-have. It's a long-winded way of saying that when we start these and when we incept these projects, we usually start out with let's talk through what our objectives are and how we measure success, those key results for those objectives. As we're iterating through, we keep measuring ourselves against those. Sometimes the objectives change over time, which is fine, because you learn more as you're going through it. Part of that incremental, iterative process is measuring yourself along the way, as opposed to waiting until the end. [0:40:52.0] CC: Yeah, makes sense. I guess these projects are, as you say, continuous and constantly self-adjusting and self-analyzing to re-evaluate success criteria as they go along. Yeah, so that's interesting. [0:41:05.1] SA: One other interesting note, though: personally, we like to measure ourselves by whether, when we see one project moving along, the customers start to form other projects that are similar; then we know, "Okay, great. It's taking hold." Now other teams are starting to do the same thing. We've become the cool kids and people want to be like us. The only reason that happens is when you're able to show success, right? Then other teams want to be able to replicate that. [0:41:32.9] CU: The customer's OKRs oftentimes can be a little bit easier to understand. Sometimes they're not. Typically, they involve time or money, where I'm trying to take release times from X to Y, or decrease my spend from X to Y. The way that I think we measure ourselves as a team is around how clean we leave the campsite when we're done. We want the customers to be able to run with this and to continue to do this work and to be experts. As much as we'd love to take money from someone forever, we have a lot of people to help, right? 
Our goal is to help to build that practice and center of excellence and expertise within an organization, so that as their goals or ideas change, they have a team to help them with that, and we can ride off into the sunset and go help other customers. [0:42:21.1] CC: We are coming up to the end of the episode, unfortunately, because this has been such a great conversation. It turned out to be more of an interview style, which was great. It was great getting the chance to pick your brains, Chris and Shaun. Going along with the interview format, I like to ask you, is there any question that wasn't asked, but that you wish had been asked? The intent here is to illuminate what this process is like, for us and for people who are listening, especially people who might be in pain but might be thinking this is just normal. [0:42:58.4] CU: That's an interesting one. I guess to some degree, that pain is unfortunately normal. That's just unfortunate. Our role is to help solve that. I think complacency is the absolute worst thing in an organization. If there is pain, rather than saying that the solution won't work here, let's start to talk about solutions to that. We've seen customers of all shapes and sizes. No matter how large or cumbersome they might be, we've seen a lot of big organizations make great progress. If your organization's in pain, you can use them as an example. There is light at the end of the tunnel. [0:43:34.3] SA: It's usually not a train. [0:43:35.8] CU: Right. Usually not. [0:43:39.2] SA: Other than that, I think you asked all the questions about what we always try to convey to customers: how we do things, what modernization is. There's probably a little bit about re-platforming, doing the bare minimum to get something onto the cloud. We didn't talk a lot about that, but it's a little bit less meta, anyway. It's more technical and more recipe-driven as you discover what the workload looks like. It's more about, is it something we can easily do a CF push on, or just create a container and move it up to the cloud with minimal changes? Conceptually, there's not a lot of complexity. Implementation-wise, there are still a lot of challenges there too. They're not as fun to talk about, for me anyway. [0:44:27.7] CC: Maybe that's a good excuse to have some of our colleagues back on here with you. [0:44:30.7] SA: Absolutely. [0:44:32.0] DC: Yeah, in a previous episode we talked about persistence and state and those sorts of things, and how they relate to your applications, and how, when you're thinking about re-platforming, even just where you're planning on putting those applications. For us, that question comes up quite a lot. That's almost step zero: trying to figure out the state model and those sorts of things. [0:44:48.3] CC: That episode was named States in Stateless Apps, I think. We are at the end, unfortunately. It was so great having you both here. Thank you Duffie, Shaun, Chris, and, going by the order I'm seeing people on my video, Josh and Olive. Until next time. Please make sure to let us know your feedback. Subscribe. Give us a thumbs up. Give us a like. You know the drill. Thank you so much. Glad to be here. Bye, everybody. [0:45:16.0] JR: Bye all. [0:45:16.5] CU: Bye. [END OF EPISODE] [0:45:17.8] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. 
[END]

Maintainable
Michael Feathers: Be Curious & Chase The Rabbit Holes

Maintainable

Play Episode Listen Later Oct 21, 2019 32:14


Robby speaks with programmer, speaker, and author Michael Feathers, most notably known as the author of the book Working Effectively with Legacy Code. Helpful Links: [Book] Working Effectively with Legacy Code; Follow Michael on Twitter; R7K Research & Conveyance; R7K Research & Conveyance on Twitter; Socio-Technical Seeing: Modeling The Dynamics Of Code And Attention; Michael's blog. Subscribe to Maintainable on: Apple Podcasts, Overcast, or search "Maintainable" wherever you stream your podcasts.

Becomex Lab
Desenvolvedor do Futuro Série 2 - Episódio 6 - SOLID

Becomex Lab

Play Episode Listen Later Sep 29, 2019 13:30


Do you know the best practices behind SOLID? That's the topic Rodolfo Dalla Costa and Bruno Duran discuss today in Becomex Lab's Desenvolvedores do Futuro series. Learn the meaning of the acronym SOLID, which was coined by Michael Feathers and represents the five object-oriented programming principles identified by Robert Cecil Martin (Uncle Bob) in the early 2000s. Want to know what each letter stands for? Listen to the podcast!

Weekly Dev Tips
Introducing SOLID Principles

Weekly Dev Tips

Play Episode Listen Later May 6, 2019 7:26


Hi and welcome back to Weekly Dev Tips. I’m your host Steve Smith, aka Ardalis. This is episode 47, in which we'll introduce the SOLID principles. I'll spend a little time reviewing these principles in the upcoming episodes. What are the SOLID principles of object-oriented design? Sponsor - devBetter Group Career Coaching for Developers Are you a software developer looking to advance in your career more quickly? Would you find a mentor and a group of like-minded professionals valuable? If so, check out devBetter.com and read the testimonials at the bottom of the page. Sign up for a risk free membership if you're interested in growing your network and skills with us. Show Notes / Transcript Depending on how long you've been programming, you may have heard of the SOLID principles. These are a set of 5 principles that have been around for several decades, and about 15 years ago someone - I think it was Michael Feathers - had the idea to arrange them in such a way that they formed the macronym SOLID. Prior to that, I think the first time they were all published together was in Robert C. Martin's 2003 book, Agile Software Development: Principles, Patterns, and Practices in which their sequence spelled SOLDI - so close! This same sequence was used in the 2006 book Agile Principles, Patterns, and Practices in C#. So what are the SOLID principles? As I mentioned, SOLID is a macronym, meaning it is an acronym formed by other acronyms. In this case, these are SRP, OCP, LSP, ISP, and DIP. All those Ps at the end of each acronym stand for principle, of course. Listing each principle, we have: Single Responsibility Open/Closed Liskov Substitution Interface Segregation Dependency Inversion You may already be familiar with these principles. If you're a developer who's using a strongly typed language like C# or Java, you should be extremely familiar with them. If you're not, I recommend digging into them more deeply. Applying them can make a massive difference in the quality of code you write. How do I define quality? Well, that's probably a topic I could devote an episode to, but the short version is that quality code is code that is easy to understand and easy to change to suit new requirements. It's easily and quickly tested by automated tests, which reduces the need for expensive manual testing. And it's loosely coupled to infrastructure concerns like databases or files. How do these principles help you to write quality code? They provide guidance. You need to write code that solves a problem, first and foremost. But once you have code that does that, before you call it done and check it in, you should evaluate its design and see if it makes sense to spend a few moments cleaning anything up. Back in Episode 6 - you are listening to these in sequential, not reverse, order, right? - I talked about Kent Beck's approach of Make It Work, Make It Right, Make It Fast. SOLID principles should generally be applied during the Make It Right step. Don't apply them up front, but as I discussed in Episode 10, follow Pain Driven Development. If you try to apply every principle to every part of your codebase from the start, you'll end up with extremely abstract code that could do anything but actually does nothing. Don't do that. Instead, build the code you need to solve the problem at hand, and then evaluate whether that code has any major code smells like I discussed in episode 30. 
One huge code smell is code that is hard to unit test, meaning it's hard to write an automated test that can just test your code, without any external infrastructure or dependencies like databases, files, or web servers. Code that is easy to unit test is generally easy to change, and code that has tests is also easier to refactor because when you're done you'll have some degree of confidence that you haven't broken anything. In upcoming episodes, I'll drill into each principle a bit more. I've published two courses on SOLID at Pluralsight where you can obviously learn a lot more and see real code as opposed to just hearing me through a podcast. The first one was published in 2010 and so the tools and look were a bit dated. The more recent one is slimmed down and uses the latest version of Visual Studio and .NET Core. There are links to both courses in the show notes - the original one also covers the Don't Repeat Yourself principle. Let me wrap this episode up with a very brief overview of each principle. The Single Responsibility Principle is generally applied to classes and suggests that classes should have only one responsibility, which can also be thought of as one reason to change. Responsibilities include things like business logic, ui logic, data access, and more. Following this principle, you'll tend to have smaller, more focused classes. The Open/Closed Principle suggests that you should be able to change the behavior of your system without changing its source code. This generally relies on some kind of parameter or plug-in capability to provide new behavior to an existing class or service. The Liskov Substitution Principle cautions against creating inheritance hierarchies in which child types are not 100% substitutable for their base types. When violated, this can result in messy code and bugs. The Interface Segregation Principle suggests that classes that use interfaces should use all or most of the interface's exposed members. This then leads to interfaces that are small, focused, and cohesive, much like SRP. Finally, the Dependency Inversion Principle recommends that low-level concerns depend on high level concerns, not the other way around. This means for example that business layer code shouldn't directly depend on data access code, but rather an abstraction should exist that the business code works with and that the data access code implements. At runtime, the data access code will be provided as an implementation of the interface the business code is written to work with, providing loose coupling and more testable code. Show Resources and Links devBetter SOLID Principles for C# on Pluralsight Refactoring Fundamentals on Pluralsight SOLID Principles of Object Oriented Design on Pluralsight (published in 2010) Agile Software Development: Principles, Patterns, and Practices on Amazon Agile Principles, Patterns, and Practices in C# on Amazon See Visualizations and Subscribe to WeeklyDevTips in YouTube Clean Code on Amazon That’s it for this week. If you want to hear more from me, go to ardalis.com/tips to sign up for a free tip in your inbox every Wednesday. I'm also streaming programming topics on twitch.tv/ardalis most Fridays at noon Eastern Time. Thank you for subscribing to Weekly Dev Tips, and I'll see you next week with another great developer tip.
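As a concrete illustration of that last point, here is a minimal TypeScript sketch of the Dependency Inversion Principle (the episode's own examples are C#-based on Pluralsight, and the names here are hypothetical): the business code depends on an abstraction, the data access code implements it, and a unit test can supply a fake without touching a database.

```typescript
// Minimal Dependency Inversion sketch: business logic depends on an
// abstraction; data access implements it; tests can substitute a fake.
interface CustomerRepository {
  findEmail(customerId: string): string | undefined;
}

// High-level business code: knows nothing about databases.
class WelcomeEmailService {
  constructor(private repository: CustomerRepository) {}

  buildGreeting(customerId: string): string {
    const email = this.repository.findEmail(customerId);
    if (!email) throw new Error(`Unknown customer ${customerId}`);
    return `Welcome! A confirmation was sent to ${email}.`;
  }
}

// Low-level detail: one possible implementation (could be SQL, an API, etc.).
class InMemoryCustomerRepository implements CustomerRepository {
  private data = new Map<string, string>([["42", "ada@example.com"]]);
  findEmail(customerId: string): string | undefined {
    return this.data.get(customerId);
  }
}

// The wiring happens at the edge of the application, not inside the service.
const service = new WelcomeEmailService(new InMemoryCustomerRepository());
console.log(service.buildGreeting("42"));
```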

Cross Cutting Concerns Podcast
Podcast 120 - Dennis Stepp on Risk Based Analysis

Cross Cutting Concerns Podcast

Play Episode Listen Later May 5, 2019 17:10


Dennis Stepp is prioritizing tests based on risk. This episode is not sponsored! Want to be a sponsor? You can contact me or check out my sponsorship gig on Fiverr Show Notes: Mind Mapping The four factors of risk-based analysis: Domain, risks, impact, likelihood I threw out the term systemic risk Books: Clean Code by Robert C. Martin The Phoenix Project by Gene Kim A Seat at the Table by Mark Schwartz Making Work Visible by Dominica DeGrandis Working Effectively with Legacy Code by Michael Feathers Dennis-Stepp.com Dennis is on Twitter Want to be on the next episode? You can! All you need is the willingness to talk about something technical.
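Those four factors lend themselves to a simple scoring sketch. The following TypeScript is a hypothetical illustration (the domains, numbers, and weighting are assumptions, not from Dennis's approach) of ranking test areas by impact times likelihood.

```typescript
// Hypothetical risk-based prioritization: score = impact x likelihood,
// grouped by domain, so the riskiest areas get tested first.
interface RiskItem {
  domain: string;
  risk: string;
  impact: number;     // 1 (low) .. 5 (severe)
  likelihood: number; // 1 (rare) .. 5 (frequent)
}

const items: RiskItem[] = [
  { domain: "Payments", risk: "Double charge", impact: 5, likelihood: 3 },
  { domain: "Search", risk: "Stale results", impact: 2, likelihood: 4 },
];

const prioritized = [...items].sort(
  (a, b) => b.impact * b.likelihood - a.impact * a.likelihood
);

prioritized.forEach(i =>
  console.log(`${i.domain}: ${i.risk} (score ${i.impact * i.likelihood})`)
);
```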

Hipsters Ponto Tech
Microsserviços na Caelum – Hipsters On The Road #6

Hipsters Ponto Tech

Play Episode Listen Later Apr 23, 2019


Microservices-based systems have grown a lot in recent years. But is this something you should be using too? Time to find out! Participants: Gabs Ferreira, the host who thinks you really should just put microservices in everything; Alberto Souza, the co-host who is cautious and thinks you have to be careful with this approach; Alexandre Aquiles, instructor and developer at Caelum Brasília. Links: Microservices with Spring Cloud course; Document with Alexandre's study notes; Fallacies of distributed systems; Article in which Martin Fowler argues for starting with a Monolith; Article in which Stefan Tilkov argues that we should start with Microservices; Article in which Henrique Lobo shows that Microservices is SOA; Article in which Martin Fowler lists what he thinks are the prerequisites for adopting Microservices; Article in which Michael Feathers argues that there is something like a "law of conservation of complexity" in software. Production and content: Alura, online technology courses - https://www.alura.com.br === Caelum Ensino e Inovação – https://www.caelum.com.br/ Editing and sound design: Radiofobia Podcast e Multimídia

React Round Up
RRU 056: React Conf 2018 with Adam Laycock

React Round Up

Play Episode Listen Later Apr 9, 2019 56:37


Sponsors Netlify Sentry use the code “devchat” for $100 credit Triplebyte offers a $1000 signing bonus CacheFly Panel Charles Max Wood Nader Dabit Justin Bennett Joined by Special Guest: Adam Laycock Summary Adam Laycock describes his experience at React Conf 2018, the atmosphere, the people and the talks. The panel shares how they approach conferences, taking notes, getting to know people, accessing information and getting out of their comfort zone. Adam shares some of the major topics covered, including hooks, suspense, and concurrent rendering. The panel considers these topics and React conferences they look forward to attending. The episode ends with the panel comparing Angular and React, conferences, upgrades, and routers for React. Links https://www.microsoft.com/en-us/build https://medium.com/curated-by-versett/talks-worth-watching-react-conf-2018-bfbdd40922aa https://reactjs.org/community/conferences.html https://twitter.com/atlaycock https://github.com/alaycock https://adamlaycock.ca/ https://medium.com/@adam.laycock https://twitter.com/reactroundup https://www.facebook.com/React-Round-Up Picks Charles Max Wood https://www.notion.so/ The Effective Executive by Peter F. Drucker http://entreprogrammers.com/ Michael Feathers Kent Beck Nader Dabit https://dev.to/dabit3 Justin Bennett https://github.com/Bogdan-Lyashenko/codecrumbs https://medium.com/palantir/tslint-in-2019-1a144c2317a9 https://www.npmjs.com/package/rate-limiter-flexible Adam Laycock https://kentcdodds.com/blog/please-stop-building-inaccessible-forms-and-how-to-fix-them https://medium.com/curated-by-versett/dont-eject-your-create-react-app-b123c5247741 Clean Architecture: A Craftsman's Guide to Software Structure and Design by Robert C. Martin

STEM on FIRE
78: Computer Science – You need to focus on the human side to solve complex problems – Zoey Gagnon

STEM on FIRE

Play Episode Listen Later Mar 3, 2019 20:21


Zoey Gagnon earned a Computer Science Degree from The Metropolitan State University of Denver and is an Engineering Manager at Meetup. [0:00] Zoey has focused on Agile development, builds large complex things with teams of people in humane ways, and chose Computer Science in large part due to the degrees being offered. [1:50] Goes into a little bit about PHP and microservices and moving them to newer technologies. [5:45] Goes into what a day might look like; as a manager, Zoey is more of a coach. [8:40] What has Zoey fired up today is thinking about software architecture, and she goes into ways to make architecture decisions. You need to have patience when making decisions that will be hard to change in the future. [12:15] Getting through college: have an idea of the value that you want to get out of college. Go to college when you can really understand why you are going. [14:00] Software can be a very creative path, as software is the path to solving problems and there are many ways to solve them. [15:40] Best advice is to have patience, to listen, and to give others space to express themselves. An attribute for success is to be an avid note taker. Zoey really likes the web page Learn X in Y Minutes for syntax translation from language to language. And a book Zoey recommends is “Working Effectively with Legacy Code” by Michael Feathers. [19:15] Parting advice: understand that the technology problems that we are trying to solve are very complex, too complex for a single person, so you need to focus on those human elements to work with people. Free Audio Book from Audible: You can get a free book from Audible at www.stemonfirebook.com and can cancel within 30 days and keep the book of your choice at no cost.

Lambda3 Podcast
Lambda3 Podcast 126 – Princípios SOLID

Lambda3 Podcast

Play Episode Listen Later Jan 18, 2019 38:13


SOLID is an acronym created by Michael Feathers that represents the five principles identified by Robert Cecil Martin (Uncle Bob) around the year 2000, and in this podcast we talk about each principle in a relaxed and pragmatic way. Podcast feed: www.lambda3.com.br/feed/podcast Feed with technical episodes only: www.lambda3.com.br/feed/podcast-tecnico Feed with non-technical episodes only: www.lambda3.com.br/feed/podcast-nao-tecnico Agenda: What is it? Origin Single Responsibility Principle Open Closed Principle Liskov Principle Interface Segregation Principle Dependency Inversion Links mentioned: Design Principles Orientação a Objetos e SOLID para Ninjas (Object Orientation and SOLID for Ninjas) Robert C. Martin's original article The book that gave rise to the OCP Participants: Lucas Teles - @lucasteles42 Marcela Carvalho - @marcela_oak Daniela Rocha - @danimsrocha Editing: Luppi Arts Credits for the music used in this program: Music by Kevin MacLeod (incompetech.com) licensed under Creative Commons: By Attribution 3.0 - creativecommons.org/licenses/by/3.0

Tech Done Right
Episode 53: Tribal Knowledge and Onboarding with Annie Sexton

Tech Done Right

Play Episode Listen Later Jan 16, 2019 36:49


Tribal Knowledge and On-boarding with Annie Sexton TableXI offers training for developers and product teams! For more info, email workshops@tablexi.com. Guest Annie Sexton (https://twitter.com/anniethesexton): Core Support Engineer at Heroku (https://www.heroku.com/). Traveler. Amateur graphic novelist. More at momotarocomic.com/ (http://momotarocomic.com/). Summary Developers and teams build up a lot of knowledge about their code and their process which never gets written down and which makes it harder to get new team members up to speed. Our guest, Annie Sexton, is a support engineer for Heroku and has to deal with not only Heroku’s vast amount of knowledge, but also the unwritten information of many of her support customers. We’ll talk about the practical things Annie recommends to help make this knowledge explicit, and how your team can improve its group memory and team on-boarding. We’d also like to hear from you. Is there something your team has done to write down the things everybody knows? Let us know at http://techdoneright.io/53 (http://techdoneright.io/53) or on Twitter at @techdoneright (http://twitter.com/tech_done_right). Notes 01:51 - Why Tribal Knowledge is a Bad Thing Annie’s RubyConf Talk: The Dangers of Tribal Knowledge (https://www.youtube.com/watch?v=o-JL-so5Gm8) 04:50 - Legacy Code Noel Rappin: The Road To Legacy Is Paved With Good Intentions -- WindyCityRails, Sept 2017 (https://www.youtube.com/watch?v=NGIhW3nREac&list=PLP0HXAd1Anx3xVPvdnKXtlsqJhoZHBFF_&index=1) 06:38 - Capturing Tribal Knowledge 12:55 - Keeping Things Up-To-Date 15:57 - When the “Why” and the “Overview” Get Lost 17:49 - Becoming Immune to Complexity 20:39 - Tools for Documentation 28:50 - Convincing Others that Documentation is Important 33:31 - Planning for Succession Related Episodes Your First 100 Days Onboarding A New Employee With Shay Howe and John Gore (https://www.techdoneright.io/37) Your First 100 Days at a New Company with Katie Gore and Elizabeth Trepkowski Hodos (https://www.techdoneright.io/36) Avoiding Legacy Code with Michael Feathers (https://www.techdoneright.io/11) Special Guest: Annie Sexton.

The Agile Revolution
Episode 145: Working Effectively with (Legacy) Code with Michael Feathers

The Agile Revolution

Play Episode Listen Later Oct 27, 2018 33:30


Craig is in Atlanta at Agile 2016 and catches up with Michael Feathers, author of “Working Effectively with Legacy Code”, and they talk about the following: Working Effectively with Legacy Code originally started as a book about Test First Programming but morphed into a book about the techniques for refactoring code in legacy systems …

All JavaScript Podcasts by Devchat.tv
JSJ 331: “An Overview of JavaScript Testing in 2018” with Vitali Zaidman

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Sep 18, 2018 54:56


Panel: AJ O’Neal Aimee Knight Joe Eames Charles Max Wood Special Guests: Vitali Zaidman In this episode, the panel talks with programmer Vitali Zaidman, who is working with Software Solutions Company. He researches technologies and starts new projects all the time, and looks at these new technologies within the market. The panel talks about testing JavaScript in 2018 and Jest. Show Topics: 1:32 – Chuck: Let’s talk about testing JavaScript in 2018. 1:53 – Vitali talks about solving problems in JavaScript. 2:46 – Chuck asks Vitali a question. 3:03 – Vitali’s answer. 3:30 – Why Jest? Why not Mocha or these other programs? 3:49 – Jest is the best interpretation of what testing should look like and the best practice nowadays. There are different options, they can be better, but Jest has this great support from their community. There are great new features. 4:31 – Chuck to Joe: What are you using for testing nowadays? 4:43 – Joe: I use Angular, primarily. 6:01 – Like life, it’s sometimes easier to use things that make things very valuable. 7:55 – Aimee: I have heard great things about Cypress, but at work we are using another program. 8:22 – Vitali: Check out my article. 8:51 – Aimee: There are too many problems with the program that we use at work. 9:39 – Panelist to Vitali: I read your article, and I am a fan. Why do you pick Test Café over Cypress, and how familiar are you with Cypress? What about Selenium and other programs? 10:12 – Vitali: “Test Café and Cypress are competing head-to-head.” Listen to Vitali’s suggestions and comments per the panelists’ question at this timestamp. 11:25 – Chuck: I see that you use sign-on... 12:29 – Aimee: Can you talk about Puppeteer? It seems promising. 12:45 – Vitali: Yes, Puppeteer is promising. It’s developed by Google, by the Chrome team. You don’t want to run all of your tests in Puppeteer, because it will be really hard to do in other browsers. 13:26 – Panelist: “5, 6, 7 years ago, with any kind of JavaScript testing, you had no idea if it worked in one browser, and it didn't necessarily work in another browser. That was 10 years ago. Is multiple-browser testing as important now as it was then?” 14:51 – Vitali answers the above question. 15:30 – Aimee: If it is more JavaScript heavy then it could possibly cause more problems. 15:56 – Panelist: I agree with this. 16:02 – Vitali continues this conversation with additional comments. 16:17 – Aimee: “I see that Safari is the new Internet Explorer.” 16:23 – Chuck: “Yes, you have to know your audience. Are they using older browsers? What is the compatibility?” 17:01 – Vitali: There are issues with the security. Firefox has a feature of tracking protection; something like that. 17:33 – Question to Vitali by Panelist. 17:55 – Vitali answers the question. 18:30 – Panelist makes additional comments. 18:43 – If you use Safari, you reap what you sow. 18:49 – Chuck: I use Chrome on my iPhone. (Aimee does, too.) Sometimes I wind up in Safari by accident. 19:38 – Panelist makes comments. 19:52 – Vitali tells a funny story that relates to this topic. 20:45 – There are too many standards out there. 21:05 – Aimee makes comments. 21:08 – Brutalist Web Design. Some guy has this site – Brutalist Web Design – where he says use basic stuff and stop being so custom. Stop using the web as some crazy platform, and if your site is a website that can be scrolled through, that’s great. It needs to be just enough for people to see your content. 22:16 – Aimee makes additional comments about this topic of Brutalist Web Design. 
22:35 – Panelist: I like it when people go out and say things like that. 22:45 – Here is the point, though. There is a difference between a website and a web application. Really the purpose is to read an article. 23:37 – Vitali chimes in. 24:01 – Back to the topic of content on websites. 25:17 – Panelist: Medium is very minimal. Medium doesn’t feel like an application. 26:10 – Is the website easy enough for the user to scroll through and get the content like they want to? 26:19 – Advertisement. 27:22 – See how far off the topic we got? 27:31 – These are my favorite conversations to have. 27:39 – Vitali: Let’s talk about how my article got so popular. It’s an interesting thing, I started researching “testing” for my company. We wanted to implement one of the testing tools. Instead of creating a presentation, I would write first about it in Medium to get feedback from the community as well. It was a great decision, because I got a lot of comments back. I enjoyed the experience, too. Just write about your problem in Medium to see what people say. 28:48 – Panelist: You put a ton of time and energy in this article. There are tons of links. Did you really go through all of those articles? 29:10 – Yes, what are the most prominent tools? I was just reading through a lot of comments and feedback from people. I tested the tools myself, too! 29:37 – Panelist: You broke down the article, and it’s a 22-minute read. 30:09 – Vitali: I wrote the article for my company, and they had to read it. 30:24 – Panelist: Spending so much time – you probably felt like it was a part of your job. 30:39 – Vitali: I really like creating and writing. It was really amazing for me and a great experience. I feel like I am talented in this area because I write well and fast. I wanted to express myself. 31:17 – Did you edit and review? 31:23 – Vitali: I wrote it by myself and some friends read it. There were serious mistakes, and that’s okay; I am not afraid of mistakes. This way you get feedback. 32:10 – Chuck: “Some people see testing in JavaScript, and people look at this and say there is so much here. Is there a place where people can start, so that they don’t get too overwhelmed? Is there a way to ease into this and take a bite-size piece at a time?” 32:52 – Vitali: “Find something that works for them. Read the article and start writing code.” He continues this conversation from here on out. 34:03 – Chuck continues to ask questions and add other comments. 34:16 – Vitali chimes in. 34:38 – Chuck. 34:46 – Vitali piggybacks off of Chuck’s comments. 36:14 – Panelist: Let’s go back to Jest. There is a very common occurrence where we see lots of churn, and we see ideas like this has become the dominant or the standard; a lot of people talk about stuff within this community. Then we get this idea that ‘this is the only thing that is happening.’ Transition to jQuery to React to... With that context, do you feel like Jest will be a dominant program? Are we going to see Jest used just as commonly as Mocha and other popular programs? 38:15 – Vitali comments on the panelist’s question. 38:50 – Panelist: New features. Are the features in Jest (over Jasmine, Mocha, etc.) so important that it will drive people to it by itself? 40:30 – Vitali comments on this great question. 40:58 – Panelist asks questions about features of Jest. 41:29 – Vitali talks about this topic. 42:14 – Let’s go to picks! 42:14 – Advertisement. 
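For readers who want to see what the Jest style discussed above looks like on the page, here is a minimal, hypothetical TypeScript example (the file name and the function under test are assumptions, not from the episode) showing a plain unit test and a snapshot assertion, two of the features the panel touches on.

```typescript
// greeting.test.ts - run with `npx jest` (assumes jest and ts-jest are set up)
function greet(name: string): string {
  return `Hello, ${name}!`;
}

describe("greet", () => {
  it("includes the name in the greeting", () => {
    expect(greet("Vitali")).toBe("Hello, Vitali!");
  });

  it("matches the stored snapshot", () => {
    // Jest writes the snapshot on the first run and compares on later runs.
    expect(greet("Aimee")).toMatchSnapshot();
  });
});
```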
Links: Vitali Zaidman’s Facebook Vitali Zaidman’s Medium Vitali Zaidman’s GitHub Vitali Zaidman’s NPM Vitali Zaidman’s LinkedIn Vitali Zaidman’s Medium Article JavaScript Brutalist Web Design Jasmine Cypress React jQuery Jest Protractor – end to end testing for Angular Test Café Intern Sinon XKCD Sponsors: Kendo UI Sentry Digital Ocean Cache Fly Picks: AJ O’Neal Continuous from last week’s episode: Crossing the Chasm – New Technologies from Niche to General Adaptation. Go Lang Joe Eames Board Game: Rajas of the Ganges Framework Summit Conference in Utah React Conference Aimee Knight Hacker News – “Does Software Understand Complexity” via Michael Feathers Cream City Code Chuck E-Book: How do I get a job? Express VPN Vitali Book: The Square and The Tower: Networks and Power, from the Freemasons to Facebook by Niall Ferguson My article!

JavaScript Jabber
JSJ 331: “An Overview of JavaScript Testing in 2018” with Vitali Zaidman

JavaScript Jabber

Play Episode Listen Later Sep 18, 2018 54:56


Panel: AJ O’Neal Aimee Knight Joe Eames Charles Max Wood Special Guests: Vitali Zaidman In this episode, the panel talks with programmer, Vitali Zaidman, who is working with Software Solutions Company. He researches technologies and starts new projects all the time, and looks at these new technologies within the market. The panel talks about testing JavaScript in 2018 and Jest. Show Topics: 1:32 – Chuck: Let’s talk about testing JavaScript in 2018. 1:53 – Vitali talks about solving problems in JavaScript. 2:46 – Chuck asks Vitali a question. 3:03 – Vitali’s answer. 3:30 – Why Jest? Why not Mocha or these other programs? 3:49 – Jest is the best interruption of what testing should look like and the best practice nowadays. There are different options, they can be better, but Jest has this great support from their community. There are great new features. 4:31 – Chuck to Joe: What are you using for testing nowadays? 4:43 – Joe: I use Angular, primarily. 6:01 – Like life, it’s sometimes easier to use things that make things very valuable. 7:55 – Aimee: I have heard great things about Cypress, but at work we are using another program. 8:22 – Vitali: Check out my article. 8:51 – Aimee: There are too many problems with the program that we use at work. 9:39 – Panelist to Vitali: I read your article, and I am a fan. Why do you pick Test Café over Cypress, and how familiar are you with Cypress? What about Selenium and other programs? 10:12 – Vitali: “Test Café and Cypress are competing head-to-head.” Listen to Vitali’s suggestions and comments per the panelists’ question at this timestamp. 11:25 – Chuck: I see that you use sign-on... 12:29 – Aimee: Can you talk about Puppeteer? It seems promising. 12:45 – Vitali: Yes, Puppeteer is promising. It’s developed by Google and by Chrome. You don’t want to use all of your tests in Puppeteer, because it will be really hard to do in other browsers. 13:26: Panelist: “...5, 6, 7, years ago it was important of any kind of JavaScript testing you had no idea if it worked in one browser and it not necessarily works in another browser. That was 10 years ago. Is multiple browsers testing as important then as it is now? 14:51: Vitali answers the above question. 15:30 – Aimee: If it is more JavaScript heavy then it could possibly cause more problems. 15:56 – Panelist: I agree with this. 16:02 – Vitali continues this conversation with additional comments. 16:17 – Aimee: “I see that Safari is the new Internet Explorer.” 16:23: Chuck: “Yes, you have to know your audience. Are they using older browsers? What is the compatibility?” 17:01 – Vitali: There are issues with the security. Firefox has a feature of tracking protection; something like that. 17:33 – Question to Vitali by Panelist. 17:55 – Vitali answers the question. 18:30 – Panelist makes additional comments. 18:43 – If you use Safari, you reap what you sow. 18:49 – Chuck: I use Chrome on my iPhone. (Aimee does, too.) Sometimes I wind up in Safari by accident. 19:38 – Panelist makes comments. 19:52 – Vitali tells a funny story that relates to this topic. 20:45 – There are too many standards out there. 21:05 – Aimee makes comments. 21:08 – Brutalist Web Design. Some guy has this site – Brutalist Web Design – where he says use basic stuff and stop being so custom. Stop using the web as some crazy platform, and if your site is a website that can be scrolled through, that’s great. It needs to be just enough for people to see your content. 22:16 – Aimee makes additional comments about this topic of Brutalist Web Design. 
22:35 – Panelist: I like it when people go out and say things like that. 22:45 – Here is the point, though. There is a difference between a website and a web application. Really the purpose is to read an article. 23:37 – Vitali chimes in. 24:01 – Back to the topic of content on websites. 25:17 – Panelist: Medium is very minimal. Medium doesn’t feel like an application. 26:10 – Is the website easy enough for the user to scroll through and get the content like they want to? 26:19 – Advertisement. 27:22 – See how far off the topic we got? 27:31 – These are my favorite conversations to have. 27:39 – Vitali: Let’s talk about how my article got so popular. It’s an interesting thing, I started researching “testing” for my company. We wanted to implement one of the testing tools. Instead of creating a presentation, I would write first about it in Medium to get feedback from the community as well. It was a great decision, because I got a lot of comments back. I enjoyed the experience, too. Just write about your problem in Medium to see what people say. 28:48 – Panelist: You put a ton of time and energy in this article. There are tons of links. Did you really go through all of those articles? 29:10 – Yes, what are the most permanent tools? I was just reading through a lot of comments and feedback from people. I tested the tools myself, too! 29:37 – Panelist: You broke down the article, and it’s a 22-minute read. 30:09 – Vitali: I wrote the article for my company, and they ad to read it. 30:24 – Panelist: Spending so much time – you probably felt like it was apart of your job. 30:39 – Vitali: I really like creating and writing. It was rally amazing for me and a great experience. I feel like I am talented in this area because I write well and fast. I wanted to express myself. 31:17 – Did you edit and review? 31:23 – Vitali: I wrote it by myself and some friends read it. There were serious mistakes, and that’s okay I am not afraid of mistakes. This way you get feedback. 32:10 – Chuck: “Some people see testing in JavaScript, and people look at this and say there are so much here. Is there a place where people can start, so that way they don’t’ get too overwhelmed? Is there a way to ease into this and take a bite-size at a time?” 32:52 – Vitali: “Find something that works for them. Read the article and start writing code.” He continues this conversation from here on out. 34:03 – Chuck continues to ask questions and add other comments. 34:16 – Vitali chimes-in.  34:38 – Chuck.  34:46 – Vitali piggybacks off of Chuck’s comments. 36:14 – Panelist: Let’s go back to Jest. There is a very common occurrence where we see lots of turn and we see ideas like this has become the dominant or the standard, a lot of people talk about stuff within this community. Then we get this idea that ‘this is the only thing that is happening.’ Transition to jQuery to React to... With that context do you feel like Jest will be a dominant program? Are we going to see Jest used just as common as Mocha and other popular programs? 38:15 – Vitali comments on the panelist’s question. 38:50 – Panelist: New features. Are the features in Jest (over Jasmine, Mocha, etc.) so important that it will drive people to it by itself? 40:30 – Vitali comments on this great question. 40:58 – Panelist asks questions about features about Jest. 41:29 – Vitali talks about this topic. 42:14 – Let’s go to picks! 42:14 – Advertisement. 
Links: Vitali Zaidman’s Facebook Vitali Zaidman’s Medium Vitali Zaidman’s GitHub Vitali Zaidman’s NPM Vitali Zaidman’s LinkedIn Vitali Zaidman’s Medium Article JavaScript Brutalist Web Design Jasmine Cypress React jQuery Jest Protractor – end to end testing for Angular Test Café Intern Sinon XKCD Sponsors: Kendo UI Sentry Digital Ocean Cache Fly Picks: AJ O’Neal Continuous from last week’s episode: Crossing the Chasm – New Technologies from Niche to General Adaptation. Go Lang Joe Eames Board Game: Rajas of the Ganges Framework Summit Conference in Utah React Conference Aimee Knight Hacker News – “Does Software Understand Complexity” via Michael Feathers Cream City Code Chuck E-Book: How do I get a job? Express VPN Vitali Book: The Square and The Tower: Networks and Power, from the Freemasons to Facebook by Niall Ferguson My article!

Devchat.tv Master Feed
JSJ 331: “An Overview of JavaScript Testing in 2018” with Vitali Zaidman

Devchat.tv Master Feed

Play Episode Listen Later Sep 18, 2018 54:56


Panel: AJ O’Neal Aimee Knight Joe Eames Charles Max Wood Special Guests: Vitali Zaidman In this episode, the panel talks with programmer, Vitali Zaidman, who is working with Software Solutions Company. He researches technologies and starts new projects all the time, and looks at these new technologies within the market. The panel talks about testing JavaScript in 2018 and Jest. Show Topics: 1:32 – Chuck: Let’s talk about testing JavaScript in 2018. 1:53 – Vitali talks about solving problems in JavaScript. 2:46 – Chuck asks Vitali a question. 3:03 – Vitali’s answer. 3:30 – Why Jest? Why not Mocha or these other programs? 3:49 – Jest is the best interruption of what testing should look like and the best practice nowadays. There are different options, they can be better, but Jest has this great support from their community. There are great new features. 4:31 – Chuck to Joe: What are you using for testing nowadays? 4:43 – Joe: I use Angular, primarily. 6:01 – Like life, it’s sometimes easier to use things that make things very valuable. 7:55 – Aimee: I have heard great things about Cypress, but at work we are using another program. 8:22 – Vitali: Check out my article. 8:51 – Aimee: There are too many problems with the program that we use at work. 9:39 – Panelist to Vitali: I read your article, and I am a fan. Why do you pick Test Café over Cypress, and how familiar are you with Cypress? What about Selenium and other programs? 10:12 – Vitali: “Test Café and Cypress are competing head-to-head.” Listen to Vitali’s suggestions and comments per the panelists’ question at this timestamp. 11:25 – Chuck: I see that you use sign-on... 12:29 – Aimee: Can you talk about Puppeteer? It seems promising. 12:45 – Vitali: Yes, Puppeteer is promising. It’s developed by Google and by Chrome. You don’t want to use all of your tests in Puppeteer, because it will be really hard to do in other browsers. 13:26: Panelist: “...5, 6, 7, years ago it was important of any kind of JavaScript testing you had no idea if it worked in one browser and it not necessarily works in another browser. That was 10 years ago. Is multiple browsers testing as important then as it is now? 14:51: Vitali answers the above question. 15:30 – Aimee: If it is more JavaScript heavy then it could possibly cause more problems. 15:56 – Panelist: I agree with this. 16:02 – Vitali continues this conversation with additional comments. 16:17 – Aimee: “I see that Safari is the new Internet Explorer.” 16:23: Chuck: “Yes, you have to know your audience. Are they using older browsers? What is the compatibility?” 17:01 – Vitali: There are issues with the security. Firefox has a feature of tracking protection; something like that. 17:33 – Question to Vitali by Panelist. 17:55 – Vitali answers the question. 18:30 – Panelist makes additional comments. 18:43 – If you use Safari, you reap what you sow. 18:49 – Chuck: I use Chrome on my iPhone. (Aimee does, too.) Sometimes I wind up in Safari by accident. 19:38 – Panelist makes comments. 19:52 – Vitali tells a funny story that relates to this topic. 20:45 – There are too many standards out there. 21:05 – Aimee makes comments. 21:08 – Brutalist Web Design. Some guy has this site – Brutalist Web Design – where he says use basic stuff and stop being so custom. Stop using the web as some crazy platform, and if your site is a website that can be scrolled through, that’s great. It needs to be just enough for people to see your content. 22:16 – Aimee makes additional comments about this topic of Brutalist Web Design. 
22:35 – Panelist: I like it when people go out and say things like that. 22:45 – Here is the point, though. There is a difference between a website and a web application. Really the purpose is to read an article. 23:37 – Vitali chimes in. 24:01 – Back to the topic of content on websites. 25:17 – Panelist: Medium is very minimal. Medium doesn’t feel like an application. 26:10 – Is the website easy enough for the user to scroll through and get the content they want? 26:19 – Advertisement. 27:22 – See how far off the topic we got? 27:31 – These are my favorite conversations to have. 27:39 – Vitali: Let’s talk about how my article got so popular. It’s an interesting thing. I started researching “testing” for my company because we wanted to implement one of the testing tools. Instead of creating a presentation, I wrote about it on Medium first to get feedback from the community as well. It was a great decision, because I got a lot of comments back. I enjoyed the experience, too. Just write about your problem on Medium to see what people say. 28:48 – Panelist: You put a ton of time and energy into this article. There are tons of links. Did you really go through all of those articles? 29:10 – Yes, to see which are the most prominent tools. I was just reading through a lot of comments and feedback from people. I tested the tools myself, too! 29:37 – Panelist: You broke down the article, and it’s a 22-minute read. 30:09 – Vitali: I wrote the article for my company, and they had to read it. 30:24 – Panelist: Spending so much time – you probably felt like it was a part of your job. 30:39 – Vitali: I really like creating and writing. It was really amazing for me and a great experience. I feel like I am talented in this area because I write well and fast. I wanted to express myself. 31:17 – Did you edit and review? 31:23 – Vitali: I wrote it by myself and some friends read it. There were serious mistakes, and that’s okay; I am not afraid of mistakes. This way you get feedback. 32:10 – Chuck: “Some people look at testing in JavaScript and say there is so much here. Is there a place where people can start, so that they don’t get too overwhelmed? Is there a way to ease into this and take it one bite at a time?” 32:52 – Vitali: “Find something that works for them. Read the article and start writing code.” He continues this conversation from here on out. 34:03 – Chuck continues to ask questions and add other comments. 34:16 – Vitali chimes in. 34:38 – Chuck. 34:46 – Vitali piggybacks off of Chuck’s comments. 36:14 – Panelist: Let’s go back to Jest. There is a very common occurrence where we see lots of churn, and we see an idea become the dominant one or the standard because a lot of people talk about it within this community. Then we get this idea that ‘this is the only thing that is happening.’ Transitions from jQuery to React to... With that context, do you feel like Jest will be a dominant program? Are we going to see Jest used just as commonly as Mocha and other popular programs? 38:15 – Vitali comments on the panelist’s question. 38:50 – Panelist: New features. Are the features in Jest (over Jasmine, Mocha, etc.) so important that they will drive people to it by themselves? 40:30 – Vitali comments on this great question. 40:58 – Panelist asks questions about features of Jest. 41:29 – Vitali talks about this topic. 42:14 – Let’s go to picks! 42:14 – Advertisement.
Links: Vitali Zaidman’s Facebook Vitali Zaidman’s Medium Vitali Zaidman’s GitHub Vitali Zaidman’s NPM Vitali Zaidman’s LinkedIn Vitali Zaidman’s Medium Article JavaScript Brutalist Web Design Jasmine Cypress React jQuery Jest Protractor – end to end testing for Angular Test Café Intern Sinon XKCD Sponsors: Kendo UI Sentry Digital Ocean Cache Fly Picks: AJ O’Neal Continuous from last week’s episode: Crossing the Chasm – New Technologies from Niche to General Adaptation. Go Lang Joe Eames Board Game: Rajas of the Ganges Framework Summit Conference in Utah React Conference Aimee Knight Hacker News – “Does Software Understand Complexity” via Michael Feathers Cream City Code Chuck E-Book: How do I get a job? Express VPN Vitali Book: The Square and The Tower: Networks and Power, from the Freemasons to Facebook by Niall Ferguson My article!

The InfoQ Podcast
Uncle Bob Martin on Clean Software, Craftsperson, Origins of SOLID, DDD, & Software Ethics

The InfoQ Podcast

Play Episode Listen Later Aug 24, 2018 30:06


Wes Reisz sits down and chats with Uncle Bob about The Clean Architecture, the origins of the Software Craftsperson Movement, Livable Code, and even ethics in software. Uncle Bob discusses his thoughts on how The Clean Architecture is affected by things like functional programming, service meshes, and microservices. Why listen to this podcast: * Michael Feathers wrote to Bob and said that if you rearrange the order of the design principles, it spells SOLID. * Software Craftsperson should be used when you're talking about software craftsmanship in a gender-neutral way, to steer clear of anything exclusionary. * Clean Architecture is a way to develop software with low coupling that is independent of implementation details. * Clean Architecture and Domain Driven Design (DDD) are compatible terms. You would find the ubiquitous language and bounded context of DDD at the innermost circles of a clean architecture. * Services do not form an architecture. They form a deployment pattern that is a way of decoupling and therefore has no impact on the idea of clean architecture. * There is room for “creature comforts” in a code base that make for more livable, convenient code. * “We have no ethics that are defined [in software].” If we don’t find a way to police it ourselves, governments will. We have to come up with a code of ethics. More on this: Quick scan our curated show notes on InfoQ https://bit.ly/2Nebspj You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq Check the landing page on InfoQ: https://bit.ly/2Nebspj
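As one way to picture the "low coupling, independent of implementation details" point from the show notes above, here is a minimal, hedged sketch of the dependency rule in Java (chosen here only for illustration): the inner domain layer defines an interface, and an outer layer implements it, so all source dependencies point inward. Every name in the sketch (InvoiceRepository, BillingService, and so on) is made up for the example and does not come from the episode.

    // Inner circle: domain code knows nothing about databases, frameworks, or transports.
    interface InvoiceRepository {                     // a "port" defined by the domain
        double totalFor(String customerId);
    }

    class BillingService {                            // use case depends only on the port
        private final InvoiceRepository invoices;
        BillingService(InvoiceRepository invoices) { this.invoices = invoices; }
        double amountDue(String customerId) {
            return invoices.totalFor(customerId);     // no knowledge of how data is stored
        }
    }

    // Outer circle: an implementation detail that points inward by implementing the port.
    class InMemoryInvoiceRepository implements InvoiceRepository {
        public double totalFor(String customerId) { return 120.0; }  // stand-in for real I/O
    }

    public class CleanArchitectureSketch {
        public static void main(String[] args) {
            BillingService billing = new BillingService(new InMemoryInvoiceRepository());
            System.out.println(billing.amountDue("c-1"));             // prints 120.0
        }
    }

Swapping InMemoryInvoiceRepository for a database-backed implementation would leave BillingService untouched, which is the kind of decoupling the episode describes.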

Weekly Dev Tips
Maintain Legacy Code with New Code

Weekly Dev Tips

Play Episode Listen Later Jan 8, 2018 8:57


Maintain Legacy Code with New Code
Many developers work in legacy codebases, which are notoriously difficult to test and maintain in many cases. One way you can address these issues is by trying to maximize the use of new, better-designed constructs in the code you add to the system.
Sponsor - DevIQ
Thanks to DevIQ for sponsoring this episode! Check out their list of available courses and how-to videos.
Show Notes / Transcript
Legacy code can be difficult to work with. Michael Feathers defines legacy code in his book, Working Effectively with Legacy Code, as "code without tests", and frequently it's true that legacy codebases are difficult to test. They're often tightly coupled, overly complex, and weren't written with a modern understanding of good design principles in mind. Whether you're working with a legacy codebase you've inherited, or one you wrote yourself over some period of time, you have probably experienced the pain that can be involved in trying to change a large, complex system that suffers from a fair bit of technical debt and lacks the safety net of tests.
There are several common approaches to working with such codebases. One simple approach, which can be appropriate in many scenarios, is to do as little as possible to the code. The business is running on it, none of the original authors are still with the company, nobody understands it, so just keep your distance and hope it doesn't break on your watch. Maybe in the meantime someone is working on a replacement, but you have no idea if or when that might ever ship, and anyway you have other things you need to work on that are less likely to keep you at work late or bring you in on the weekends. I don't have any solid numbers on how much software falls into this category, but I suspect it's a lot.
The second approach is also common, and usually takes place when the first one isn't an option because business requirements won't wait for a rewrite of the current system. In this case, developers must spend time working with the legacy system in order to add or change functionality. Because it's big, complex, and probably untestable, changes and deployments are stressful and error-prone, and a lot of manual testing effort is required. Regression bugs are common, as tight coupling within the system means changes in one area affect other areas in often inexplicable and unpredictable ways. This is where I think the largest amount of maintenance software development takes place, since, let's face it, most software running today was written without tests but still needs to be updated to meet changing business needs.
A third approach some forward-thinking companies take, understanding the risks and costs involved in full application rewrites, is to invest in refactoring the legacy system to improve its quality. This can take the form of dedicated effort focused on refactoring, as opposed to adding features or fixing bugs. Or it can be a commitment to follow the Boy Scout Rule such that every new change to the system also improves the system's quality by improving its design (and, ideally, adding tests). Some initial steps teams often take when adopting this approach are to ensure source control is being used effectively and to set up a continuous integration server if none is in place. An initial assessment using static analysis tools can establish the baseline quality metrics for the application, and the build server can track these metrics to help the team measure progress over time.
This approach works well for systems that are mission-critical and aren't yet so far gone into technical debt that it's better to just declare "technical bankruptcy" and rewrite them. I've had success working with several companies using this approach - let me know if you have questions about how to do it with your application.
Now let's stop for a moment and think about why working with legacy code is so expensive and stressful. Yes, there's the lack of tests, which limits our confidence that changes to the code don't break things unintentionally, but that's based on a root assumption. The assumption is that we're changing existing code and that, therefore, other code that depends on it might break unexpectedly. What if we break down that assumption, and instead minimize the amount of existing code we touch in favor of writing new code? Yes, there's still some risk that the changes we make to incorporate our new code might cause problems, but outside of that, we're able to operate in the liberating zone of greenfield development, at least on a small scale.
When I say write new code, I don't mean go into a method, add a new if statement or else clause, and start writing new statements in that method. That's the traditional approach that tends to increase complexity and technical debt. What I'm proposing instead is that you write new classes. You put new functionality into types and methods that didn't exist before. Since you're writing brand new classes, you know that no other code in the system currently has any dependencies on the code you're writing. You're also free to unit test your new classes and methods, since you're able to write them in a way that ensures they're loosely coupled and follow SOLID principles.
So, what does this look like in practice? Frequently, the first step will be some kind of refactoring in order to accommodate the use of a new class. Let's say you've identified a big, complex method that currently does the work that you need to change, and in a certain case you need it to do something different. Your de facto approach would be to dive into the nested conditional statements, find the right place to add an else clause, and add the new behavior there. The alternative approach would be to put the new behavior into a new method, ideally in a new type, so that it's completely separate from any existing structures. A very basic first step could be to do exactly what you were going to do, but instead of putting the actual code into the else clause, instantiate your new type and call your new method there instead, passing any parameters it might require. This works well if what you're adding is fairly complex, since now you have a much easier way to test that complex code rather than going through an already big and complex method to get to it.
Depending on the conditions that dictate when your new behavior should run, you might be able to get out of using the existing big complex method at all. Let's say the existing method is called BigMethod. Move BigMethod into a new class called Original, and wherever you had code calling BigMethod, change it to call new Original().BigMethod(). This is one of those cases where you're forced to change the existing code in order to prepare it for your new code, so you'll want to be very careful and do a lot of testing. If there are a lot of global or static dependencies running through BigMethod, this approach might not work well, so keep that in mind.
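To make that last step concrete, here is a minimal sketch of moving BigMethod into a new Original class and redirecting a call site. The Original and BigMethod names come from the show notes; everything else (the orderId parameter, the placeholder body, the LegacyCaller class) is invented for illustration, and the sketch is written in Java even though the episode's snippet is C#-flavored:

    // Step 1: move BigMethod, unchanged, out of the large legacy class into a new class,
    // so that every caller now goes through a single, visible seam.
    class Original {
        // The body is copied verbatim from the legacy class; behavior must not change yet.
        void bigMethod(String orderId) {                      // illustrative parameter
            if (orderId == null || orderId.isEmpty()) {
                throw new IllegalArgumentException("orderId required");
            }
            System.out.println("processing " + orderId);      // stands in for the real logic
        }
    }

    // Step 2: each former call site is redirected through the new class.
    class LegacyCaller {
        void process(String orderId) {
            new Original().bigMethod(orderId);                // was: this.bigMethod(orderId);
        }
    }

    public class ExtractBigMethodDemo {
        public static void main(String[] args) {
            new LegacyCaller().process("A-42");               // prints "processing A-42"
        }
    }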
However, assuming you're able to pull BigMethod into its own class that you then call as needed, the next step is to create another new class for your new implementation. We'll call the new class BetterDesign, and we'll keep the method named BigMethod for now so that, if we want, we can use polymorphism via inheritance or an interface. Copy BigMethod from the Original class to your BetterDesign class and modify it so it only does what your new requirements need. It should be much smaller and simpler than what's in Original. Now, find all the places where you're instantiating Original and put in a conditional statement there so you'll instantiate BetterDesign instead, in the appropriate circumstances. At this point you should be able to add the behavior you need, in a new and testable class, without breaking anything that previously depended on BigMethod. If you have more than a few places where you need to decide whether to create Original or BetterDesign, look at using the Factory design pattern.
By adjusting the way we maintain legacy systems to maximize how much new behavior we add through new classes and methods, we can minimize the likelihood of introducing regressions. This improves the code quality over time, increases team productivity, and makes the code more enjoyable to work with. If you have experience working with legacy code, please share it in this show's comments at www.weeklydevtips.com/015. A brief code sketch of the Original/BetterDesign split follows the resource list below.
Show Resources and Links
Working Effectively with Legacy Code
Technical Debt
Refactoring
Boy Scout Rule
SOLID Principles
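Continuing the sketch above under the same assumptions, this is what the Original/BetterDesign split and the factory-style selection described in the show notes might look like; the OrderProcessor interface, the factory class, and the "NEW-" prefix used to choose an implementation are all illustrative, not from the episode:

    // A shared interface lets Original and BetterDesign be used polymorphically.
    interface OrderProcessor {
        void bigMethod(String orderId);
    }

    class Original implements OrderProcessor {
        public void bigMethod(String orderId) {
            System.out.println("legacy path for " + orderId);     // stands in for the old logic
        }
    }

    // New, small, fully unit-testable class that handles only the new requirement.
    class BetterDesign implements OrderProcessor {
        public void bigMethod(String orderId) {
            System.out.println("new behavior for " + orderId);
        }
    }

    // Factory: one place that decides which implementation a caller gets.
    class OrderProcessorFactory {
        static OrderProcessor forOrder(String orderId) {
            boolean useNewBehavior = orderId.startsWith("NEW-");  // illustrative condition
            return useNewBehavior ? new BetterDesign() : new Original();
        }
    }

    public class FactorySelectionDemo {
        public static void main(String[] args) {
            OrderProcessorFactory.forOrder("NEW-7").bigMethod("NEW-7");   // new behavior
            OrderProcessorFactory.forOrder("A-42").bigMethod("A-42");     // legacy behavior
        }
    }

Because BetterDesign has no dependencies on the legacy class, it can be unit tested in isolation, which is the point of the approach the episode describes.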

/dev/hell
Episode 94: Shaking Off The Rust

/dev/hell

Play Episode Listen Later Nov 19, 2017


It’s been a while since our last episode, but this time Chris and Ed got together in the same room down in Pawnee, Indiana to record an episode. Between Ed making faces at Chris while recording and Chris totally messing up who wrote the testing blog post he went on and on about, it was clear /dev/hell was not in the groove. In this episode Ed talks about some of his personal struggles that led to a long absence, Chris asks about what to do when the time frame for your goals doesn’t align with the types of places you work, and Chris and Ed try to enjoy being together in the same place while hoping the hand dryer in the bathroom on the other side of the wall doesn’t add a ton of noise to the recording. Don’t worry, the podcast isn’t dead! Ed and Chris intend to keep talking to each other for a very, very long time to come. However, you can expect to see less of us on the conference circuit in 2018. Both Chris and Ed have decided that for personal reasons they need to reduce their travel schedules. Do these things! Support us via our Patreon Buy stickers at devhell.info/shop Follow us on Twitter here Rate us on iTunes here Listen Download now (MP3) Links and Notes Matchbox Cowork Studio The Black Sparrow Apologies to Martin Fowler for attributing this awesome post about testing at Google, hosted by Mr. Fowler and written by Mike Bland, to Michael Feathers

All Ruby Podcasts by Devchat.tv
MRS 014 My Ruby Story Noel Rappin

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Aug 2, 2017 26:20


MRS 014 Noel Rappin Today's episode is a My Ruby Story with Noel Rappin. Noel talked about his contributions to the Ruby community and how they explore new technologies like Elixir. Listen to learn more about Noel! [00:01:40] – Introduction to Noel Rappin Noel was in episode 30, which was about Software Craftsmanship. He was also on episode 185, which was about Rails 4 Test Prescriptions. And then, the latest one was 281, which was about Take My Money. [00:02:45] – How did you get into programming? Noel was a stereotypical nerdy kid, so he started programming when he was young. He had afterschool classes in Applesoft BASIC at a place near his house. He had a TRS-80 and a Texas Instruments machine, and a couple of other things. [00:03:35] – Computer Science degree Noel has a Computer Science degree and a Ph.D. from the College of Computing at Georgia Tech, which was at the intersection of user interface design and ed tech. He was designing interfaces for teaching, specifically for teaching engineers and developers. [00:04:15] – How did you get into Ruby? Noel came out of grad school and went immediately to a small web development company. He started hearing about Rails in about 2005. Having been one of the people who had done a lot of the Java-Struts web development that Rails was created in opposition to, Noel looked it up pretty quickly. He started using it in 2005 or 2006 for some internal tools for his team. He built a test tracker and other things that his team used internally. He built a couple of web apps for them to collaborate because they were working with some developers in Poland. And as he got comfortable with it, he contracted to do a Ruby on Rails book and got a full-time professional Ruby job. [00:06:30] – What is it about Ruby that got you excited? Noel has always liked scripting languages and dynamic languages. He did a lot of work in Python for a while. It was extraordinary how quickly you could do things in Rails compared to Java tools, even compared to Django, which was more or less contemporaneous. Ruby emphasized testing, and Rails was very similar to some of the tools that he had been building in Python. [00:08:50] – Books and contributions to the Ruby community Noel had a book which was out of date 30 to 40 seconds after it was published. It’s normal in this industry. Sometime after that, he started writing Rails Test Prescriptions and submitted it to the Pragmatic Bookshelf, and they purchased it. They published Rails Test Prescriptions six years ago. After that, he did a series of self-published JavaScript books called Master Space and Time with JavaScript. They are also out of date, but they’re free now. He also did a self-published book about projects called Trust-Driven Development that you can still get. He did a book about purchasing, handling money, and web payments called Take My Money, which came out last summer. Noel is currently working on Rails 5 Test Prescriptions, which will include all the new Rails 5.1 features. It will come out this fall. [00:10:35] – Table XI Noel works at Table XI, which is a web consulting firm in Chicago with about 35 people. They do Rails development, websites, mobile development, and a lot of React Native development. They build websites for companies that are not web software companies but that need web pages, like non-profits or start-ups. They like to focus on solid business problems in software, rather than technology problems in software. [00:11:15] – What are you working on these days?
Noel has his own podcast called Tech Done Right. The latest episode was with Michael Feathers. There is also an episode with the person who was in charge of the Medicare program under President Obama, who was actually the person called in to fix healthcare.gov and who had some interesting stories about what that was like from a software manager's perspective. From the development side, Noel has been doing a lot of Rails development, some JavaScript development, building purchase sites for nonprofits, and doing a lot of upgrade work recently. [00:12:40] – Rails upgrades story This upgrade was for a Rails 2 application that was still in active development. The Rails community, at one point, was so bad at managing upgrades. And now, it does seem like the community has gotten better at managing new tools without breaking old ones. Security needs have pushed people towards best practices. [00:14:15] – Ruby and Elixir Like a lot of Ruby companies, they’ve been exploring what the next tools are. They ran an Elixir project. It was originally an internal prototype, which is a great way to get new technologies into the company. They wound up building a small project that was largely API focused. That’s the kind of thing that Rails is not super great at. They’re exploring what to do with the front end because there’s a sharp understanding of what Ruby on Rails is good for and what might be the purview of other tools. Elixir does a couple of things that Ruby doesn’t do very well. A lot of people who start with Ruby can learn a lot from going off to a functional language like Elixir, or a language with pattern matching like Elixir. Picks Noel Rappin R programming Podcast: Tech Done Right Author: Martha Wells The Murderbot Diaries by Martha Wells Atom Editor Audio Hijack Bear Twitter @noelrap noelrappin.com Charles Max Wood Mighty Mug Phrase Express


The Art of Product
6: Refactoring Rails and Shareable Workflows

The Art of Product

Play Episode Listen Later Jun 29, 2017 33:53


On today’s episode we discuss Ben’s release of the first video in his Refactoring Rails course, on the benefits of following REST in Rails applications. He goes over how he produced the video, how he sent it out, and the feedback he got on his initial release. He now knows the rough format he wants to follow for the course, and he is still figuring out how many modules that would include. He is getting ready to release a final polished version, and has ordered some better audio recording equipment for future videos. His goal for next week is to release two more videos. Derrick has been shipping even more this week, and has seen a huge improvement in his workflow by reducing the notifications on GitHub. His team is adjusting to a more efficient way of working instead of push notifications. He shipped out a widget to embed sharable workflows within a website that includes affiliate links. He and his team have also been strategizing about their back-end infrastructure and scaling their Drip delivery system. Today’s Topics Include: The release of the first video in Ben’s Refactoring Rails course and the feedback on it The goals for the course format, length and possible supplementary materials The tools and skills included in the video course that could be expanded Releasing his video to a mailing list and ways to get more traffic Reflecting on doing the course solo and how he’s liking the co-working space Derrick improving work clarity by reducing GitHub notifications and unwatching the main Drip repository Creating a new system for the team to be most productive Sharable workflows and the new embed widget Improving scale for the Drip delivery system and looking for new infrastructure systems If you’re enjoying the show please give us your ratings and reviews in iTunes. Links and resources: RefactoringRails.io Working Effectively with Legacy Code by Michael Feathers Test-Driven Rails course on Upcase Spring Upcase courses Audio-Technica ATR2100 Drip Amazon DynamoDB SendGrid Amazon API Gateway How I Built This podcast by NPR

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 295: Michael Feathers on Legacy Code

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Jun 27, 2017 58:24


Felienne talks with Michael Feathers about Legacy Code. When is something legacy? Is working on legacy different from working on greenfield code? Do developers need different skills and techniques? Testing legacy code. How to test a legacy system? When do we have enough tests to feel safe to start coding? Techniques to make legacy systems more testable.


Tech Done Right
Episode 11: Avoiding Legacy Code with Michael Feathers

Tech Done Right

Play Episode Listen Later May 17, 2017 41:00


Avoiding Legacy Code with Michael Feathers Follow us on Twitter! @techdoneright or leave us a review on iTunes and sign up for our newsletter (http://www.techdoneright.io/newsletter)! Guest Michael Feathers (https://twitter.com/mfeathers): Author of Working Effectively with Legacy Code (https://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052); r7krecon.com (https://www.r7krecon.com/) Summary What makes a code base go bad and become "Legacy Code"? Can teams avoid writing bad code? Michael Feathers, author of Working Effectively With Legacy Code joins Tech Done Right to talk about technical debt, how communication can prevent bad coding practices, why coding problems are never just about code, and what it's like to go around the world seeing the worst code messes ever written. Notes 02:36 - The Definition of “Legacy Code” 04:25 - What makes code bases go bad? 07:26 - Working as a Team to Avoid Technical Debt and Other Problems 09:49 - Tools and Techniques That Have Changed Since the Book was Written scythe (https://github.com/michaelfeathers/scythe) 12:38 - Lack of Institutional Memory 15:24 - What creates technical debt? - Scrum (https://www.scrumalliance.org/why-scrum) - Extreme Programming (http://www.extremeprogramming.org) 22:50 - “Symbiotic Design” - Symbiotic Design Provocation (https://www.r7krecon.com/provocation) - Symbiotic Design Implications (https://www.r7krecon.com/implications) - Conway’s Law (https://en.wikipedia.org/wiki/Conway%27s_law) 25:38 - Test-Driven Development - Keynote - Writing Software (2014 TDD is dead from DHH) (http://confreaks.tv/videos/railsconf2014-keynote-writing-software) - RailsConf 2017: Opening Keynote by David Heinemeier Hansson (https://www.youtube.com/watch?v=Cx6aGMC6MjU) 31:44 - Fads in Codebases 36:58 - Error Handling in Applications (in Relation to Conway’s Law) Special Guest: Michael Feathers.

Advance Tech Podcast
Michael Feathers

Advance Tech Podcast

Play Episode Listen Later Apr 30, 2017 51:54


This week we interview our special guest Michael Feathers of r7k Research & Conveyance and author of Working Effectively with Legacy Code. To find out more about this episode take a look at the shownotes by clicking the episode title.

Legacy Code Rocks
Working Effectively with Legacy Code with Michael Feathers

Legacy Code Rocks

Play Episode Listen Later Feb 8, 2017 44:52


Michael Feathers (R7K Research & Conveyance) is a luminary, an expert in software and organization design, and the author of Working Effectively with Legacy Code. Over the past 20 years, he has spoken at conferences around the world, and some even call him the “godfather of legacy code.” In this episode, we discuss software best practices, Conway’s Law – or as Michael sometimes calls it, The Fundamental Theorem of Software Engineering – the impact of code that could be deleted, and feature selection.

Fatal Error
9. Getting Started with Testing

Fatal Error

Play Episode Listen Later Nov 21, 2016 35:31


Today on Fatal Error: a crash course on a bunch of useful concepts for testing iOS apps in Swift. Automated Tests as Documentation Code Coverage in Xcode Danger CI & Fastlane View Models: see episodes 2 and 3 Dependency Injection Mocking Classes You Don't Own Protocol-Oriented Programming in Swift (WWDC 2015) Don't mock what you don't own Screenshot testing: Facebook's SnapshotTestCase; objc.io article Working Effectively with Legacy Code by Michael Feathers Testing, for people who hate testing OHHTTPStubs OCMock Other links Chris likes, which we didn't discuss in this episode: 5 Questions Every Unit Test Must Answer Mocks Aren't Stubs When is it safe to introduce test doubles? Test Isolation is about Avoiding Mocks Chris's Pinboard on Testing

ZADevChat Podcast
Episode 49 - Segfault E_POOR_DEVELOPMENT_PRACTICES

ZADevChat Podcast

Play Episode Listen Later Jul 14, 2016 63:29


What poor development practices get under your skin? In the episode Kenneth, Kevin & Len unpack a few poor software development practices that they've seen over and over again. More or less in order they tackled long-lived branches in version control, having too many automated tests, being too reliant on your IDE, copying the first answer from StackOverflow and not questioning enough. Each topic yielded some interesting insights and counterpoints! We hope you enjoy the episode and would love to know what you thought. Only two resources were mentioned explicitly: * Michael Feathers - the deep synergy between testability and good design - https://www.youtube.com/watch?v=4cVZvoFGJTU * What is the easiest way to parse a number in Clojure? - http://stackoverflow.com/questions/2640169/whats-the-easiest-way-to-parse-numbers-in-clojure Thanks for listening! Stay in touch: * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - http://bit.ly/zadevchat-itunes

Full Stack Radio
39: Michael Feathers - First Class Error Handling, Tell Don't Ask, and Collection Pipelines

Full Stack Radio

Play Episode Listen Later Apr 5, 2016 58:58


In this episode, Adam talks to Michael Feathers, author of Working Effectively with Legacy Code, about strategies for writing cleaner error handling code, the "tell don't ask" principle, and transforming data with collection pipelines. Sponsors: Laracasts, use coupon code FULLSTACK2016 for 50% off your first month Rollbar, sign up at https://rollbar.com/fullstackradio to try their Bootstrap Plan free for 90 days Links: Refactoring to Collections, Adam's book Michael's Blog r7k, Michael's company Working Effectively with Legacy Code The Null Object Pattern The Haskell Maybe Monad Giant Robots podcast on Tell Don't Ask vs. SRP Learn You a Haskell APL Programming Language Michael's Arrays on Steroids presentation Building guitar tab with collection pipelines The Spaceship Operator Tweet The Agile Alliance Technical Conference
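For readers unfamiliar with two of the topics listed above, here is a small, hedged sketch (in Java, purely for illustration; none of these names come from the episode) that combines the Null Object pattern with "tell, don't ask": the caller tells the object to act instead of asking for its state and branching, and a do-nothing implementation removes the need for null checks:

    // "Tell, don't ask": callers tell a subscription to charge itself rather than
    // asking for its status and branching on it.
    interface Subscription {
        void charge(double amount);
    }

    class ActiveSubscription implements Subscription {
        public void charge(double amount) {
            System.out.println("charged " + amount);      // stands in for real payment logic
        }
    }

    // Null Object: a safe, do-nothing stand-in, so callers never need a null check.
    class NoSubscription implements Subscription {
        public void charge(double amount) { /* intentionally does nothing */ }
    }

    public class TellDontAskSketch {
        static Subscription find(String customerId) {
            return "c-1".equals(customerId) ? new ActiveSubscription() : new NoSubscription();
        }
        public static void main(String[] args) {
            find("c-1").charge(9.99);       // prints "charged 9.99"
            find("missing").charge(9.99);   // safely ignored, no null check required
        }
    }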

Developer On Fire
Episode 102 | Michael Feathers - Providing Options

Developer On Fire

Play Episode Listen Later Feb 24, 2016 42:15


Guest: Michael Feathers @mfeathers Full show notes are at https://developeronfire.com/podcast/episode-102-michael-feathers-providing-options

All JavaScript Podcasts by Devchat.tv
195 JSJ Rollup.js with Rich Harris and Oskar Segersvärd

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Jan 20, 2016 64:56


02:17 - Rich Harris Introduction Twitter GitHub Blog The Guardian 02:34 - Oskar Segersvärd Introduction Twitter GitHub Widespace 02:50 - rollup.js rollup - npm 04:47 - Caveats and Fundamental Differences Between CommonJS and AMD Modules and ES6 Modules lodash Static Analysis 11:26 - Where rollup.js Fits in the Ecosystem Bundler vs Loader systemjs jspm webpack 17:40 - Input Modules 18:35 - Why Focus on Bundling Tools vs HTTP/2 20:13 - Tree-shaking versus dead code elimination 25:53 - ES6/ES2016 Support 27:36 - Other Important Optimizations 32:11 - Small modules: it’s not quite that simple three.js 41:54 - jsnext:main – should we use it, and what for? Picks Better Off Ted (Joe) Elementary (Joe) Ruby Rogues Episode #137: Book Club - Functional Programming for the Object-Oriented Programmer with Brian Marick (Aimee) Ruby Rogues Episode #115: Functional and Object Oriented Programming with Jessica Kerr (Aimee) Ruby Rogues Episode #65: Functional vs Object Oriented Programming with Michael Feathers (Aimee) Operation Code (Aimee) Google Define Function (Dave) Scott Hanselman: Dark Matter Developers: The Unseen 99% (Dave) MyFitnessPal (Chuck) Nike+ Running (Chuck) Couch to 10k (Chuck) Aftershokz Bluez 2 Headphones (Chuck) Pebble Time Steel (Chuck) Climbing (Rich) The Codeless Code (Rich) Star Wars (Rich) The Website Obesity Crisis (Oskar)


Nación Lumpen
NL6: legacy code

Nación Lumpen

Play Episode Listen Later Jan 20, 2016 83:12


I... have seen things you people wouldn't believe: ifs nested far beyond Orion. I have seen C++ classes full of sleeps near the Tannhäuser Gate. All that technical debt will be lost... in time... like tears in rain. Time to refactor. Participants: Sebastián Ortega, @_sortega. Álvaro Castellanos, @AlvaroCaste. Óscar Pernas. Kris Kovalik, @kkvlk. Links: Book: “Working Effectively with Legacy Code”, Michael Feathers. http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052 Article: “Working Effectively With Legacy Code”, Michael Feathers, Object Mentor, Inc. https://web.archive.org/web/20150213051804/http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf Book: “Refactoring: Improving the Design of Existing Code”, Martin Fowler. http://www.amazon.com/Refactoring-Improving-Design-Existing-Code/dp/0201485672 Book: “The Goal: A Process of Ongoing Improvement”, Eliyahu M. Goldratt. http://www.amazon.es/The-Goal-Process-Ongoing-Improvement/dp/0884271951



The iPhreaks Show
116 iPS TDD and Testing with Jon Reid

The iPhreaks Show

Play Episode Listen Later Aug 6, 2015 51:34


01:21 - Jon Reid Introduction Twitter GitHub Blog 02:45 - Tools For Testing and Test-Driven Development (TDD) XCTest OCHamcrest OCMockito 03:24 - Matching/Matchers 07:13 - Getting Started OCHamcrest/README 08:58 - Partial Matching 10:26 - Mocking and Stubbing 14:04 - TDD Process and Workflow 17:49 - TDD vs Unit Testing Red, Green, Refactor 19:54 - iOS Code That Doesn’t/Does Adapt Well to TDD 21:17 - User Interface Testing 24:58 - End-to-End Testing 30:18 - Communication and Collaboration Working Effectively with Legacy Code by Michael Feathers 33:39 - OCMock, OCMockito 39:13 - OCMockito with Swift? Quick Brian Gesiak 41:07 - Inside Out vs Outside In Picks wit.ai (Mike) Jon's UIViewController TDD Screencast (Jaim) Test-Driven iOS Development (Developer's Library) by Graham Lee (Jaim) Neewer Handheld Video Stabilizer for DV GoPro Mini Cameras (Chuck) Cell Phone Tripod Adapter (Chuck) Working Effectively with Legacy Code by Michael Feathers (Jon) Clean Coders (Jon) AppCode (Jon)


Devchat.tv Master Feed
097 iPS Deconstructing Your Codebase with Michele Titolo

Devchat.tv Master Feed

Play Episode Listen Later Mar 19, 2015 52:42


Support the shows at devchat.tv/kickstarter!   01:45 - Michele Titolo Introduction Twitter Blog Reddit Women Who Code Ruby Rogues Episode #147: APIs That Don't Suck with Michele Titolo 02:26 - Deconstructing and Decoupling Reuse Goals 08:36 - Having Seams in Your Code to Avoid Conflict 8 Patterns to Help You Destroy Massive View Controller 11:35 - The Deconstructing Mindset (Finding Reuse Patterns) The Rule of Three Inheritance 17:48 - The Decorator Pattern 18:43 - Categories 21:34 - Sharing UI (User Interface) Codes 23:55 - Mechanics of Sharing Code Between Apps Jeffrey Jackson: Private Cocoapods CocoaPods Guide: Podspec Syntax Reference 29:02 - Lessons Learned: Easy Ways/Patterns to Know When to Break Up Small Functionalities Separate as Soon As Possible Do a Local Pod Using the Path Option (Path is Your Friend!) CocoaPods Guide: Private Pods Have a Good Code Review Process 33:23 - Cocoapods: Commit to Source or Not? 39:59 - Team Collaboration Spotify [YouTube] Kent Beck: Software G Forces: The Effects of Acceleration Picks Refactoring: Improving the Design of Existing Code by Martin Fowler (Pete) Working Effectively with Legacy Code by Michael Feathers (Pete) Refactoring To Patterns by Joshua Kerievsky (Pete) WWDC 2010 Session 138: API Design for Cocoa and Cocoa Touch (Andrew) [Slides] Michele Titolo: Cocoa Design Patterns in Swift (Andrew) The Cocotron (Andrew) Matt Gallagher: Design of a multi-platform app using The Cocotron (Andrew) Zombie Monkie by Tallgrass Brewing Company (Jaim) Getting out and participating in programming language communities (Chuck) The Earthsea Cycle Series Book Series by Ursula K. Le Guin (Chuck) The Pixar Touch by David A. Price (Chuck) 8 Patterns to Help You Destroy Massive View Controller (Michele) Artsy - iOS at Scale - objc.io issue #22 (Michele)


Coding Blocks
SOLID as a Rock!

Coding Blocks

Play Episode Listen Later Mar 2, 2014 63:47


This week we tackle the SOLID principles in .NET and discuss the eternal struggle between perfect code and looming deadlines. Please leave us feedback in your Podcasting app of choice! Update: Great comments/debate on reddit! About Solid SOLID Principles for writing maintainable and extendable software Michael Feathers and “Uncle Bob” Robert Martin May be impossible […]

All Ruby Podcasts by Devchat.tv
087 RR Book Club: Practical Object-Oriented Design in Ruby with Sandi Metz

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Jan 9, 2013 114:16


1:35 - Introducing Sandi Metz Practical Object-Oriented Design in Ruby by Sandi Metz Website Twitter 6:15 - The book writing process and the speech writing process 17:30 - Flow of POODR 21:35 - Why design is for everyone 24:20 - The fear of writing a book: Am I really an expert? 27:00 - Breaking the rules 34:00 - Cheat sheets, screencasts, and diagrams for POODR 42:00 - Topics beyond POODR 45:20 - Why Sandi loves Rails 51:05 - How long will Rails last? 55:30 - When should you begin introducing design? 1:01:00 - Working with an Inheritance interface 1:06:30 - Rules for testing 1:14:45 - Well-tested objects without well-tested interactions 1:18:45 - Sandi’s rules for coding and breaking them 1:26:15 - Having too many small objects versus having objects that are too big Picks: “The Deep Synergy Between Testability and Good Design” Speech by Michael Feathers (James) Endless Space game on Steam (James) Board games: Lords of Waterdeep, Love Letter, Eminent Domain (James) George Takei’s episode on the Penn’s Sunday School podcast (Avdi) Hardcore History podcast by Dan Carlin (Avdi) Infinite Monkey Cage podcast by BBC Radio 4 (Avdi) Marked App (Josh) Herman Miller Aeron chair (Charles) Bubble Timer (Sandi) Gutter Cleaning Robot (Sandi)

Ruby Rogues
087 RR Book Club: Practical Object-Oriented Design in Ruby with Sandi Metz

Ruby Rogues

Play Episode Listen Later Jan 9, 2013 114:16


1:35 - Introducing Sandi Metz Practical Object-Oriented Design in Ruby by Sandi Metz Website Twitter 6:15 - The book writing process and the speech writing process 17:30 - Flow of POODR 21:35 - Why design is for everyone 24:20 - The fear of writing a book: Am I really an expert? 27:00 - Breaking the rules 34:00 - Cheat sheets, screencasts, and diagrams for POODR 42:00 - Topics beyond POODR 45:20 - Why Sandi loves Rails 51:05 - How long will Rails last? 55:30 - When should you begin introducing design? 1:01:00 - Working with an Inheritance interface 1:06:30 - Rules for testing 1:14:45 - Well-tested objects without well-tested interactions 1:18:45 - Sandi’s rules for coding and breaking them 1:26:15 - Having too many small objects versus having objects that are too big Picks: “The Deep Synergy Between Testability and Good Design” Speech by Michael Feathers (James) Endless Space game on Steam (James) Board games: Lords of Waterdeep, Love Letter, Eminent Domain (James) George Takei’s episode on the Penn’s Sunday School podcast (Avdi) Hardcore History podcast by Dan Carlin (Avdi) Infinite Monkey Cage podcast by BBC Radio 4 (Avdi) Marked App (Josh) Herman Miller Aeron chair (Charles) Bubble Timer (Sandi) Gutter Cleaning Robot (Sandi)
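One thread in the testing discussion is testing an incoming query by asserting on the value it returns rather than on how it is computed. A small sketch follows; the Gear/Wheel names and numbers echo POODR's bicycle examples, but the TypeScript rendering is an illustration, not code from the book.

```typescript
// Bicycle example in the spirit of POODR: a Gear collaborates with a Wheel.
class Wheel {
  constructor(public rim: number, public tire: number) {}
  diameter(): number {
    return this.rim + 2 * this.tire;
  }
}

class Gear {
  constructor(
    private chainring: number,
    private cog: number,
    private wheel: Wheel,
  ) {}
  gearInches(): number {
    return (this.chainring / this.cog) * this.wheel.diameter();
  }
}

// Incoming query: assert on the state it returns, not on how it was computed.
const gear = new Gear(52, 11, new Wheel(26, 1.5));
console.assert(Math.abs(gear.gearInches() - (52 / 11) * 29) < 1e-9);
```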

Devnology Podcast
Devnology Podcast 033 - Michael Feathers

Devnology Podcast

Play Episode Listen Later Oct 17, 2012 52:31


This episode features an interview with Michael Feathers, regular conference speaker, author of Working Effectively with Legacy Code, and one of the deep thinkers on programming. We talk about various programming approaches and techniques and the effect they have on the way we create and maintain software systems. We touch upon subjects like functional programming, technical debt, and computer science papers. Also listen to this episode to learn about his plans for a new book! Follow Michael on Twitter via @mfeathers and read his blog at http://michaelfeathers.typepad.com. This interview was recorded on the 25th of September 2012 at the Peabody Opera House in St Louis during the Strangeloop conference. Interview by @freekl and @mrijn. Audio post-production by @mendelt. Links for this podcast: Book: Working Effectively with Legacy Code, Michael Feathers, 2004. Check the Strangeloop video schedule for release dates of recorded talks. Michael wrote about the subject of his Strangeloop talk, the Line Break kata, in this blogpost. Michael's 2009 blogpost 10 Papers Every Programmer Should Read (At Least Twice), mentioned in this episode. Book: Making Software: What Really Works, and Why We Believe It, Andy Oram & Greg Wilson, 2010. Video: Michael's keynote Code Blindness from Rocky Mountain Ruby 2011. Video: Dealing with Dynamically Typed Legacy Code, Michael Feathers, NDC Oslo 2012. This podcast is in English.

Ruby Rogues
065 RR Functional vs Object Oriented Programming with Michael Feathers

Ruby Rogues

Play Episode Listen Later Aug 7, 2012 57:55


The Rogues talk about functional vs. object-oriented programming with Michael Feathers.
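For listeners new to the framing, a tiny illustrative contrast (TypeScript rather than the episode's Ruby): the same computation expressed as an object that owns state and as a pure function over plain data. This is only a toy comparison, not a summary of the discussion.

```typescript
// Object-oriented style: state plus behavior bundled together.
class Cart {
  private items: number[] = [];
  add(price: number): void {
    this.items.push(price);
  }
  total(): number {
    return this.items.reduce((sum, p) => sum + p, 0);
  }
}

// Functional style: plain data and a pure function; no mutation.
const total = (prices: readonly number[]): number =>
  prices.reduce((sum, p) => sum + p, 0);

const cart = new Cart();
cart.add(3);
cart.add(4);
console.log(cart.total(), total([3, 4])); // 7 7
```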

/dev/hell
Episode 14: The PHP Guy Is Sulking

/dev/hell

Play Episode Listen Later Jun 14, 2012


This week we’re joined by Justin Searls, JavaScript developer and JS testing EXPERT. We talk lots about building and testing “fat” browser apps, particularly about best practices and different testing approaches. After a while Chris felt bad and told us to shut up. This was the first podcast we broadcast live while recording. Big ups to WonderNetwork for providing the streaming bandwidth, and Engine Yard for sponsoring the podcast. Keep an eye on the @dev_hell Twitter account for info on our next live stream. If you love us, you will do these things: follow us on Twitter here and rate us on iTunes here. Listen/Download now (MP3, 33.1MB, 1:16). Notes: TestDouble training.gaslightsoftware.com Backbone Idiomatic JS Require.js Jasmine Cucumber Behat Michael Feathers' book on working with legacy code Rails asset pipeline Dieter for Clojure Kohana Assets Assetic Webassets for Python Drumkit.js by Chris Powers QUnit Searls on GitHub https://github.com/searls/jasmine-fixture https://github.com/searls/jasmine-given https://github.com/searls/jasmine-stealth Pro JavaScript (John Resig)
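For context on the Jasmine links above, here is a minimal Jasmine-style spec. The function under test is invented, only the core Jasmine globals (describe/it/expect) are used, and a configured Jasmine test runner is assumed.

```typescript
// Hypothetical function under test.
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// Core Jasmine API: describe/it/expect are globals provided by the runner.
describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Fat Browser Apps")).toBe("fat-browser-apps");
  });

  it("trims surrounding whitespace", () => {
    expect(slugify("  Legacy Code  ")).toBe("legacy-code");
  });
});
```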

Hanselminutes - Fresh Talk and Tech for Developers
Working Effectively with Legacy Code with Michael Feathers

Hanselminutes - Fresh Talk and Tech for Developers

Play Episode Listen Later Jun 18, 2009 23:40


Scott's in Norway this week and he sits down with Michael Feathers. Michael is the author of "Working Effectively with Legacy Code." What is legacy code? Are you writing legacy code right now?
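The book's working definition, that legacy code is code without tests, pairs with its characterization-test technique: before changing inherited code, write tests that record what it currently does. A minimal sketch with an invented pricing function follows; the expected values are simply whatever the existing code produces.

```typescript
import * as assert from "node:assert";

// Inherited code whose intent is unclear; we do not want to change it yet.
function price(quantity: number, unitPrice: number): number {
  let total = quantity * unitPrice;
  if (quantity > 10) total = total * 0.95; // undocumented bulk rule
  return Math.round(total * 100) / 100;
}

// Characterization tests: pin down current behavior, whatever it is.
// The expected values were obtained by running the code, not from a spec.
assert.strictEqual(price(1, 9.99), 9.99);
assert.strictEqual(price(12, 9.99), 113.89); // documents the bulk discount as-is
console.log("Current behavior is pinned down; refactoring is now safer.");
```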

Agile Toolkit Podcast
Agile 2008 - Emily and Geoff Bache - Programming with the Stars and the TextTest framework

Agile Toolkit Podcast

Play Episode Listen Later Nov 19, 2008 23:42


I spoke with Emily and Geoff at the end of the conference and we were all a bit tired. Emily performed on the Programming with the Stars stage at Agile 2008 and gave a great run for the gold. One especially interesting segment was the customer story testing segment. For this segment Geoff played the customer and wrote a story test in TextTest, and then Emily and Michael Feathers made it pass. This appearance of the customer was well received by the audience and judges. Finally I get Geoff on a podcast. I recorded him last year, but he will jump the queue with this one. Geoff works on a heuristic scheduling problem that is somewhat tricky to write acceptance and regression tests around. To solve this problem he wrote his own test framework, TextTest. I hope you enjoy this power duo, and I suspect we will see more of them in the agile space. -bob

.NET Rocks!
Michael Feathers talks Legacy Code

.NET Rocks!

Play Episode Listen Later Jan 1, 1970 59:43


Carl and Richard talk to Michael Feathers about how to bring legacy code (code with no test coverage) into the 21st century. Support this podcast at https://redcircle.com/net-rocks/donations