Podcasts about RSpec

  • 42 podcasts
  • 192 episodes
  • 42m average episode duration
  • Infrequent episodes
  • Latest episode: Jan 28, 2025

POPULARITY

[Popularity trend chart, 2017–2024]


Best podcasts about RSpec

Latest podcast episodes about RSpec

Rails with Jason
247 - Steven R. Baker, Creator of RSpec

Rails with Jason

Jan 28, 2025 · 100:44 · Transcription available


In this podcast episode, Steven R. Baker dives into test doubles like mocks and stubs, discussing their essential role in robust code development and challenging traditional testing practices. The conversation covers the nuances of Test-Driven Development (TDD), including writing failing tests first for better code clarity and test coverage, and explores RSpec's influence on TDD. Additionally, Steven examines Ruby's adaptability and the integration of AI in programming, providing listeners with actionable strategies for more maintainable codebases and a balanced view on AI's evolving role in software development.
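For readers newer to the vocabulary in this summary, here is a minimal RSpec sketch of the stub-versus-mock distinction the episode digs into; the payment-gateway double and its methods are invented for illustration, not taken from the episode.

```ruby
require "rspec/autorun"

RSpec.describe "test doubles" do
  it "distinguishes a stub from a mock-style expectation" do
    gateway = double("payment gateway")

    # Stub: a canned answer; the example passes whether or not it is called.
    allow(gateway).to receive(:connected?).and_return(true)

    # Mock-style expectation: the example fails unless charge(100) happens.
    expect(gateway).to receive(:charge).with(100)

    gateway.charge(100) if gateway.connected?
  end
end
```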

The Bike Shed
438: Writing abstractions in tests

The Bike Shed

Sep 3, 2024 · 49:08


Writing abstractions in tests can be surprisingly similar to storytelling. The most masterful stories are those where the author has stripped away all of the extra information, and given you just enough knowledge to be immersed and aware of what is going on. But striking that balance can be tricky, both in storytelling and abstractions in tests. Too much information and you risk overwhelming the reader. Too little and they won't understand why things are operating the way they are. Today, Stephanie and Joël get into some of the more controversial practices around testing, why people use them, and how to strike the right balance with your information. They discuss the most common motivations for introducing abstractions, from improved readability to simplifying the test's purpose, and the types of tests where they are most likely to introduce abstractions. Our hosts also reflect on how they feel about different abstractions in tests – like custom matchers and shared examples – outlining when they reach for them, and the tradeoffs and benefits that come with each. To learn more about how to find the perfect level of abstraction, be sure to tune in!

Key Points From This Episode:

  • What's new in Joël's world: mocking out screens for processes or a new bit of UI.
  • The new tool Stephanie's using for reading on the web: Reader by Readwise.
  • Today's topic: controversial practices around testing.
  • How Stephanie and Joël feel about looping through arrays and having "it" blocks for each.
  • The most common motivations for introducing abstractions or helper methods into your tests.
  • Pros and cons of factories as abstractions in testing.
  • Types of tests where Joël and Stephanie are more likely to introduce abstractions.
  • Using page objects in system tests to improve user experience.
  • Finding the balance between too little and too much information with abstraction in testing.
  • Why Stephanie has been enjoying RSpec's fancier matchers.
  • Top uses of custom matchers, especially for specialized error messaging.
  • Why Stephanie prefers custom matchers over shared examples.
  • Using helper methods as a lighter version of abstraction.
  • Differences and similarities between abstractions in tests versus application code.
  • A reminder to keep your goals in mind when using abstraction.

Links Mentioned in Today's Episode:

  • Reader by Readwise (https://readwise.io/read)
  • Why factories (https://thoughtbot.com/blog/why-factories)
  • Why not factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot)
  • Capybara at a single level of abstraction (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction)
  • Writing custom RSpec matchers (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction)
  • Value objects shared examples (https://thoughtbot.com/blog/value-object-semantics-in-ruby)
  • 'DRY is about knowledge' (https://verraes.net/2014/08/dry-is-about-knowledge/)
  • Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
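As a concrete companion to the custom-matcher discussion above, here is a minimal sketch using RSpec's `RSpec::Matchers.define` API with a specialized failure message; the matcher name and the ActiveModel-style record interface are assumptions for illustration, not examples from the episode.

```ruby
# A small custom matcher with a specialized failure message.
RSpec::Matchers.define :have_error_on do |attribute|
  match do |record|
    record.valid?
    record.errors.include?(attribute)
  end

  failure_message do |record|
    "expected #{record.class} to have a validation error on " \
      "#{attribute.inspect}, but the errors were: " \
      "#{record.errors.full_messages.inspect}"
  end
end

# Usage in a spec (User is a hypothetical ActiveModel-style class):
#   expect(User.new(email: nil)).to have_error_on(:email)
```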

Remote Ruby
Rails 7.2 – First Impressions

Remote Ruby

Aug 23, 2024 · 43:24


In this episode, Jason, Chris, and Andrew dive deep into Ruby on Rails 7.2 discussions and share their experiences with the new RC1 rate-limiting feature. The conversation also covers the challenges of upgrading dependencies and the shift from asdf to mise for faster language management, and explores ways to simplify development workflows with dev containers. There's also a big debate on various testing methodologies, comparing RSpec and minitest, and deliberating the merits and pitfalls of fixtures versus factory libraries in maintaining robust codebases. Also, find out about Oaken, a hybrid tool blending features of Fixtures, FactoryBot, and Fabricator. Hit download now!

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Jason Charnes X/Twitter
Chris Oliver X/Twitter
Andrew Mason X/Twitter
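Since the episode compares RSpec and minitest, here is the same trivial test written in both styles for readers who know only one of them; the Calculator class is a stand-in invented for this sketch.

```ruby
require "minitest/autorun"

class Calculator
  def add(a, b)
    a + b
  end
end

# minitest style: plain Ruby classes and assert_* methods.
class CalculatorTest < Minitest::Test
  def test_adds_two_numbers
    assert_equal 4, Calculator.new.add(2, 2)
  end
end

# RSpec style, the equivalent expectation in its describe/it DSL:
#   RSpec.describe Calculator do
#     it "adds two numbers" do
#       expect(Calculator.new.add(2, 2)).to eq(4)
#     end
#   end
```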

The Bike Shed
430: Test Suite Pain & Anti-Patterns

The Bike Shed

Jun 25, 2024 · 40:57


Stephanie and Joël discuss the recent announcement of the call for proposals for RubyConf in November. Joël is working on his proposals and encouraging his colleagues at thoughtbot to participate, while Stephanie is excited about the conference being held in her hometown of Chicago! The conversation shifts to Stephanie's recent work, including completing a significant client project and her upcoming two-week refactoring assignment. She shares her enthusiasm for refactoring code to improve its structure and stability, even when it's not her own. Joël and Stephanie also discuss the everyday challenges of maintaining a test suite, such as slowness, flakiness, and excessive database requests. They discuss strategies to balance the test pyramid and adequately test critical paths. Finally, Joël emphasizes the importance of separating side effects from business logic to enhance testability and reduce complexity, and Stephanie highlights the need to address testing pain points and ensure tests add real value to the codebase.

  • RubyConf CFP (https://sessionize.com/rubyconf-2024/)
  • RubyConf CFP coaching (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform?pli=1)
  • Testing pyramid (https://thoughtbot.com/blog/rails-test-types-and-the-testing-pyramid)
  • Outside-in testing (https://thoughtbot.com/blog/testing-from-the-outsidein)
  • Writing fewer system specs with request specs (https://thoughtbot.com/blog/faster-tests-with-capybara-and-request-specs)
  • Unnecessary factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot)
  • Your Test Suite is Making Too Many Database Calls (https://www.youtube.com/watch?v=LOlG4kqfwcg)
  • Your flaky tests might be time dependent (https://thoughtbot.com/blog/your-flaky-tests-might-be-time-dependent)
  • The Secret Ingredient: How To Understand and Resolve Just About Any Flaky Test (https://www.youtube.com/watch?v=De3-v54jrQo)
  • Separating side effects to improve tests (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects)
  • Functional core, imperative shell (https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)
  • Thoughtbot testing articles (https://thoughtbot.com/blog/tags/testing)

Transcript:

STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Something that's new in my world is that RubyConf just announced their call for proposals for RubyConf in November. They're open for...we're currently recording in June, and it's open through early July, and they're asking people everywhere to submit talk ideas. I have a few of my own that I'm working with. And then, I'm also trying to mobilize a lot of other colleagues at thoughtbot to get excited to submit. STEPHANIE: Yes, I am personally very excited about this year's RubyConf in November because it's in Chicago, where I live, so I have very little of an excuse not to go [laughs]. I feel like so much of my conference experience is traveling to just kind of, like, other cities in the U.S. that I want to spend some time in and, you know, seeing all of my friends from...my long-distance friends. And it definitely does feel like just a bit of an immersive week, right? 
And so, I wonder how weird it will feel to be going to this conference and then going home at the end of the night. Yeah, that's just something that I'm a bit curious about. So, yeah, I mean, I am very excited. I hope everyone comes to Chicago. It's a great city. JOËL: I think the pitch that I'm hearing is submit a proposal to the RubyConf CFP to get a chance to get a free ticket to go to RubyConf, where you get to meet Bike Shed co-host Stephanie Minn. STEPHANIE: Yes. Ruby Central should hire me to market this conference [laughter] and that being the main value add of going [laughs], obviously. Jokes aside, I'm excited for you to be doing this initiative again because it was so successful for RailsConf kind of internally at thoughtbot. I think a lot of people submitted proposals for the first time with some of the programming you put on. Are you thinking about doing things any differently from last time, or any new thoughts about this conference cycle? JOËL: I think I'm iterating on what we did last time but trying to keep more or less the same formula. Among other things, people don't always have ideas immediately of what they want to speak about. And so, I have a brainstorming session where we're just going to get together and brainstorm a bunch of topics that are free for anyone to take. And then, either someone can grab one of those topics and pitch a talk on it, or it can be, like, inspiration where they see that it jogs their mind, and they have an idea that then they go off and write a proposal. And so, that allows, I think, a lot of colleagues as well, who are maybe not interested in speaking but might have a lot of great ideas, to participate and sort of really get a lot of that energy going. And then, from there, people who are excited to speak about something can go on to maybe draft a proposal. And then, I've got a couple of other events where we support people in drafting a proposal and reviewing and submitting, things like that. STEPHANIE: Yes, I really love how you're just involving people with, you know, just different skills and interests to be able to support each other, even if, you know, there's someone on our team who's, like, not interested in speaking at all, but they're, like, an ideas person, right? And they would love to see their idea come to life in a talk from someone else. Like, I think that's really cool, and I certainly appreciate it as a not ideas person [laughs]. JOËL: Also, I want to shout out that Ruby Central is doing CFP coaching sessions on June 24th, June 25th, and June 26th, and those are open to anyone. You can sign up. We'll put a link to the signup form in the show notes. If you've never submitted something before and you'd like some tips on what makes for a good CFP, how can you up your chances of getting accepted, or maybe you've submitted before, you just want to get better at it; I recommend joining one of those slots. So, Stephanie, what's new in your world? STEPHANIE: So, I just successfully delivered a big project on my client work last week. So, I'm kind of riding that wave and getting into the next bit of work that I have been assigned for this team, and I'm really excited to do this. But I also, I don't know, I've been just, like, thinking about it quite a bit. 
Basically, I'm getting to spend two dedicated weeks to just refactoring [laughs] some really, I guess, complicated code that has led to some bugs recently and just needing some love, especially because there's some whiffs of potentially, like, really investing in this area of the product, and people wanting to make sure that the foundation does feel very stable to build on top of for extending and changing that code. And I think I, like, surprised myself by how excited I was to do this because it's not even code I wrote. You know, sometimes when you are the one who wrote code, you're like, oh, like, I would love time to just go back and clean up all these things that I kind of missed the first time around or just couldn't attend to for whatever reason. But yeah, I think I was just a little bit in the peripheries of that code, and I was like, oh, like, just seeing some weird stuff. And now to kind of have the time to be like, oh, this is all I'm going to be doing for two weeks, to, like, really dive into it and get my hands dirty [laughs], I'm very excited. JOËL: I think that refactoring is a thing that can be really fun. And also, when you have a larger chunk of time, like two days, it's easy to sort of get lost in sort of grand visions or projects. How do you kind of balance the, I want to do a lot of refactoring; I want to take on some bigger things while maybe trying to keep some focus or have some prioritization? STEPHANIE: Yeah, that's a great question. I was actually the one who said, like, "I want two weeks on this." And it also helped that, like, there was already some thoughts about, like, where they wanted to go with this area of the codebase and maybe what future features they were thinking about. And there are also a few bugs that I am fixing kind of related to this domain. So, I think that is actually what I started with. And that was really helpful in just kind of orienting myself in, like, the higher impact areas and the places that the pain is felt and exploring there first to, like, get a sense of what is going on here. Because I think that information gathering is really important to be able to kind of start changing the code towards what it wants to be and what other devs want it to be. I actually also started a thread in Slack for my team. I was, like, asking for input on what's the most confusing or, like, hard to reason about files or areas in this particular domain or feature set and got a lot of really good engagement. I was pleasantly surprised [laughs], you know, because sometimes you, like, ask for feedback and just crickets. But I think, for me, it was very affirming that I was, like, exploring something that a lot of people are like, oh, we would love for someone to, you know, have just time to get into this. And they all were really excited for me, too. So, that was pretty cool. JOËL: Interesting. So, it sounds like you sort of budgeted some refactoring time and then, from there, broke it down into a series of a couple of debugging projects and then a couple of, like, more bounded refactoring projects, where, like, specifically, I want to restructure the way this object works or something like that. STEPHANIE: Yeah. I think there was that feeling of wanting to clean up this area of the codebase, but you kind of caught on to that bit of, you know, it can go so many different ways. And, like, how do you balance your grand visions [laughs] of things with, I guess, a little bit of pragmatism? 
So, it was very much like, here's all these bugs that are causing our customers problems that are kind of, like, hard for the devs to troubleshoot. You know, that kind of prompts the question, like, why? And so, if there can be, you know, the fixing of the bugs, and then the learning of, like, how that part of the system works, and then, hopefully, some improvements along the way, yeah, that just felt like a dream [laughs] for me. And two weeks felt about the right amount of time. I don't know if anyone kind of hears that and feels like it's too long or too little. I would be really curious. But I feel like it is complex enough that, like, context switching would, I think, make this work harder, and you kind of do have to just sit with it for a little bit to get your bearings. JOËL: A scenario that we encounter on a pretty regular basis is a customer coming to us and telling us that they're feeling a lot of test pain and asking what are the ways that we can help them to make things better and that test pain can come under a lot of forms. It might be a test suite that's really slow and that's hurting the team in terms of their velocity. It might be a test suite that is really flaky. It might be one that is really difficult to add to, or maybe one that has very low coverage, or one that is just really brittle. Anytime you try to make a change to it, a bunch of things break, and it takes forever to ship anything. So, there's a lot of different aspects of challenging test suites that clients come to us with. I'm curious, Stephanie, what are some of the ones that you've encountered most frequently? STEPHANIE: I definitely think that a slow test suite and a flaky test suite end up going hand in hand a lot, or just a brittle one, right? That is slowing down development and, like you said, causing a lot of pain. I think even if that's not something that a client is coming to us directly about, it maybe gets, like, surfaced a little bit, you know, sometime into the engagement as something that I like to keep an eye on as a consultant. And I actually think, yeah, that's one of kind of the coolest things, I think, about our consulting work is just getting to see so many different test suites [laughs]. I don't know. I'm a testing nerd, so I love that kind of stuff. And then, I think you were also kind of touching on this idea of, like, maintaining a test suite and, yeah, making testing just a better experience. I have a theory [laughs], and I'd be curious to get your thoughts on it. But one thing that I really struggle with in the industry is when people talk about writing tests as if it's, like, the morally superior thing to do. And I struggle with this because I don't think that it is a very good strategy for helping people feel better or more confident and, like, upskill at writing tests. I think it kind of shames people a little bit who maybe either just haven't gotten that experience or, you know, just like, yeah, like, for whatever reason, are still learning how to do this thing. And then, I think that mindset leads to bad tests [laughs] or tests that aren't really serving the purpose that you hope they would because people are doing it more out of obligation rather than because they truly, like, feel like it adds something to their work. Okay, I kind of just dropped that on you [laughs]. Do you have any reactions? JOËL: Yeah, I guess the idea that you're just checking a box with your test rather than writing code that adds value to the codebase. 
They're two very different perspectives that, in the end, will generate more lines of code if you're just doing a checkbox but may or may not add a whole lot of value. So, maybe before even looking at actual, like, test practices, it's worth stepping back and asking more of a mindset question: Why does your team test? What is the value that your team feels they get out of testing? STEPHANIE: Yeah. Yeah. I like that because I was about to say they go hand in hand. But I do think that maybe there is some, you know, question asking [laughs] to be done because I do think people like to kind of talk about the testing practices before they've really considered that. And I am, like, pretty certain from just kind of, at least what I've seen, and what I've heard, and what I've experienced on embedding into client teams, that if your team can't answer that question of, like, "What value does testing bring?" then they probably aren't following good testing practices [laughs]. Because I do think you kind of need to approach it from a perspective of like, okay, like, I want to do this because it helps me, and it helps my team, rather than, like you said, getting the check mark. JOËL: So, once we've sort of established maybe a bit of a mindset or we've had a conversation with the team around what value they think they're getting out of tests, or maybe even you might need to sell the team a little bit on like, "Hey, here's, like, all these different ways that testing can bring value into your life, make your life as developers easier," but once you've done that sort of pre-work and you can start looking at what's actually the problem with a test suite, a common complaint from developers is that the test suite is too slow. How do you like to approach a slow test suite? STEPHANIE: That's a good question. I actually...I think there's a lot of ways to answer that. But to kind of stay on the theme of stepping back a little bit, I wonder if assessing how well your test suite aligns with the testing pyramid would be a good place to start; at least, that could be where I start if I'm coming into a client team for the first time, right, and being asked to start assessing or just poking around. Because I think the slowness a lot of the time comes from a lot of quote, unquote, "integration tests" or, like, unit tests masquerading as integration tests, where you end up having, like, a lot of duplication of things that are being tested in ways that are integrating with some slow parts of the system like the database. And yeah, I think even before getting into some of the more discreet reasons why you might be writing slow tests, just looking at the structure of your test suite and what kinds of things you're testing, and, again, even going back to your team and asking, like, "What kinds of things do you test?" Or like, "Do you try to test or wish to be testing more of, less of?" Like looking at the structure, I have found to be a good place to start. JOËL: And for those who are not familiar, you used the term testing pyramid. This is a concept which says that you probably want to have a lot of small, fast unit tests, a medium amount of integration tests that test a few different components together, and then a few end-to-end tests. Because as you go up that pyramid, tests become more expensive. They take a lot longer to run, whereas the little unit tests are super cheap. You can create thousands of them, and they will barely impact your run time. Adding a dozen end-to-end tests is going to be noticeable. 
So, you want to balance sort of the coverage that you get from end to end with the sort of cheapness and ubiquity of the little unit tests, and then split the difference for tests that are in between. STEPHANIE: And I think that is challenging, even, you know, you're talking about how you want the peak of your pyramid to be end-to-end tests. So, you don't want a lot of them, but you do want some of them to really ensure that things are totally plumbed and working correctly. But that does require, I think, really looking at your application and kind of identifying what features are the most critical to it. And I think that doesn't get paid enough attention, at least from a lot of my client experiences. Like, sometimes teams just end up with a lot of feature bloat and can't say like, you know, they say, "Everything's important [chuckles]," but everything can't be equally important, you know? JOËL: Right. I often like to develop using a sort of outside-in approach, where you start by writing an end-to-end test that describes the behavior that your new feature ticket is asking for and use that to drive the work that I'm doing. And that might lead to some lower-level unit tests as I'm building out different components, but the sort of high-level behavior that we're adding is driven by adding an end-to-end spec. Do you feel that having one new end-to-end spec for every new feature ticket that you work on is a reasonable thing to do, or do you kind of pick and choose? Do you write some, but maybe start, like, coalescing or culling them, or something like that? How do you manage that idea that maybe you would or would not want one end-to-end spec for each feature ticket? STEPHANIE: Yeah, it's a good question. Actually, as you were saying that, I was about to ask you, do you delete some afterwards [laughs]? Because I think that might be what I do sometimes, especially if I'm testing, you know, edge cases or writing, like, the end-to-end test for error states. Sometimes, not all of them make it into my, like, final, you know, commit. But they, you know, had their value, right? And at least it prompted me to make sure I thought about them and make sure that they were good error states, right? Like things that had visible UI to the user about what was going on in case of an error. So, I would say I will go back and kind of coalesce some of them, but they at least give me a place to start. Does that match your experience? JOËL: Yeah, I tend to mostly write end-to-end tests for happy paths and then write kind of mid-level things to cover some of my edge cases, maybe a couple of end-to-end tests for particularly critical paths. But, at some point, there's just too many paths through the app that you can't have end-to-end coverage for every single branch on every single path that can happen. STEPHANIE: Yeah, I like that because if you find yourself having a lot of different conditions that you need to test in an end-to-end situation, maybe there's room for that to, like, be better encapsulated in that, like, more, like, middle layer or, I don't know, even starting to ask questions about, like, does this make sense with the product? Like, having all of these different things going on, does that line up with kind of the vision of what this feature is trying to be or should be? Because I do think the complexity can start at that high of a level. JOËL: How do you feel about the idea that adding more end-to-end tests, at some point, has diminishing returns? STEPHANIE: I'm not quite sure I'm following [laughs]. 
JOËL: So, let's say you have an end-to-end test for the happy path for every core feature of the app. And you decide, you know what, I want to add maybe some, like, side features in, or maybe I want to have more error states. And you start, like, filling in more end-to-end tests for those. Is it fair to say that adding some of those is a bit of a diminishing return? Like, you're not getting as much value as you would from the original specs. And maybe as you keep finding more and more rare edge cases, you get less and less value for your test. STEPHANIE: Oh, yeah, I see. And there's more of a cost, too, right? The cost of the time to run, maintain, whatever. JOËL: Right. Let's say they're roughly all equally expensive in terms of cost to run. But as you stray further and further off of that happy path, you're getting less and less value from that integration test or that end-to-end test. STEPHANIE: I'm actually a little conflicted about this because that sounds right in theory, but then in practice, I feel like I've seen error states not get enough love [laughs] that it's...I don't even want to say, like, you make any kind of claim [laughs] about it. But, you know, if you're going to start somewhere, if you have, like, a limited amount of time and you're like, okay, I'm only going to write a handful of end-to-end tests, yeah, like, write tests for your happy paths [laughs]. JOËL: I guess it's probably fair to say that error states just don't get as much love as they should throughout the entire testing stack: at the unit level, at the integration level, all the way up to end to end. STEPHANIE: I'm curious if you were trying to get at some kind of conclusion, though, with the idea of diminishing returns. JOËL: I guess I'm wondering if, from there, we can talk about maybe a breakdown of a particular testing pyramid for a particular test suite is being top heavy, and whether there's value in maybe pushing some of these tests, some of these edge cases, some of these maybe less important features down from that, like, top end-to-end layer into maybe more of an integration layer. So, in a Rails context, that might be moving system specs down to something like a request spec. STEPHANIE: Yeah, I think that is what I tend to do. I'm trying to think of how I get there, and I'm not quite sure that I can explain it quite yet. Yeah, I don't know. Do you think you can help me out here? Like, how do you know it's time to start writing more tests for your unhappy paths lower on the pyramid? JOËL: Ideally, I think a lot of your code should be unit-tested. And when you are unit testing it, those pieces all need coverage of the happy and unhappy paths. I think the way it may often happen naturally is if you're pushing logic out of your controllers because it's a little bit challenging sometimes to test Rails controllers. And so, if you're moving things into domain objects, even service objects, depending on how you implement them, just doing that and then making sure you unit test them can give you a lot more coverage of all the different edge cases that can happen. Where things sometimes fall apart is getting out of that business layer into the web layer and saying, "Hey, if something raises an error or if the save fails or something like that, does the user get a good experience, or do we just crash and give them a 500 page?" 
STEPHANIE: Yeah, that matches with a lot of what I've seen, where if you then spend too much time in that business layer and only handling errors there, you don't really think too much about how it bubbles up. And, you know, then you are digging through, like, your error monitoring [laughs] service, trying to find out what happened so that you can tell, you know, your customer support team [laughs] to help them resolve, like, a bug report they got. But I actually think...and you were talking about outside in, but, in some ways, in my experience, I also get feedback from the bottom up sometimes that then ends up helping me adjust some of those integration or end-to-end tests about kind of what errors are possible, like, down in the depths of the code [laughs], and then finding ways to, you know, abstract that or, like, kind of be like, "Oh, like, here are all these possible, like, exceptions that might be raised." Like, what HTTP status code do I want to be returned to capture all of these things? And what do I want to say to the user? So, yeah, I'm [laughs] kind of a little lost myself, but this idea that going both, you know, outside in and then maybe even going back up a little bit has served me well before. JOËL: I think there can be a lot of value in sort of dropping down a level in the pyramid, and maybe instead of doing sort of end-to-end tests where you, like, trigger a scenario where something fails, you can just write a request spec against the controller and say, "Hey, if I go to this controller and something raises an error, expect that you get redirected to this other location." And that's really cheap to run compared to an end-to-end test. And so, I think that, for me, is often the right compromise is handling error states at sort of the next lowest level and also in slightly more atomic pieces. So, more like, if you hit this endpoint and things go wrong, here's how things happen. And I use endpoint not so much in an API sense, although it could be, but just your, you know, maybe you've got a flow that's multiple steps where, you know, you can do a bunch of things. But I might have a test just for one controller action to say, "Hey, if things go wrong, it redirects you here, or it shows you this error page." Whereas the end-to-end test might say, "Oh, you're going to go through the entire flow that hits multiple different controllers, and the happy path is this nice chain." But each of the exit points off at where things fail would be covered by a more scoped request spec on that controller. STEPHANIE: Yeah. Yeah. That makes sense. I like that. JOËL: So, that's kind of how I've attempted to balance my pyramid in a way that balances complexity and time with coverage. You mentioned that another area that test suites get slow is making too many requests to the database. There's a lot of ways that that happens. Oftentimes, I think a classic is using a factory where you really don't need to, persisting data to the database when all you needed was some object in memory. So, there are different strategies for avoiding that. It's also easy to be creating too much data. So, maybe you do need to persist some things in the database, but you're persisting a hundred objects into memory or into the database when you really meant to persist two, so that's an easy accident. 
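A rough sketch of the request-spec compromise Joël describes above, in a Rails app: cover the error path at the controller level instead of in a slow end-to-end test. The route, params, and flash copy here are invented for illustration.

```ruby
# spec/requests/orders_spec.rb (hypothetical application code assumed)
require "rails_helper"

RSpec.describe "POST /orders", type: :request do
  it "redirects with an error message when the order cannot be saved" do
    post "/orders", params: { order: { quantity: nil } }

    # Much cheaper to run than driving a browser through the whole flow.
    expect(response).to redirect_to("/orders/new")
    expect(flash[:alert]).to eq("Your order could not be placed")
  end
end
```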
A couple of years ago, I gave a talk at RailsConf titled "Your Test Suite is Making Too Many Database Requests" that went over a bunch of different ways that you can be doing a lot of expensive database requests you didn't plan on making and how that slows down your test suite. So, that is also another hot spot that I like to look at when dealing with a slow test suite. STEPHANIE: Yeah, I mentioned earlier the idea of unit tests really masquerading as integration tests [laughs]. And I think that happens especially if you're starting with a class that may already be a little bit too big than it should be or have more responsibilities than it should be. And then, you are, like, either just, like, starting with using the create build, like, strategy with factories, or you find yourself, like, not being able to fully run the code path you're trying to test without having stuff persisted. Those are all, I think, like, test smells that, you know, are signaling a little bit of a testing anti-pattern that, yeah, like, is there a way to write, like, true unit tests for this stuff where you are only using objects in memory? And does that require breaking out some responsibilities? That is a lot of what I am kind of going through right now, actually, with my little refactoring project [laughs] is backfilling some tests, finding that I have to create a lot of records. And you know what? Like, the first step will probably be to write those tests and commit them, and just have them live there for a little while while I figure out, you know, the right places to start breaking things up, and that's okay. But yeah, I did want to, like, just mention that if you are having to create a lot of records and then also noticing, like, your test is running kind of slow [laughs], that could be a good indicator to just give a good, hard look at what kind of style of test you think you're writing [laughs]. JOËL: Yeah, your tests speak to you, and when you're feeling pain, oftentimes, it can be a sign that you should consider refactoring your implementation. And I think that's doubly true if you're writing tests after the fact rather than test driving. Because sometimes you sort of...you came up with an implementation that you thought would be good, and then you're writing tests for it, and it's really painful. And that might be telling you something about the underlying implementation that you have that maybe it's...you thought it's well scoped, but maybe it actually has more responsibilities than you initially realized, or maybe it's just really tightly coupled in a way that you didn't realize. And so, learning to listen to your tests and not just sort of accepting the world for being the way it is, but being like, "No, I can make it better." STEPHANIE: Yeah, I've been really curious why people have a hard time, like, recognizing that pain sometimes, or maybe believing that this is the way it is and that there's not a whole lot that you can do about it. But it's not true, like, testing really does not have to be painful. And I feel like, again, this is one of those things that's like, it's hard to believe until you really experience it, at least, that was the case for me. But if you're having a hard time with tests, it's not because you're not smart enough. Like, that, I think, is a thing that I really want to debunk right now [laughs] for anyone who has ever had that thought cross their mind. Yeah, things are just complicated and complex somehow, or software entropy happens. 
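One concrete form of the create-versus-build smell described in this exchange, assuming FactoryBot: persisting records a test never reads back. Keeping the object in memory avoids the database round trip; the Invoice model and its formatted_total method are invented for this sketch.

```ruby
RSpec.describe Invoice do
  it "formats its total as dollars" do
    # build_stubbed constructs an in-memory record with a fake id;
    # no INSERT is issued, so the example stays fast.
    invoice = FactoryBot.build_stubbed(:invoice, total_cents: 1500)

    expect(invoice.formatted_total).to eq("$15.00")
  end
end
```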
That's, like, not how it should be, and we don't have to accept that [laughs]. So, I really like what you said about, oh, you can change it. And, you know, that is a bit of a callback to the whole mindset of testing that we mentioned earlier at the beginning. JOËL: Speaking of test suites, something we have not covered yet is parallelizing them. That could probably be its own Bike Shed episode entirely, all about parallelizing a test suite. We've done entire engagements where our job was to come in and help parallelize a test suite, make it faster. And there's a lot of, like, pros and cons. So, I think maybe we can save that for a different episode. And, instead, I'd like to quickly jump in a little bit to some other common pain points of test suites, and I would say probably top of that list is test flakiness. How do you tend to approach flakiness in a client project? STEPHANIE: I am, like, laughing to myself a little bit because I know that I was dealing with test flakiness on my last client engagement, and that was, like, such a huge part of my day-to-day is, like, hitting that retry button. And now that I am on a project with, like, relatively low flakiness, I just haven't thought about it at all [laughs], which is such a privilege, I think [laughs]. But one of the first things to do is just start, like, capturing metrics around it. If you, you know, are hearing about flakiness or seeing that, like, start to plague your test suite or just, you know, cropping up in different ways, I have found it really useful to start, like, I don't know, just, like, maybe putting some of that information in a dashboard and seeing how, just to, like, even make sure that you are making improvements, that things are changing, and seeing if there's any, like, patterns around what's causing the flakiness because there are so many different causes of it. And I think it is pretty important to figure out, like, what kind of code you're writing or just trying to wrangle. That's, you know, maybe more likely to crop up as flakiness for your particular domain or application. Yeah, I'm going to stop there and see, like, because I know you have a lot of thoughts about flakiness [laughs]. JOËL: I mean, you mentioned that there's a lot of different causes for flakiness. And I think, in my experience, they often sort of group into, let's say, like, three different buckets. Anytime you're testing code that's doing things that are non-deterministic, that's easy for tests to be flaky. And so, you might think, oh, well, you know, you have something that makes a call to random, and then you're going to assert on a particular outcome of that. Well, clearly, that's going to not be the same every time, and that might be flaky. But there are, like, more subtle versions of that, so maybe you're relying on the system clock in some way. And, you know, depending on the time you run that test, it might give you a different value than you expect, and that might cause it to fail. And it doesn't have to be you're asserting on, like, oh, specifically a given millisecond. You might be doing math around, like, number of days, and when you get near to, let's say, the daylight savings boundary, all of a sudden, no, you're off by an hour, and your number of days...calculation breaks because relying on the clock is something that is inherently non-deterministic. Non-determinism is a bucket. 
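To make the clock example concrete, here is one common fix, assuming ActiveSupport's time helpers are available: pin the current time so a day-counting assertion cannot flake near a daylight-saving or midnight boundary. The trial-period scenario is invented for this sketch.

```ruby
require "active_support/testing/time_helpers"

RSpec.describe "days remaining in a trial" do
  include ActiveSupport::Testing::TimeHelpers

  it "is deterministic regardless of when the suite runs" do
    # travel_to stubs Time.now and Date.today for the block's duration.
    travel_to Time.utc(2024, 6, 1, 12) do
      trial_ends_on = Date.new(2024, 6, 15)

      expect((trial_ends_on - Date.today).to_i).to eq(14)
    end
  end
end
```

In a full Rails app you would typically use Time.zone and Date.current instead of the UTC variants above.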
Leaky tests are another bucket of failures that I see, things where one test might impact another that gets run after the fact, oftentimes by mutating some sort of global state. So, maybe you're both relying on some sort of, like, external file that you're both writing to or maybe a cache that one is writing to that the other one is reading from, something like that. It could even just be writing records into the database in a way that's not wrapped in a transaction, such that there's more data in the database when the next test runs than it expects. And then, finally, if you are doing any form of parallelization, that can improve your test suite speed, but it also potentially leads to race conditions, where if your resources aren't entirely isolated between parallel test runners, maybe you're sharing a database, maybe you're sharing a Redis instance or whatever, then you can run into situations where you're both kind of fighting over the same resources or overriding each other's data, or things like that, in a way that can cause tests to fail intermittently. And I think having a framework like that of categorization can then help you think about potential solutions because debugging approaches and then solutions tend to be a little bit different for each of these buckets. STEPHANIE: Yeah, the buckets of different causes of flaky tests you were talking about, I think, also reminded me that, you know, some flakiness is caused by, like, your testing environment and your infrastructure. And other kinds of flakiness are maybe caused more from just the way that you've decided how your code should work, especially that, like, non-deterministic bucket. So, yeah, I don't know, that was just, like, something that I noticed as you were going through the different categories. And yeah, like, certainly, the solutions for approaching each kind are very different. JOËL: I would like to pitch a talk from RubyConf last year called "The Secret Ingredient: How To Understand and Resolve Just About Any Flaky Test" by Alan Ridlehoover. Just really excellent walkthrough of these different buckets and common debugging and solving approaches to each of them. And I think having that framework in mind is just a great way to approach different types of flaky tests. STEPHANIE: Yes, I'll plus one that talk, lots of great pictures of delicious croissants as well. JOËL: Very flaky pastry. STEPHANIE: [laughs] Joël, do you have any last testing anti-pattern guidance for our audience who might be feeling some test pain out there? JOËL: A quick list, I'm going to say tight coupling that has then led to having a lot of stubbing in your tests often leads to tests that are very brittle, so tests that maybe don't fail when they should when you've actually broken things, or maybe, alternatively, tests that are constantly failing for the wrong reasons. And so, that is a thing that you can fix by making your code less coupled. Tests that also require stubbing a lot of things because you do a lot of side effects. If you are making a lot of HTTP calls or things like that, that can both make a test more complex because it has to be aware of that. But also, it can make it more non-deterministic, more flaky, and it can just make it harder to change. And so, I have found that separating side effects from sort of business logic is often a great way to make your test suite much easier to work with. I have a blog post on that that I'll link in the show notes. 
And I think this maybe also approaches the idea of a functional core and an imperative shell, which I believe was an idea pitched by Gary Bernhardt, like, over ten years ago. There's a famous video on that that we'll also link in the show notes. But that architecture for building an app can lead to a much nicer test to write. I guess the general idea being that testing code that does side effects is complicated and painful. Testing code that is more functional tends to be much more pleasant. And so, by not intermingling the two, you tend to get nicer tests that are easier to maintain. STEPHANIE: That's really interesting. I've not heard that guidance before, but now I am intrigued. That reminded me of another thing that I had a conversation with someone about. Because after the RailsConf talk I gave, which was about testing pain, there was some stubbing involved in the examples that I was showing because I just see a lot of that stuff. And, you know, this audience member kind of had that question of, like, "How do you know that things are working correctly if you have to stub all this stuff out?" And, you know, sometimes you just have to for the time being [chuckles]. And I wanted to just kind of call back to that idea of having those end-to-end tests testing your critical paths to at least make sure that those things work together in the happy way. Because I have seen, especially with apps that have a lot of service objects, for some reason, those being kind of the highest-level test sometimes. But oftentimes, they end up not being composed well, being quite coupled with other service objects. So, you end up with a lot of stubbing of those in your test for them. And I think that's kind of where you can see things start to break down. JOËL: Yep. And when the RailsConf videos come out, I recommend seeing Stephanie's talk, some great gems in there for building a more maintainable test suite. Stephanie and I and, you know, most of us here at thoughtbot, we're testing nerds. We think about this a lot. We've also written a lot about this. There are a lot of resources in the show notes for this episode. Check them out. Also, just generally, check out the testing tag on the thoughtbot blog. There is a ton of content there that is worth looking into if you want to dig further into this topic. STEPHANIE: Yeah, and if you are wanting some, like, dedicated, customized testing training, thoughtbot offers an RSpec workshop that's tailored to your team. And if you kind of are interested in the things we're sharing, we can definitely bring that to your company as well. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. 
Or you can email us at: referrals@thoughtbot.com with any questions.
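To ground the "functional core, imperative shell" idea discussed in this episode, here is a minimal Ruby sketch; every class and method name below is invented for illustration, not taken from the episode or the linked screencast.

```ruby
# Functional core: pure calculation, trivially unit-testable in memory.
module PriceMath
  def self.total(line_items)
    line_items.sum { |item| item.fetch(:unit_price) * item.fetch(:quantity) }
  end
end

# Imperative shell: the side effects (HTTP charge, database write) live
# here, kept thin so only a few integration tests are needed.
class Checkout
  def initialize(order, gateway)
    @order = order
    @gateway = gateway
  end

  def call
    amount = PriceMath.total(@order.line_items)
    @gateway.charge(amount)          # side effect: HTTP
    @order.update!(total: amount)    # side effect: database
  end
end

# The core needs no doubles and no database to test:
#   PriceMath.total([{ unit_price: 500, quantity: 2 }]) #=> 1000
```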

The Bike Shed
427: RailsConf Recap and Conversing About Coupling

The Bike Shed

May 28, 2024 · 37:03


Joël and Stephanie talk RailsConf! (https://railsconf.org/). Joël shares how he performed as a D&D character, Glittersense the gnome, to make his talk on Turbo features entertaining and interactive. Stephanie's talk focused on addressing test pain by connecting it to code coupling, offering practical insights and solutions. They agree on the importance of continuous improvement as speakers and developers and trying new approaches in talks and code design, and recommend Jared Norman's RailsConf talk on design patterns, too!

  • That One Thing: Reduce Coupling for More Scalable and Sustainable Software (https://www.informit.com/articles/article.aspx?p=2222816)
  • Connascence.io (https://connascence.io/)
  • Connascence as a vocabulary to discuss coupling (https://thoughtbot.com/blog/connascence-as-a-vocabulary-to-discuss-coupling)
  • The value of specialized vocabulary (https://bikeshed.thoughtbot.com/356?t=0)

Transcript:

We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.

JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I think I can speak for both of us and say what's new in our world is that you and I just came back from RailsConf in Detroit. JOËL: Yeah, we were there for, I guess, it's a three-day conference. Both of us were giving talks. STEPHANIE: Yeah. I don't think we've both spoken at a conference for at least a little over a year, so that was really fun kind of to catch up in person. And there was a whole crew of thoughtboters who were there. Yeah, I feel like we were hanging out, like, a lot [chuckles] all of last week, just seeing each other, talking about, you know, rehearsing our talks and spending time together on...there was, like, a hack day, and we were sitting at the table together. So, I feel like I'm totally caught up on everything that's new in your world, and that's it. That's the end of the show [laughs]. JOËL: On that note, shall we wrap up? STEPHANIE: [laughs] That would not be very fair to our listeners. [laughter] JOËL: Yeah. So, how was the conference speaking experience for you? STEPHANIE: Ooh, it was really great this year. I have not spoken at a RailsConf before, so this was actually, I think, a bigger stage than I had experienced before, and I had a great time. I met Ruby friends, new and old, and, yeah, I left feeling very gooeyed, and very energized, and just so grateful for the Rails community [laughs]. Yeah, I had a very lovely time, kind of being a little bit outside my normal life for a few days. And I think my favorite part about these things is just like, anywhere you go, you can kind of just have a shared interest with someone, and you can start a conversation with them. JOËL: That's really interesting. 
Do you find yourself just reaching out to strangers at conferences like this? Or do you tend to just hang out with the people that you know? STEPHANIE: Oh, I think a little bit of both. I like to get meals with people I know. But if I'm just hanging out in, like, the lobby or if I happen to get a seat for a talk and I'm sitting next to someone that I don't know, I find it quite easy to just be like, "Hi, like, I'm Stephanie. Are you excited for this talk?" Or, like, "What good talks have you seen recently?" There's an aspect of, like, the social butterfly that comes out of me when I'm at these things. Because I just don't get to have, like, easy access to, I don't know, people with, like, that shared interest or people who are willing to just have a conversation with you normally, I think. JOËL: Yeah, would you describe yourself more as an introvert or an extrovert? STEPHANIE: I am an extroverted introvert [laughter]. I feel like maybe that might be interpreted as a non-answer, but I think I lean more on the introvert side. But you know when you're with a group of people, and there's not, like, a very clear extrovert in that conversation, and then you're like, oh, I have to do the heavy [chuckles] lifting of the social lubrication [laughs] in this conversation, I can step into that role, reluctantly [laughs]. JOËL: Okay. I like the label that you used, the extrovert introvert, in that I enjoy social situations. I do well in social situations. But they also consume a lot of energy for me. I don't necessarily get sort of recharged by doing social events. So, people will be surprised when they find out that I tend to talk about myself as an introvert because, like, "Oh, but you're, like, you know, you're not awkward. You engage very well in different group situations." STEPHANIE: You have a podcast [laughs]. JOËL: And the truth is I enjoy those things, right? I really like social interaction, but it does, after a while, wear me out. STEPHANIE: Yeah, that makes sense. I did want to spend a little bit of time talking about the talk you gave at RailsConf this year: "Dungeons & Dragons & Rails." JOËL: I got to have a lot of fun with the theme. The actual content was introducing people to Turbo by building an interactive Dungeons & Dragons character sheet using vanilla Rails and a little bit of Turbo. So, we're not even writing any JavaScript. We're just using the Turbo helpers, a little bit of Action Cable to mimic something a little bit like...people who are in the know might be familiar with the site D&D Beyond, which is kind of the official D&D online character sheet website. Of course, it wasn't anywhere near as fancy because it's a 30-minute talk and showcasing different features, but that's what we were aiming for. STEPHANIE: Yeah, you know, you've talked a bit about giving talks on the show before, but I wanted to get into what made this one different because I think it could be fun for our listeners. [laughter] JOËL: The way I structured this talk so it has a theme. It's about Dungeons & Dragons, and we're building a character sheet. The way I wrote the talk was it's broken up into chapters. Each chapter is teaching a new feature in Turbo that I want to show off. In order to motivate learning each of these features...because I don't like to just say, "Oh, here's a thing that technology can do. Oh, here's a thing that technology can do." That's boring. You need a reason to learn that. So, I needed a reason to say, "We need to add this to a character sheet." 
So, every sort of chapter of the talk opens up with a little narrative portion. We're following this character, Glittersense, the gnome, and he's on adventures. And at different points in the adventures, he's going to do different types of rolls or need different stats and things. And so, when we reach the point in the adventure where we need that, we sort of freeze frame and then say, "Okay, let's add that as a feature to the character sheet." And then, oh no, it turns out that this feature is a little bit more complicated. We're going to have to learn a new Turbo feature to do that. Who would have guessed? And then, we learn a new Turbo feature together. And then, we go back to the narrative portion. The adventures of Glittersense continue. And then, oh no, we're going to need to add another feature to the character sheet. And that's sort of how the talk is structured. STEPHANIE: Yeah. And you did a really cool thing with the narrative portions, which was you basically performed as Glittersense, the gnome, voice and posture, and a lot of really great acting from you [laughs], in my opinion. JOËL: That is something that came out pretty late in the talk preparation. So, I knew I wanted this kind of alternating story and code structure. Then, like, the weekend before RailsConf, I'm running through my slide deck, and I realized, you know what? What if instead of narrating Glittersense's adventures, what if I went first person for those sections? Glittersense tells his own story. And then, from there, it wasn't a big jump to say, you know what? This is D&D. If I'm going first person and narrating, I really should do a voice. And this is a conversation I had with a couple of people at the speaker dinner. And, of course, everyone's like, "You should 100% do the voice." And I was really not feeling confident in my ability to pull it off. So, for the next two nights, because I was speaking on the third day, the next two nights at the conference, in the evenings, I'm in the hotel room in front of the mirror just practicing my gnome voice to try to get something that got the persona of Glittersense, the gnome, across to the audience. STEPHANIE: How would you describe the persona? JOËL: Very extra. STEPHANIE: [laughs] JOËL: Very high energy. STEPHANIE: Yes. The name Glittersense is very extra, after all. JOËL: [laughs]. I punctuated a lot of the things that he says with just high-pitched laughter. He's also...so, the framing device for all of this is that you're in a tavern listening to him tell his adventures. I wanted a little bit of the sense that Glittersense is maybe embellishing a little bit. I think it may be too much to say he's full of himself, but he's definitely making himself to be the hero of the story, and maybe making himself to be slightly cooler than he really was. STEPHANIE: Yeah. I definitely got, like, a little bit of eccentricity, too, from the persona. And you know when you just, I don't know, meet an older person who has, like, a lot of life experience, and they want to tell you about it [laughter], but you do kind of maybe have a little bit of suspicion around how much they're exaggerating [laughs]. But it was really fun. Everyone I talked to afterwards, like, loved it. And I got to share the little nugget that, like, oh yeah, and Joël only, like, started doing the voice, like, decided that he was going to do it two days ago. And they were just all really, like, blown away because it seemed so well practiced, and it was really fun. 
JOËL: I got to do something really fun, also, with physical space because Glittersense narrates his portion, sort of the story portions, but then the code portions where we're talking about Turbo, I'm talking in my own voice. And so, when I'm talking about Turbo, I'm standing at the lectern. And when I'm Glittersense, I'm kind of off to the side on the stage and doing the voice. And so, there's this almost, like, two worlds that are inhabited: one by Joël, the speaker, and one by Glittersense, the gnome. And it got to the point where I don't say or do anything. I only move from the lectern to the, like, portion of the stage where Glittersense lives. And the audience starts chuckling and, like, nothing has happened yet, like, no jokes have been told. No voice has happened. No slides have changed. But the anticipation, people know what's coming. STEPHANIE: Yeah. And I think the best part, what I really found just really fun and, I don't know, every time it happened, I just really enjoyed it, when you transitioned out of Glittersense, the gnome, and back to Joël because you were so nonchalant about it. You kind of, like, straighten up rather than having your little kind of crouchy gnome posture, and then just walk across back to the podium. And then, in your normal voice, go back to just, you know, sharing very...not necessarily dry, but just, like, straight to the point. "And this is, like, how you, you know, create a frame in [laughs] Turbo," as if nothing happened [laughs] when even just, like, you know, 20 seconds ago, you were just enthusing about, like, slaying the bandit chieftain [laughter] known as Glittersense. JOËL: Uh-huh. I think, especially when I open, so I get introduced. I'm off stage. I walk onto the stage, and I'm immediately Glittersense. And I'm telling a story, and the intro goes on for, like, quite a while. It's a big story chunk. And then, at some point, I just walk over to the lectern, drop the voice, hit next slide, and it's my title slide. I'm just like, "Okay, now welcome to Dungeons & Dragons on Rails. We're going to build a character sheet together." STEPHANIE: Yeah, that's exactly the moment I'm thinking of.
And then, once we're sort of all bought into this world, then we move to the title slide and talk about, okay, we're here to build a character sheet and all that stuff. And I think that it wouldn't have had the same impact if I'd, like, opened with that and then gone into Glittersense's adventures. And that's something that was not the case at the beginning. I really reworked the talk to make it in that order. And I think that the talk had a lot more impact for doing that. STEPHANIE: Yeah, definitely. I guess I also just wanted to point out that this is very different from all your other talks. And I think it's really cool that, you know, you are a veteran speaker, but you still find ways to do something new and try something that you've never done before, and yeah, find ways, new ways to, like, speak and engage people and teach. I don't know, do you have just any thoughts about why or how you got into a position to be like, "Oh, you know, I'm going to do something super different this time around" [laughs]? JOËL: So, every talk I give, I try to do something new, something different, to push myself as a speaker to get better. That might be in the writing of the talk; that might be in the delivery. More recently, I've been trying to do more with dynamic presence on stage. So, when I spoke at RubyConf San Diego, I was trying to not just stand at the lectern but to learn to be able to give my talk while also, you know, walking around the stage, looking at the audience, making pauses where it's necessary, not to just be so into the delivery of the talk by just standing at the podium and, like, going through my deck, which is a small thing but I think is an area I wanted to improve in. This time, I was playing around with some more narrative framing and ended up, yeah, like, pushing it to an extreme. And it works with the theme because inhabiting a character and role-playing is the core part of D&D. Not everybody plays a D&D character by doing a voice. You are a little bit extra if you do that. But it's not uncommon for people to do a voice. And so, it kind of fit perfectly with my theme. I just needed to get the self-confidence to do it. So, thank you to everyone at the speaker dinner that was like, "No, you totally got this. You should do this," because I was feeling very unsure. STEPHANIE: It really paid off, so... JOËL: I'd like to circle back to your talk, though. So, you gave, basically, the first talk of the conference. You were the first session after the keynote. A theme that came up multiple times in your talk was this idea of coupling and how it affects different parts of our code and, particularly the way that we structure tests or the way that we feel test pain. How did you, when you were prepping this talk, discover that theme and decide to lift it up? Was that something that you knew ahead of time you wanted to talk about, or did it just sort of emerge as part of the talk preparation process? STEPHANIE: That's a really great question, and I'm glad you picked up on that. So, my talk was called: "So, Writing Tests Feels Painful. What Now?" Originally, when I came up with this idea, it actually started with coupling. I realized that I wanted to give a talk about coupling because it's just something that I was struggling with or, like, had seen other people struggle with and really wanting kind of a discrete resource, wanting to provide that. But as I was just thinking about it, I was like, oh, like, there are so many different ways that this could go. 
On one hand, it was a very like important topic to me, but also maybe too big of a topic. And so, I actually, like, kind of put that on the back burner. And it wasn't until later when I connected it to another...it wasn't necessarily different at all, but just, like, an extension of this idea is, oh, like, people are struggling with coupling in tests or, like, it manifests in tests. And so, I thought maybe that could be the angle that I took on this topic that kind of gave me a little bit more focus. And I didn't even end up saying like, "Yeah, this talk was, like, born out of just, you know, wrestling with coupling or anything like that." So, it's cool, to me, that you picked up on it as a theme because it was...I had, you know, ended up not being super explicit about it, but it was certainly, like, a thing that was driving the content from my perspective. JOËL: Interesting. So, it started as a coupling talk and then got sort of focused through the lens of testing. STEPHANIE: Yeah. And I think there was a part of me that was like, you know, I don't know if I could just teach the concept of coupling, like, by itself without the framing of testing for people who this is, like, a new concept for them. I realized that maybe it would be more effective to be like, "Hey, like, have you experienced test pain? You know, have you had to mock out a billion objects or changed, you know, made one change and then had to fix, like, a million tests subsequently? Then this talk is for you." And then weave in the idea of coupling in it to kind of start to help people feel familiar with it or just, like, identify it without as much, like, jargon as kind of I've seen when I've tried to figure out, like, how to manage it. JOËL: It's interesting because I think it gives you a, like, concrete, valuable thing to optimize for as opposed to, like, hey, let's lower coupling because then you're writing, you know, quote, unquote, "better code." And you get to feel better about yourself as a programmer because you're doing things the, quote, unquote, "right way." That's very kind of hand-wavy, and I think sometimes leads people down a bad path where they're optimizing things that they shouldn't be. But the tests give you this very concrete way to say, "Hey, we're not just trying to reach the, like, low score record for the app in terms of coupling. We're trying to reduce test pain. Tests are painful. And that pain is telling us something. It's telling us that we've crossed some sort of threshold for coupling. Let's find ways to reduce it, not so that we can feel good about ourselves, but so that our tests are actually manageable." STEPHANIE: Yeah, I am really glad you picked up on that, too, because I feel the exact same way when someone just tells me to decouple something or, like, makes a note that, like, oh, this feels really coupled. I don't know what that means necessarily. And it's not very convincing to just be like, "Oh, you should write loosely coupled code [laughs]," at least for me. What you said just now, it's like, it's not to feel good about ourselves, you know, to write code that way, but, actually, to just feel good about our code, period [laughs]. And, yeah, finding that validation through just, like, actually working with code that is easier to change that is the goal, not necessarily to, yeah, kind of pursue some totally subjective, like, metric. JOËL: So, one of the kinds of coupling that you called out, I think, was where you hardcode a class name of some other class in your object. 
And that feels, like, really sort of innocuous. Like, of course, my objects can talk to other objects. And maybe I want to, like, refer to a class somewhere. Why is that such a like tricky piece of coupling to work with? STEPHANIE: It's not necessarily intentional sometimes. Like, you just do it because you're like, well, I need access to this class somewhere, and I happen to already be in this file. So, why not just hard-code it here? I do think it's a little tricky because the file that you're writing might be, like, very far down in, like, your code flow or, like, your code path, like, very far from, like, a controller or any kind of entry point into your system, at least based on what I've seen in a lot of modern Rails apps. And so, I think that coupling gets really, really obscured. I have found that, like, if I have to kind of write a more, like, a higher level test, like, maybe a request spec or something, there are times when I'm, like, having to deal with a lot of classes just to set stuff up in a test like that that I didn't think I would have to [chuckles] when I first went about trying to just be like, oh, like, let's just figure out how to get a 200 response [laughs] from this request. So, you're really burying perhaps the things that are needed to set up, like, that full path of execution. And sometimes, it only comes out when you're writing a test for it. JOËL: And you mentioned briefly, in passing, the idea that oftentimes this sort of coupling manifests as a lot of extra test setup because your object that you're trying to test now also needs all these other things that are related in order to be tested. But sometimes even when you hard code a class, though, you can't even just say, "Oh, I want this particular user or something returned." So, you have to then do something like allow this class to receive class method and return, and now you're stubbing. And I don't know how you feel about stubs in RSpec. I always treat them a little bit like a code smell in the like classic sense of it's not necessarily bad, but maybe pause, take a look, and ask yourself, "Why is that there, and should I do things differently?" STEPHANIE: Yeah. I ended up having, like, a lot of examples of stubbing in my example because the code had just been set up where that was the only way that you could access those collaborators, essentially, to, like, make an assertion on them, or have them do something different because you actually needed to go into a different path, right? And I was like, yeah, this should feel weird. You should feel a little bad [laughs] or at least, you know, kind of just pay attention to that feeling, even if you can't really do anything about it in that particular instance. But on the flip side, you know, it's like, yes, it feels a bit strange, you know, but it's not all bad, right? Like, you're kind of learning like, oh, hey, like, I am coupled to this hard-coded class because I am needing to stub, like, a class method that returns it, or that constructs it. And at least you've exposed that, you know, for yourself. One thing that I was running into a lot in my example, too, was that those things, like, weren't obvious when you were just reading maybe, like, the public methods and trying to figure out what was happening in them because they were wrapped in private methods. 
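As a minimal sketch of the situation being described here (all of the names are hypothetical, not from the codebase Stephanie is discussing): a collaborator hard-coded behind a private method, and the stubbing it forces on a spec.

class ReportGenerator
  def generate
    "Report for #{current_user.name}"
  end

  private

  # The coupling to UserDirectory is invisible from the public method.
  def current_user
    UserDirectory.current
  end
end

RSpec.describe ReportGenerator do
  it "includes the user's name" do
    user = instance_double("User", name: "Stephanie")
    # The stub that should give us pause: reaching into a hard-coded
    # class to control what its class method returns.
    allow(UserDirectory).to receive(:current).and_return(user)

    expect(ReportGenerator.new.generate).to eq("Report for Stephanie")
  end
end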
I was a little bit conflicted about this because there were times when it was already just a single method call, but then it was just kind of wrapped in a private method that actually hid [laughs] the things, like all the dependencies that were passed as arguments. And I found that to be, sure, it looks kind of cleaner. But then all you need to do is scroll down [laughs], and then you're like, oh, actually, there's all these other things involved, but it was kind of hidden away for me. And I found that, actually, like, at least when I actually needed to change things, less helpful than I imagine what the, you know, code author intended. Do you have any thoughts about hiding details like that? JOËL: I'm kind of a big fan. STEPHANIE: Hmmm. JOËL: The general idea, I think, is called the single level of abstraction principle. Whatever sort of public method that you're calling is often implemented in terms of...let's say it does a few different things. It's implemented in terms of, like, these sort of high-level concepts. So, whoever is reading the public method doesn't need to like care about the details of how each step is implemented. So, maybe you're fetching something from an API, and then you're making a database call, and then you're doing some transformation and creating some new objects from it. Having all of the, like, HTTP calls and the ActiveRecord stuff and the, like, transformation all in the public method, yes, there's a lot of complexity happening there, and it makes that obvious. But it also makes it really hard to get a sense of what is happening. So, I like to say, "Hey, there are four steps. Let's wrap them all each in a private method then you can call all of those in the public method." The public method now sort of reads like a very simple sort of script. First, fetch data from the HTTP API, then fetch some data from the database, then apply this transformation, then create this object. And if I'm mostly caring about what this object does and not the how let's say I'm building some other objects that interact with this, that is the information I want to know. Where I care about the actual implementation of, oh, well, exactly how is the ActiveRecord stuff done when I'm doing internal changes to the object, that's when I care about those private methods. I think where it gets tricky, and I think that's the point that you were bringing up, is that if you write code in that way, it has to change the heuristics of how you read code to detect complexity. Because, oftentimes, I think a very classic heuristic for code complexity is just line length. If you have a 50-line method, probably there's a lot of complexity there. Maybe there's a lot of coupling. If it's a four-line method that is written at a high level of abstraction that just calls out to private methods, you scan over. You're like, oh, nice and clean. Nothing to see here. Move on. And so, that heuristic doesn't really hold up in a codebase where you're applying this single level of abstraction. Do you think that lines up with your experience? STEPHANIE: Hmm. As I was listening to you, I was like, yeah, like, that makes total sense to me. But then I also clearly disagreed a little bit [laughs] in my initial...kind of what I was saying initially. And I think it's because that single layer of abstraction was not very well defined. JOËL: Hmm. That's fair. STEPHANIE: Yeah. Where, in fact, it was actually misleading. Like, it wanted to be at that level of abstraction, but it really wasn't. 
Like, it was operating on things at, like, a lower level and wasn't designed with that kind of readability in mind. So, it was more, like, it was just hiding stuff a little bit, at least for me. And, I think, it certainly would have taken, like, more work to figure out what that code, like, really was meant to convey. It might have taken some refactoring to coalesce at that single level. And that was essentially kind of what I was showing in my talk as, like, how to get to saying, like, "Hey, we actually are operating in the lower level, but I don't think we need to." There was some amount of, like, looking at all of the how to figure out, like, oh, maybe these things we don't even need to expose in this class. And we kind of got to a place where those details weren't, like, needed in that class at all. So, it's one of those things where it's harder than it sounds [laughs]. JOËL: It's definitely an art. STEPHANIE: Yeah. JOËL: And I think what you're saying about some of the coupling being, like, scattered throughout the class, it's something that I see a lot with situations where you're coupled, not so much to, like, a single class, but to something side effectful. So, you're building some kind of integration with a third-party API, and you're going to have to make a lot of HTTP calls. And each of those might be individually simple, and they're all sort of maybe in different private methods or whatever, or they're interspersed among a larger chunk of logic. And that makes your tests really complicated. But there's no, like, one place you can point at and be like, ooh, that's the one place where there's a lot of complexity. What's happening here, though, is that your business object that's doing stuff is coupled to the network, and that coupling is going to force you to do some stubbing. It's going to force you to deal with a bunch of side effects that are non-deterministic in your code. And you used the word coalesce earlier that I really liked because I think that's often a situation where you do have to stand back and say, "Look, there's a lot of HTTP going on here. What if I coalesced it all into an object? Now I have two objects: one that's responsible for business logic, and one that's responsible for just the HTTP calls." And, all of a sudden, the tests just totally simplify. And we've removed some coupling, but that's not something that you would have seen just from reading the code. Because, as you were saying, it's sort of scattered in little bits and pieces throughout your file that don't necessarily catch your eye. STEPHANIE: Yeah. Which brings me to a blog post that I had found a lot of inspiration from in the talk that I'll link. It's called "That One Thing: Reduce Coupling for More Scalable and Sustainable Software." But it's actually about tests [laughs], even though it doesn't make an appearance in the title of the blog post at all. But this is where I kind of got the idea of necessary versus unnecessary coupling in test. Because I had never thought about how, yeah, like, when you write a test, you are very correctly coupling yourself to at least the method and class under test [laughs], if not also the arguments, right? Or anything else needed to construct what you're testing. And literally having that listed out for me in this blog post I think it's a...they use some examples in Java. And so, there's, like, a little bit more [laughs] setup involved. 
But I think they're like, yeah, these are six things that, like, it's mostly fine if you're coupled to these because that's kind of what needs to happen in a test. But, like, even having something to compare a test I wrote to just, like, okay, these are the things I know I need. And then, you can start to see when you've diverged from that list, when you are finding yourself coupled to some internals of your class. I really...that was actually, like, really helpful for me because, as we talked about earlier, like, it can be kind of communicated so abstractly. But here is, like, a very clear heuristic for when you should at least, like, start to pay attention or be like, oh, this is something that was needed to get the test to run but is now starting to feel a little unnecessary because it's not on this list. JOËL: That list reminds me, or the idea of a list of things to check out for when thinking about coupling, reminds me of the concept of connascence, which is a fancy word for almost a, like, categorization of different types of coupling because coupling comes in different flavors, some of which are tighter forms of coupling than others. And so, having that vocabulary has been really helpful for me when I'm looking at PRs and code review, or even when I'm refactoring my own code. Kind of like that list that you mentioned that you have, now I have some heuristics to look at that and say, "Oh, can I go from a connascence of position to a connascence of naming, and does that help me?" STEPHANIE: Yeah, I like that you mentioned the positional connascence because I also came across a really great metaphor for kind of things that need to change together, like, when that makes sense. And it was basically the idea of a dishwasher and a laundry machine [laughs]. I wish I could recall, like, what book this was from. But it was basically like, oh yeah, like, in theory, you're washing two things. So, maybe they are similar, but then you're like, no, actually, you want these to be a little bit separate because, you know, you don't want to wash your dishes and your clothes in the same machine. I don't know, maybe that exists [laughs], but I don't think it would do a very good job for either goal. And I think that was really helpful, for me, in imagining, like, the difference between kind of coupling and cohesion, like things that...even just imagining, like, kind of where I'm doing those things in the house, right? It's like, okay, that lives in a separate room. And, like, the kitchen is for the dishes, and that could be like, you know, a module if you will. And, like, laundry happens in the laundry room, and how to kind of just separate those things, even though they also do share some qualities, too. Like, they're both appliances, right? And so, that's the way that they are similar, but they're not the same. JOËL: You just mentioned the sort of keyword cohesion. And for our listeners who are not familiar with that term, it refers to an object sort of having one thing that it does well. Like, everything in that class sort of works towards the same goal, kind of similar to the idea of the single responsibility principle. So, in my earlier example, where we're sort of interspersing some business logic, a lot of HTTP requests, and pulling out an object that's focused on HTTP, like everything is based around that, now that object has higher cohesion because it's all doing one thing. 
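As a sketch of what that sort of extraction might look like in code (the class names and endpoint are made up, not from either talk): the HTTP concerns coalesce into one cohesive client, and the business object takes it as an injectable dependency, so tests can hand in a fake instead of stubbing the network.

require "json"
require "net/http"

# All of the HTTP concerns coalesced into one cohesive object.
class OrdersApiClient
  BASE_URL = "https://api.example.com"

  def fetch_order(id)
    JSON.parse(Net::HTTP.get(URI("#{BASE_URL}/orders/#{id}")))
  end
end

# The business object keeps only the business logic, with the client
# injected rather than scattered HTTP calls throughout.
class OrderSync
  def initialize(client: OrdersApiClient.new)
    @client = client
  end

  def sync(order)
    data = @client.fetch_order(order.id)
    order.update!(status: data.fetch("status"))
  end
end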
So, if you read classic object-oriented literature, the recommendations that you'll typically see are that objects should have high cohesion and low coupling. STEPHANIE: Yeah. Think of a dishwasher and a washing machine next time [laughs] you come across something like that. Because I feel like those are really great, like, real-life examples of that separation. JOËL: Did you go to Jared Norman's talk on the third day: "Undervalued: The Most Useful Design Pattern"? STEPHANIE: No, I didn't. Can you tell me about it? JOËL: It felt like he was addressing a lot of the same themes as you were but from more of a code perspective than a test perspective. Talking a lot about, again, forms of coupling, dependencies, and then, specifically, one of the tools that he focused on to reduce the coupling that we see is value objects and factory methods to construct those. So, for any of our listeners who, when the talks come out, watch Stephanie's talk and are like, "Wow, I would love to learn more about this," a great follow-up, Jared Norman's talk: "Undervalued: The Most Useful Design Pattern." STEPHANIE: Yeah, that's neat because I can see that being a solution to the hard-coded class names that we were talking about earlier. And I like how that is kind of, like, a progressive lesson in coupling a little bit. I'm really glad you shared that talk with me because now I'm excited to watch it when it comes out. And in general, I just love learning new vocabulary or finding new ways to speak about this topic with clarity. So, if any of our listeners have just additional mental models for coupling [laughs] different metaphors, different household appliances [laughs], or something like that, I would love to know. JOËL: You would like that, given that our first episode together was about "The Value Of Specialized Vocabulary." STEPHANIE: Yeah, it's clearly undervalued. JOËL: Haha, I see what you did there. STEPHANIE: Thank you. Thank you very much [laughs]. JOËL: On that terrible/wonderful pun, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
424: The Spectrum of Automated Processes for Your Dev Team

The Bike Shed

Play Episode Listen Later Apr 30, 2024 36:47


Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes. The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions. Transcript: AD: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like, hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky. And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error. STEPHANIE: That sounds reasonable to me. JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate: the idea that you use a parser like this to sort of transform data from more loose to more strict values, values where you can have more assumptions. And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the contract, create this class."
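In code, that kind of parser object might look something like this rough sketch (the names are hypothetical; it assumes the dry-validation gem, and it raises on bad input where a dry-rb implementation would more likely return a monadic result):

require "dry/validation"

User = Struct.new(:name, :email, keyword_init: true)

class UserContract < Dry::Validation::Contract
  params do
    required(:name).filled(:string)
    required(:email).filled(:string)
  end
end

# Pairs validation with construction in one step: loose attributes go
# in; a richer object (or an error) comes out.
class UserParser
  def initialize(contract: UserContract.new, klass: User)
    @contract = contract
    @klass = klass
  end

  def parse(attributes)
    result = @contract.call(attributes)
    raise ArgumentError, result.errors.to_h.to_s unless result.success?

    @klass.new(**result.to_h)
  end
end

UserParser.new.parse(name: "Stephanie", email: "stephanie@example.com")
# => #<struct User name="Stephanie", email="stephanie@example.com">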
And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice. STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that. JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate. Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern. STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain? JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that. STEPHANIE: Gotcha. JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors. But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that. STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that? JOËL: This is more about the developer experience. STEPHANIE: Got it. JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. 
So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape. And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that. So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated. STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense. JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on. STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts. JOËL: Yeah. STEPHANIE: Cool. JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system. STEPHANIE: Absolutely. That's very exciting. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet. JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor? STEPHANIE: Yeah, it's the latter. JOËL: Okay. 
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs]. JOËL: Oh no! STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears. JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me. STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky. And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder to and a lot more effort to [laughs] do in practice, I'm finding. JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk. STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective. JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell? STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it. JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th. 
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else. JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top? STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app. So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing. JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right? STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right? JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming. And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. 
If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior. STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that. JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work? STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally. JOËL: But it's an optional test failure, so you're allowed to let that test fail. STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically. JOËL: I see. STEPHANIE: Or an ignore list, yeah. JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync. STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs]. JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you. STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods. 
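A minimal sketch of the unit-level check Joël describes (the paths and constant names here are hypothetical): assert that every file in a directory has a matching entry in either list, so a forgotten registration fails the suite instead of breaking the admin.

RSpec.describe "admin presenter registry" do
  it "accounts for every model in either the allow list or the ignore list" do
    model_names = Dir[Rails.root.join("app/models/*.rb")].map do |path|
      File.basename(path, ".rb").camelize
    end

    registered = AdminPresenter::ALLOWED + AdminPresenter::IGNORED

    expect(registered).to match_array(model_names)
  end
end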
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different? STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done. JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant. STEPHANIE: Ooh. JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app. STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here? JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime? Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match. So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it. STEPHANIE: Hmm. That is very compelling to me. JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that? STEPHANIE: Yeah, I have. 
And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous. When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check. JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool. STEPHANIE: Ooh. JOËL: And what makes sense in a linter. STEPHANIE: What was the argument for the linter being a worse tool by doing that? JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article. STEPHANIE: I'll have to revisit it after the show [laughs]. JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes. STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team? JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you." STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen." I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking. JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. 
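For the focused-spec rule Stephanie mentions, here's one lightweight way it's often automated (a sketch: the :focus metadata and filter_run_when_matching are standard RSpec configuration; the CI guard is a common idiom rather than anything from the episode):

# spec/spec_helper.rb
RSpec.configure do |config|
  # Locally, `fit`/`fdescribe` run only the focused examples...
  config.filter_run_when_matching :focus

  # ...but on CI, any focused example that slipped into a merge fails
  # loudly instead of silently shrinking the suite.
  if ENV["CI"]
    config.before(:example, :focus) do
      raise "Remove `focus`/`fit` before merging"
    end
  end
end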
Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go. STEPHANIE: Interesting. It's kind of like the honor system, then [laughs]. JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there's invariance you want to hold. Maybe there's certain things we're, like, this should never get to production. Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide. STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum. JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariance you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool. STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics which measure code complexity in things like that. How would you feel if that were a check that was blocking your work? JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build. You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up." STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. 
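A tiny sketch of what that might look like in a Dangerfile (git.added_files and warn are part of Danger's Ruby API; the path and message are made up, echoing the presenter example from earlier):

# Dangerfile
new_models = git.added_files.select { |path| path.start_with?("app/models/") }

unless new_models.empty?
  warn(
    "New model(s) added: #{new_models.join(', ')}. " \
    "Should they also be registered in the admin presenter?"
  )
end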
I imagine it's not blocking, but more of a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice. JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that. STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers? JOËL: I'm going to throw a fancy phrase at you. STEPHANIE: Ooh, I'm ready. JOËL: Signal-to-noise ratio. STEPHANIE: Whoa, uh-huh. JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds. So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise, I think, is a useful, to go back to that word again, heuristic for managing whether a tool is still helpful. STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process. JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous for your product to go live with. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check. And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem.
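A Dangerfile is one place that split shows up in practice. In this sketch, warn and fail are Danger's real API (a non-blocking PR comment versus a failed build), while the specific checks, the line threshold, and the Brakeman invocation are illustrative:

    # Dangerfile
    # Non-blocking: nudge the author, but let the merge proceed
    warn("Big PR; consider splitting it up.") if git.lines_of_code > 500

    # Blocking: a security-class problem should fail the build
    fail("Brakeman found security warnings.") unless
      system("bundle exec brakeman --exit-on-warn --quiet")

Keeping the two classes of feedback visibly separate like this is one way to protect the signal-to-noise ratio discussed above.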
So, splitting that out and saying, "Look, we've got blocking and non-blocking because we recognize these are two classes of problems," can be a helpful solution to this problem. STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system? [laughter] JOËL: I may be too much of a developer. STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations. JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up. STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
422: Listener Topics Grab Bag

The Bike Shed

Play Episode Listen Later Apr 9, 2024 35:23


Joël conducted a thoughtbot mini-workshop on query plans, which Stephanie found highly effective due to its interactive format. They then discuss the broader value of interactive workshops over traditional talks for deeper learning. Addressing listener questions, Stephanie and Joël explore the strategic use of if and else in programming for clearer code, the importance of thorough documentation in identifying bugs, and the use of Postgres' EXPLAIN ANALYZE, highlighting the need for environment-specific considerations in query optimization. Episode mentioning query plans (https://bikeshed.thoughtbot.com/418) Query plan visualizer (https://explain.dalibo.com/) RailsConf 2024 (https://railsconf.org/) Episode 349: Unpopular Opinions (https://bikeshed.thoughtbot.com/349) Squint test (https://www.youtube.com/watch?v=8bZh5LMaSmE) Episode 405: Retro on Sandi Metz rules (https://bikeshed.thoughtbot.com/405) Structuring conditionals in a wizard (https://thoughtbot.com/blog/structuring-conditionals-in-a-wizard) Episode 417: Module docs (https://bikeshed.thoughtbot.com/417) Episode 416: Multidimensional numbers (https://bikeshed.thoughtbot.com/416) ruby-units gem (https://github.com/olbrich/ruby-units) Solargraph (https://marketplace.visualstudio.com/items?itemName=castwide.solargraph) parity (https://github.com/thoughtbot/parity) Transcript: STEPHANIE:  Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Just recently, I ran a sort of mini workshop for some colleagues here at thoughtbot to dig into the idea of query plans and, how to read them, how to use them. And, initially, this was going to be more of a kind of presentation style. And a colleague and I who were sharing this decided to go for a more interactive format where, you know, this is a, like, 45-minute slot. And so, we set it up so that we did a sort of intro to query plans in about 10 minutes then 15 minutes of breakout rooms, where people got a chance to have a query plan. And they had some sort of comprehension questions to answer about it. And then, 15 minutes together to have each group share a little bit about what they had discovered in their query plan back with the rest of the group, so trying to balance some understanding, some application, some group discussion, trying to keep it engaging. It was a pretty fun approach to sharing information like that. STEPHANIE: Yeah. I wholeheartedly agree. I got to attend that workshop, and it was really great. Now that I'm hearing you kind of talk about the three different components and what you wanted people attending to get out of it, I am impressed because [laughs] there is, like, a lot more thought, I think, that went into just participant engagement that reflecting on it now I'm like, oh yeah, like, I think that was really effective as opposed to just a presentation. Because you had, you know, sent us out into breakout rooms, and each group had a different query that they were analyzing. You had kind of set up links that had the query set up in the query analyzer. I forget what the tool was called that you used. JOËL: I forget the name of it, but we will link it in the show notes. STEPHANIE: Yeah. 
It was helpful for me, though, because, you know, I think if I were just to have learned about it in a presentation or even just looked at, you know, screenshots of it on a slide, that's different still from interacting with it and feeling more confident to use it next time I find myself in a situation where it might be helpful. JOËL: It's really interesting because that was sort of the goal of it was to make it a bit more interactive and then, hopefully, helping people to retain more information than just a straight up, like, presentation would be. I don't know how you feel, I find that often when I go to a place like, let's say, RailsConf, I tend to stay away from more of the workshop-y style events and focus more on the talks. Is that something that you do as well? STEPHANIE: Yeah. I have to confess that I've never attended a workshop [laughs] at a conference. I think it's partly my learning style and also partly just honestly, like, my energy level when I'm at the conference. I kind of just want to sit back. It's on my to-do list. Like, I definitely want to attend one just to see what it's like. And maybe that might even inspire me to want to create my own workshop. But it's like, once I'm in it, and, you know, like, everyone else is also participating, I'm very easily peer pressured [laughs]. So, in a group setting, I will find myself enjoying it a lot more. And I felt that kind of same way with the workshop you ran for our team. Though, I will say a funny thing that happened was that when I went out into my breakout group with another co-worker, and we were trying to grok this query that you gave us, we found out that we got the hardest one, the most complicated one [laughs] because there were so many things going on. There was, like, multiple, like, you know, unions, some that were, like, nested, and then just, like, a lot of duplication as well, like, some conditions that were redundant because of a different condition happening inside of, like, an inner statement. And yeah, we were definitely scratching our heads for a bit and were very grateful that we got to come back together as a group and be like, "Can someone please help? [laughs] Let's figure out what's going on here." JOËL: Sort of close that loop and like, "Hey, here's what we saw. What does everybody else see?" STEPHANIE: Yeah, and I appreciated that you took queries from actual client projects that you were working on. JOËL: Yeah, that was the really fun part of it was that these were not sort of made-up queries to illustrate a point. These were actual queries that I had spent some time trying to optimize and where I had had to spend a lot of time digging into the query plans to understand what was going on. And it sounds like, for you, workshops are something that is...they're generally more engaging, and you get more value out of them. But there's higher activation energy to get started. Does that sound right? STEPHANIE: Yeah, that sounds right. I think, like, I've watched so many talks now, both in person and on YouTube, that a lot of them are easily forgettable [laughs], whereas I think a workshop would be a lot more memorable because of that interactivity and, you know, you get out of it what you put in a little bit. JOËL: Yeah, that's true. Have you looked at the schedule for RailsConf 2024 yet? And are there any workshops on there that you're maybe considering or that maybe have piqued your interest? STEPHANIE: I have, in fact, and maybe I will check off attending a workshop [laughs] off my bucket list this year. 
There are two that I'm excited about. Unfortunately, they're both at the same time slot, so I -- JOËL: Oh no. You're going to have to choose. STEPHANIE: I know. I imagine I'll have to choose. But I'm interested in the Let's Extend Rails With A Gem by Noel Rappin and Vision For Inclusion Workshop run by Todd Sedano. The Rails gem one I'm excited about because it's just something that I haven't had to do really in my dev career so far, and I think I would really appreciate having that guidance. And also, I think that would be motivation to just get that, like, hands-on experience with it. Otherwise, you know, this is something that I could say that I would want to do and then never get [chuckles] around to it. JOËL: Right, right. And building a gem is the sort of thing that I think probably fits better in a workshop format than in a talk format. STEPHANIE: Yeah. And I've really appreciated all of Noel's content out there. I've found it always really practical, so I imagine that the workshop would be the same. JOËL: So, other than poring over the RailsConf schedule and planning your time there, what has been new for you this week? STEPHANIE: I have a really silly one [laughs]. JOËL: Okay. STEPHANIE: Which is, yesterday I went out to eat dinner to celebrate my partner's birthday, and I experienced, for the first time, robots [laughter] at this restaurant. So, we went out to Hot Pot, and I guess they just have these, like, robot, you know, little, small dish delivery things. They were, like, as tall as me, almost, at least, like, four feet. They were cat-themed. JOËL: [laughs] STEPHANIE: So, they had, like...shaped like cat...they had cat ears, and then there was a screen, and on the screen, there was, like, a little face, and the face would, like, wink at you and smile. JOËL: Aww. STEPHANIE: And I guess how this works is we ordered our food on an iPad, and if you ordered some, like, side dishes and stuff, it would come out to you on this robot cat with wheels. JOËL: Very fun. STEPHANIE: This robot tower cat. I'm doing a poor job describing it because I'm still apparently bewildered [laughs]. But yeah, I was just so surprised, and I was not as...I think I was more, like, shocked than delighted. I imagine other people would find this, like, very fun. But I was a little bit bewildered [laughs]. The other thing that was very funny about this experience is that these robots were kind of going down the aisle between tables, and the aisles were not quite big enough for, like, two-way traffic. And so, there were times where I would be, you know, walking up to go use the restroom, and I would turn the corner and find myself, like, face to face with one of these cat robot things, and, like, it's starting to go at me. I don't know if it will stop [laughs], and I'm the kind of person who doesn't want to find out. JOËL: [laughs] STEPHANIE: So, to avoid colliding with this, you know, food delivery robot, I just, like, ran away from it [laughs]. JOËL: You don't know if they're, like, programmed to yield or something like that. STEPHANIE: Listen, it did not seem like it was going to stop. JOËL: [laughs] STEPHANIE: It got, like, I was, you know, kind of standing there frozen in paralysis [laughs] for a little while. And then, once it got, I don't know, maybe two or three feet away from me, I was like, okay, like, this is too close for comfort [laughs]. So, that was my, I don't know, my experience at this robot restaurant. Definitely starting to feel like I'm in the, I don't know, is this the future? 
Someone, please let me know [laughs]. JOËL: Is this a future that you're excited or happy about, or does this future seem a little bit dystopian to you? STEPHANIE: I was definitely alarmed [laughter]. But I'm not, like, a super early adopter of new technology. These kinds of innovations, if you will, always surprise me, and I'm like, oh, I guess this is happening now [laughs]. And I will say that the one thing I did not enjoy about it is that there was not enough room to go around this robot. It definitely created just pedestrian traffic issues. So, perhaps this could be very cool and revolutionary, but also, maybe design robots for humans first. JOËL: Or design your dining room to accommodate your vision for the robots. I'm sure that flying cars and robots will solve all of this, for sure. STEPHANIE: Oh yeah [laughter]. Then I'll just have to worry about things colliding above my head. JOËL: And for the listeners who cannot see my face right now, that was absolutely sarcasm [laughs]. Speaking of our listeners, today we're going to look at a group of different listener questions. And if you didn't know that you could send in a question to have Stephanie and I discuss, you can do that. Just send us an email at hosts@bikeshed.fm. And sometimes, we put it into a regular episode. Sometimes, we combine a few and sort of make a listener question episode, which is what we're doing today. STEPHANIE: Yeah. It's a little bit of a grab bag. JOËL: Our first question comes from Yuri, and Yuri actually has a few different questions. But the first one is asking about Episode 349, which is pretty far back. It was my first episode when I was coming on with Chris and Steph, and they were sort of handing the baton to me as a host of the show. And we talked about a variety of hot takes or unpopular opinions. Yuri mentions, you know, a few that stood out to him: one about SPAs being not so great, one about how you shouldn't need to have a side project to progress in your career as a developer, one about developer title inflation, one about DRY and how it can be dangerous for a mid-level dev, avoiding let in RSpec specs, the idea that every if should come with an else, and the idea that developers shouldn't be included in design and planning. And Yuri's question is specifically the question about if statements, that every if should come with an else. Is that an opinion that we still have, and why do we feel that way? STEPHANIE: Yeah, I'm excited to get into this because I was not a part of that episode. I was a listener back then when it was still Steph and Chris. So, I am hopefully coming in with a different, like, additional perspective to add as well while we kind of do a little bit of a throwback. So, the one about every if should come with an else, that was an unpopular opinion of yours. Do you mind kind of explaining what that means for you? JOËL: Yeah. So, in general, Ruby is an expression-oriented language. So, if you have an if that does not include an else, it will implicitly return nil, which can burn you. There may be some super expert programmers out there that have never run into undefined method for nil:NilClass, but I'm still the kind of programmer who runs into that every now and then. And so, implicit nils popping up in my code is not something I generally like. I also generally like having explicit else for control flow purposes, making it a little bit clearer where flow of control goes and what are the actual paths through a particular method.
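A minimal sketch of that implicit nil (the order object and its methods are stand-ins):

    def discounted_price(order)
      if order.eligible_for_discount?
        order.price * 0.9
      end
      # With no else, an ineligible order makes this method return nil
    end

    discounted_price(ineligible_order).round(2)
    # => NoMethodError: undefined method `round' for nil:NilClass

    # Balancing the if with an else makes the second path explicit:
    def discounted_price(order)
      if order.eligible_for_discount?
        order.price * 0.9
      else
        order.price
      end
    end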
And then, finally, doing ifs and elses instead of doing them sort of inline or as trailing conditionals or things like that, by having them sort of all on each lines and balancing out. The indentation itself helps to scan the code a little bit more. So, deeper indentation tells you, okay, we're, like, nesting multiple conditions, or something like that. And so, it makes it a little bit easier to spot complexity in the code. You can apply, and I want to say this is from Sandi Metz, the squint test. STEPHANIE: Yeah, it is. JOËL: Where you just kind of, like, squint at your code so you're not looking at the actual characters, and more of the structure, and the indentation is actually a friend there rather than something to fight. So, that was sort of the original, I think, idea behind that. I'm curious, in your experience, if you would like to balance your conditionals, ifs with something else, or if you would like to do sort of hanging ifs. STEPHANIE: Hanging ifs, I like that phrase that you just coined there. I agree with your opinion, and I think it's especially true if you're returning values, right? I mean, in Ruby, you kind of always are. But if you are caring about return values, like you said, to avoid that implicit nil situation, I find, especially if you're writing tests for that code, it's really easy, you know, if you spot that condition, you're like, okay, great. Like, this is a path I need to test. But then, oftentimes, you don't test that implicit path, and if you don't enter the condition, then what happens, right? So, I think that's kind of what you're referring to when you talk about both. It's, like, easier to spot in terms of control flow, like, all the different paths of execution, as well as, yeah, like, saving you the headaches of some bugs down the line. One thing that I thought about when I was kind of revisiting that opinion of yours is the idea of like, what are you trying to communicate is different or special about this condition when you are writing an if statement? And, in my head, I kind of came up with a few different situations you might find yourself in, which is, one, where you truly just have, like, a special case, and you're treating that completely differently. Another when you have more of a, like, binary situation, where it's you want to kind of highlight either...more of a dichotomy, right? It's not necessarily that there is a default but that these are two opposite things. And then, a third situation in which you have multiple conditions, but you only happen to have two right now. JOËL: Interesting. And do you think that, like, breaking down those situations would lead you to pick different structures for writing your conditionals? STEPHANIE: I think so. JOËL: Which of those scenarios do you think you might be more likely to reach for an if that doesn't have an else that goes with it? STEPHANIE: I think that first one, the special case one. And in Yuri's email, he actually asked, as a follow-up, "Do we avoid guard clauses as a result of this kind of heuristic or rule?" And I think that special case situation is where a guard clause would shine because you are highlighting it by putting it at the top of a method, and then saying like, you know, "Bail out of this" or, like, "Return this particular thing, and then don't even bother about the rest of this code." JOËL: I like that. And I think guard clauses they're not the first thing I reach for, but they're not something I absolutely avoid. I think they need to be used with care. 
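Here is the shape being described, as a sketch with a made-up domain: guard clauses at the top that bail fast, so the body can assume well-formed input.

    def schedule_delivery(order)
      # Guard clauses: check edge cases at the top and bail immediately
      raise ArgumentError, "order has no address" if order.address.nil?
      return if order.cancelled?

      # The body can now assume its input is in the correct shape
      order.delivery_windows.each(&:reserve!)
    end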
Like you said, they have to be in the top of your method. If you're adding returns and things that break out of your method, deep inside a conditional somewhere, 20 lines into your method, you don't get to call that a guard clause anymore. That's something else entirely. I think, ideally, guard clauses are also things that will break out of the method, so they're maybe raising exception. Maybe they're returning a value. But they are things that very quickly check edge cases and bail so that the body of the method can focus on expecting data in the correct shape. STEPHANIE: I have a couple more thoughts about this; one is I'm reminded of back when we did that episode on kind of retroing Sandi Metz's Rules For Developers. I think one of the rules was: methods should only be five lines of code. And I recall we'd realized, at least I had realized for the first time, that if you write an if-else condition in Ruby, that's exactly five lines [laughs]. And so, now that I'm thinking about this topic, it's cool to see that a couple of these rules converge a little bit, where there's a bit of explicitness in saying, like, you know, if you're starting to have more conditions that can't just be captured in a five-line if-else statement, then maybe you need something else there, right? Something perhaps like polymorphic or just some way to have branched earlier. JOËL: That's true. And so, even, like, you were talking about the exceptional edge cases where you might want to bail. That could be a sign that your method is doing too much, trying to like, validate inputs and also run some sort of algorithm. Maybe this needs to be some sort of, like, two-step thing, where there's almost, like, a parsing phase that's handled by a different object or a different method that will attempt to standardize your inputs and raise the appropriate errors and everything. And then, your method that has the actual algorithm or code that you're trying to run can just assume that its inputs are in the correct shape, kind of that pushing the uncertainty to the edges. And, you know, if you've only got one edge case to check, maybe it's not worth to, like, build this in layers, or separate out the responsibilities, or whatever. But if you're having a lot, then maybe it does make sense to say, "Let's break those two responsibilities out into two places." STEPHANIE: Yeah. And then, the one last kind of situation I've observed, and I think you all talked about this in the Unpopular Opinions episode, but I'm kind of curious how you would handle it, is side effects that only need to be applied under a certain condition. Because I think that's when, if we're focusing less on return values and more just on behavior, that's when I will usually see, like, an if something, then do this that doesn't need to happen for the other path. JOËL: Yes. I guess if you're doing some sort of side effect, like, I don't know, making a request to an API or writing to a file or something, having, like, else return nil or some other sentinel value feels a little bit weird because now you're caring about side effects rather than return values, something that you need to keep thinking of. And that's something where I think my thing has evolved since that episode is, once you start having multiple of these, how do they compose together? So, if you've got if condition, write to a file, no else, keep going. New if condition, make a request to an API endpoint, no else, continue. 
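As a sketch, those two independent conditions might look like this (the report and ApiClient names are made up):

    def process(report)
      # Independent condition: maybe write to a file...
      File.write("report.txt", report.body) if report.persist?

      # ...and, independently, maybe call an API
      ApiClient.notify(report) if report.notify?
    end

    # Neither, either, or both side effects can run: 2 x 2 = 4 paths.
    # Each additional independent condition doubles that count again;
    # five of them means 2**5 = 32 paths through one method.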
What I've started calling independent conditions now, you have to think about all the different ways that they can combine, and what you end up having is a bit of a combinatorial explosion. So, here we've got two potential actions: writing to a file, making a request to an API. And we could have one or the other, or both, or neither could happen, depending on the inputs to your method, and maybe you actually want that, and that's cool. Oftentimes, you didn't necessarily want all of those, especially once you start going to three, four, five. And now you've got that, you know, explosion, like, two to the five. That's a lot of paths through your method. And you probably didn't really need that many. And so, that can get really messy. And so, sometimes the way that an if and an else work where those two paths are mutually exclusive actually cuts down on the total number of paths through your method. STEPHANIE: Hmm, I like that. That makes a lot of sense to me. I have definitely seen a lot of, like, procedural code, where it becomes really hard to tell how to even start relating some of these concepts together. So, if you happen to need to run a side effect, like writing to a file or, I don't know, one common one I can think of is notifying something or someone in a particular case, and maybe you put that in a condition. But then there's a different branching path that you also need to kind of notify someone for a similar reason, maybe even the same reason. It starts to become hard to connect, like, those two reasons. It's not something that, like, you can really scan for or, like, necessarily make that connection because, at that point, you're going down different paths, right? And there might be other signals that are kind of confusing things along the way. And it makes it a lot harder, I think, to find a shared abstraction, which could ultimately make those really complicated nested conditions a little more manageable or just, like, easier to understand at a certain complexity. I definitely think there is a threshold. JOËL: Right. And now you're talking about nested versus non-nested because when conditions are sort of siblings of each other, an if-else behaves differently from two ifs without an else. I think a classic situation where this pops up is when you're structuring code for a wizard, a multi-step form. And, oftentimes, people will have a bunch of checks. They're like, oh, if this field is present, do these things. If this field is present, do these things. And then, it becomes very tricky to know what the flow of control is, what you can expect at what moment, and especially which actions might get shared across multiple steps. Is it safe to refactor in one place if you don't want to break step three? And so, learning to think about the different paths through your code and how different conditional structures will impact that, I think, was a big breakthrough for me in terms of taking the next logical step in terms of thinking, when do I want to balance my ifs and when do I not want to? I wrote a whole article on the topic. We'll link it in the show notes. So, Yuri, thanks for a great question, bringing us back into a classic developer discussion. Yuri also asks or gives us a bit of a suggestion: What about revisiting this topic and doing an episode on hot takes or unpopular topics? Is that something that you'd be interested in, Stephanie? STEPHANIE: Oh yeah, definitely, because I didn't get to, you know, share my hot topics the last episode [laughs]. 
[inaudible 24:23] JOËL: You just got them queued up and ready to go. STEPHANIE: Yeah, exactly. So, yeah, I will definitely be brainstorming some spicy takes for the show. JOËL: So, Yuri, thanks for the questions and for the episode suggestion. STEPHANIE: So, another listener, Kevin, wrote in to us following up from our episode on Module Docs and about a different episode about Multi-dimensional Numbers. And he mentioned a gem that he maintains. It's called Ruby Units. And it basically handles the nitty gritty of unit conversions for you, which I thought was really neat. He mentioned that it has a numeric class, and it knows how to do math [laughs], which I would find really convenient because that is something that I have been grateful not to have to really do since college [laughs], at least those unit conversions and all the things that I'd probably learned in math and physics courses [laughs]. So, I thought that was really cool, definitely is one to check out if you frequently work with units. It seemed like it would be something that would make sense for a domain that is more science or deals in that kind of domain. JOËL: I'm always a huge fan of anything that tags raw numbers that you're working with with a quantity rather than just floating raw numbers around. It's so easy to make a mistake to either treat a number as a quantity you didn't think of, or make some sort of invalid operation on it, or even to think you have a value in a different size than you do. You think you're dealing with...you know you have a time value, but you think it's in seconds. It's actually in milliseconds. And then, you get off by some big factor. These are all mistakes that I have personally made in my career, so leaning on a library to help avoid those mistakes, have better information hiding for the things that really aren't relevant to the work that I'm trying to do, and also, kind of reify these ideas so that they have sort of a name, and they're, like, their own object, their own thing that we can interact with in the app rather than just numbers floating around, those are all big wins from my perspective. STEPHANIE: I also just thought of a really silly use case for this that is, I don't know, maybe I'll have to experiment with this. But every now and then, I find the need to have to convert a unit, and I just pop into Google, and I'm like, please give me, you know, I'll search for 10 kilometers in miles or something [laughs]. But then I have to...sometimes Google will figure it out for me, and sometimes it will just list me with a bunch of weird conversion websites that all have really old-school UI [laughs]. Do you know what I'm talking about here? Anyway, I would be curious to see if I could use this gem as a command-line interface [laughs] for me without having to go to my browser and roll the dice with freecalculator.com or something like that [laughs]. JOËL: One thing that's really cool with this library that I saw is the ability to define your own units, and that's a thing that you'll often encounter having to deal with values that are maybe not one of the most commonly used units that are out there, dealing with numbers that might mean a thing that's very particular to your domain. So, that's great that the library supports that. I couldn't see if it supports multi-dimensional units. That was the episode that inspired the comment. But either way, this is a really cool library. And thank you, Kevin, for sharing this with us. 
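For a taste of the API, here is a sketch based on the gem's documented usage; double-check the details against the ruby-units README:

    require "ruby-units"

    # Conversions and unit-aware math
    RubyUnits::Unit.new("10 km").convert_to("miles")           # => ~6.21 mi
    RubyUnits::Unit.new("1 km") + RubyUnits::Unit.new("500 m") # => 1.5 km

    # Defining a unit specific to your domain
    RubyUnits::Unit.define("fortnight") do |fortnight|
      fortnight.definition = RubyUnits::Unit.new("2 weeks")
    end
    RubyUnits::Unit.new("1 fortnight").convert_to("days")      # => 14 days

That last idea would even cover Stephanie's command-line wish, as a one-liner along the lines of: ruby -r ruby-units -e 'puts RubyUnits::Unit.new("10 km").convert_to("miles")'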
STEPHANIE: Kevin also mentions that he really enjoys using YARD docs. And we had done that whole episode on Module Docs and your experience writing them. So, you know, your people are out there [laughs]. JOËL: Yay. STEPHANIE: And we talked about this a little bit; I think that writing the docs, you know, on one hand, is great for future readers, but, also, I think has the benefit of forcing the author to really think about their inputs and outputs, as Kevin mentions. He's found bugs by simply just going through that process in designing his code, and also recommends Solargraph and Solargraph's VSCode extension, which I suspect really kind of makes it easy to navigate a complex codebase and kind of highlight just what you need to know when working with different APIs for your classes. So, I recently kind of switched to the Ruby LSP, built by Shopify, but I'm currently regretting it because nothing is working for me right now. So, that might be the push that I need [laughs] to go back to using Solargraph. JOËL: It's interesting that Kevin mentions finding bugs while writing docs because that has absolutely been my experience. And even in this most recent round, I was documenting some code that was just sort of there. It wasn't new code that I was writing. And so, I had given myself the rule that these would be documentation-only PRs, no code changes. And I found some weird code, and I found some code that I'm 98% sure is broken. And I had to have the discipline to just put a notice in the documentation to be like, "By the way, this is how the method should work. I'm pretty sure it's broken," and then, maybe come back to it later to fix it. But it's amazing how trying to document code, especially code that you didn't write yourself, really forces you to read it in a different way, interact with it in a different way, and really, like, understand it in a deep way that surprised me a little bit the first time I did it. STEPHANIE: That's cool. I imagine it probably didn't feel good to be like, "Hey, I know that this is broken, and I can't fix it right now," but I'm glad you did. That takes a lot of, I don't know, I think, courage, in my opinion [laughs], to be like, "Yeah, I found this, and I'm going to, you know, like, raise my hand acknowledging that this is how it is," as opposed to just hiding behind a broken functionality that no one [laughs] has paid attention to. JOËL: And it's a thing where if somebody else uses this method and it breaks in a way, and they're like, "Well, the docs say it should behave like this," that would be really frustrating. If the docs say, "Hey, it should behave like this, but it looks like it's broken," then, you know, I don't know, I would feel a little bit vindicated as a person who's annoyed at the code right now. STEPHANIE: For sure. JOËL: Finally, we have a message from Tim about using Postgres' EXPLAIN ANALYZE. Tim says, "Hey, Joël, in the last episode, you talked a bit about PG EXPLAIN ANALYZE. As you stated, it's a great tool to help figure out what's going on with your queries, but there is a caveat you need to keep in mind. The query planner uses statistics gathered on the database when making decisions about how to fetch records. If there's a big difference between your dev or staging database and production, the query may make different decisions. For example, if a table has a low number of records in it, then the query planner may just do a table scan, but in production, it might use an index.
Also, keep in mind that after a schema changes, it may not know about new indexes or whatever unless an explicit ANALYZE is done on the table." So, this is really interesting because, as Tim mentions, EXPLAIN ANALYZE doesn't behave exactly the same in production versus in your local development environment. STEPHANIE: When you were trying to optimize some slow queries, where were you running the ANALYZE command? JOËL: I used a combination. I mostly worked off of production data. I did a little bit on a staging database that had not the same amount of records and things. That was pretty significant. And so, I had to switch to production to get realistic results. So, yes, I encountered this kind of firsthand. STEPHANIE: Nice. For some reason, this comment also made me think of..., and I wanted to plug a thoughtbot shell command that we have called Parity, which lets you basically download your production database into your local dev database if you use Heroku. And that has come in handy for me, obviously, in regular development, but would be really great in this use case. JOËL: With all of the regular caveats around security, and PII, and all this stuff that come with dealing with production data. But if you're running real productions on production, you should be cleared and, like, trained for access to all of that. I also want to note that the queries that you all worked with on Friday are also from the production database. STEPHANIE: Really? JOËL: So, you got to see what it actually does, what the actual timings were. STEPHANIE: I'm surprised by that because we were using, like, a web-based tool to visualize the query plans. Like, what were you kind of plugging into the tool for it to know? JOËL: So, the tool accepts a query plan, which is a text output from running a SQL query. STEPHANIE: Okay. So, it's just visualizing it. JOËL: Correct. Yeah. So, you've got this query plan, which comes back as this very intimidating block of, like, text, and arrows, and things like that. And you plug it into this web UI, and now you've got something that is kind of interactive, and visual, and you can expand or collapse nodes. And it gives you tooltips on different types of information and where you're spending the most time. So, yeah, it's just a nicer way to visualize that data that comes from the query plan. STEPHANIE: Gotcha. That makes sense. JOËL: So, that's a very important caveat. I don't think that's something that we mentioned on the episode. So, thank you, Tim, for highlighting that. And for all of our listeners who were intrigued by leaning into EXPLAIN ANALYZE and query plan viewers to debug your slow queries, make sure you try it out in production because you might get different results otherwise. STEPHANIE: So, yeah, that about wraps up our listener topics in recent months. On that note, Joël, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? 
If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
420: Test Database Woes

The Bike Shed

Play Episode Listen Later Mar 26, 2024 28:16


Joël shares his recent project challenge with Tailwind CSS, where classes weren't generating as expected due to the dynamic nature of Tailwind's CSS generation and pruning. Stephanie introduces a personal productivity tool, a "thinking cap," to signal her thought process during meetings, which also serves as a physical boundary to separate work from personal life. The conversation shifts to testing methodologies within Rails applications, leading to an exploration of testing philosophies, including developers' assumptions about database cleanliness and their impact on writing tests. Avdi's classic post on how to use database cleaner (https://avdi.codes/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/) RSpec change matcher (https://rubydoc.info/gems/rspec-expectations/RSpec%2FMatchers:change) Command/Query separation (https://martinfowler.com/bliki/CommandQuerySeparation.html) When not to use factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot) Why Factories? (https://thoughtbot.com/blog/why-factories) Transcript:  STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I'm working on a new project, and this is a project that uses Tailwind CSS for its styling. And I ran into a bit of an annoying problem with it just getting started, where I was making changes and adding classes. And they were not changing the things I thought they would change in the UI. And so, I looked up the class in the documentation, and then I realized, oh, we're on an older version of the Tailwind Rails gem. So, maybe we're using...like, I'm looking at the most recent docs for Tailwind, but it's not relevant for the version I'm using. Turned out that was not the problem. Then I decided to use the Web Inspector and actually look at the element in my browser to see is it being overwritten somehow by something else? And the class is there in the element, but when I look at the CSS panel, it does not show up there at all or having any effects. And that got me scratching my head. And then, eventually, I figured it out, and it's a bit of a facepalm moment [laughs]. STEPHANIE: Oh, okay. JOËL: Because Tailwind has to, effectively, generate all of these, and it will sort of generate and prune the things you don't need and all of that. They're not all, like, statically present. And so, if I was using a class that no one else in the app had used yet, it hadn't gotten generated. And so, it's just not there. There's a class on the element, but there's no CSS definition tied to it, so the class does nothing. What you need to do is there's a rake task or some sort of task that you can run that will generate things. There's also, I believe, a watcher that you can run, some sort of, like, server that will auto-generate these for you in dev mode. I did not have that set up. So, I was not seeing that new class have any effect. Once I ran the task to generate things, sure enough, it worked. And Tailwind works exactly how the docs say they do. But that was a couple of hours of my life that I'm not getting back. STEPHANIE: Yeah, that's rough. Sorry to hear. I've also definitely gone down that route of like, oh, it's not in the docs. The docs are wrong. Like, do they even know what they're talking about? 
I'm going to fix this for everyone. And similarly have been humbled by a facepalm solution when I'm like, oh, did I yarn [laughs]? No, I didn't [laughs]. JOËL: Uh-huh. I'm curious, for you, when you have sort of moments where it's like the library is not behaving the way you think it is, is your default to blame yourself, or is it to blame the library? STEPHANIE: [laughs]. Oh, good question. JOËL: And the follow-up to that is, are you generally correct? STEPHANIE: Yeah. Yep, yep, yep. Hmm, I will say I externalize the blame, but I will try to at least do, like, the basic troubleshooting steps of restarting my server [laughter], and then if...that's as far as I'll go. And then, I'll be like, oh, like, something must be wrong, you know, with this library, and I turn to Google. And if I'm not finding any fruitful results, again, you know, one path could be, oh, maybe I'm not Googling correctly, but the other path could be, maybe I've discovered something that no one else has before. But to your follow-up question, I'm almost, like, always wrong [laughter]. I'm still waiting for the day when I, like, discover something that is an actual real problem, and I can go and open an issue [chuckles] and, hopefully, be validated by the library author. JOËL: I think part of what I heard is that your debugging strategy is basic, but it's not as basic as Joël's because you remember to restart the server [chuckles]. STEPHANIE: We all have our days [laughter]. JOËL: Next time. So, Stephanie, what is new in your world? STEPHANIE: I'm very excited to share this with you. And I recognize that this is an audio medium, so I will also describe the thing I'm about to show you [laughs]. JOËL: Oh, this is an object. STEPHANIE: It is an object. I got a hat [laughs]. JOËL: Okay. STEPHANIE: I'm going to put it on now. It's a cap that says "Thinking" on it [laughs] in, like, you know, fun sans serif font with a little bit of edge because the thinking is kind of slanted. So, it is designy, if you will. It's my thinking cap. And I've been wearing it at work all week, and I love it. As a person who, in meetings and, you know, when I talk to people, I have to process before I respond a lot of the time, but that has been interpreted as, you know, maybe me not having anything to say or, you know, people aren't sure if I'm, you know, still thinking or if it's time to move on. And sometimes I [chuckles], you know, take a long time. My brain is just spinning. I think another funny hat design would be, like, the beach ball, macOS beach ball. JOËL: That would be hilarious. STEPHANIE: Yeah. Maybe I need to, like, stitch that on the back of this thinking cap. Anyway, I've been wearing it at work in meetings. And then, when I'm just silently processing, I'll just point to my hat and signal to everyone what's [laughs] going on. And it's also been really great for the end of my work day because then I take off the hat, and because I've taken it off, that's, like, my signal, you know, I have this physical totem that, like, now I'm done thinking about work, and that has been working. JOËL: Oh, I love that. STEPHANIE: Yeah, that's been working surprisingly well to kind of create a bit more of a boundary to separate work thoughts and life thoughts. JOËL: Because you are working from home and so that boundary between professional life and personal life can get a little bit blurry. STEPHANIE: Yeah. I will say I take it off and throw it on the floor kind of dramatically [laughter] at the end of my work day. So, that's what's new. 
It had a positive impact on my work-life balance. And yeah, if anyone else has the problem of people being confused about whether you're still thinking or not, recommend looking into a physical thinking cap. JOËL: So, you are speaking at RailsConf this spring in Detroit. Do you plan to bring the thinking cap to the conference? STEPHANIE: Oh yeah, absolutely. That's a great idea. If anyone else is going to RailsConf, find me in my thinking cap [laughs]. JOËL: So, this is how people can recognize Bikeshed co-host Stephanie Minn. See someone walking around with a thinking cap. STEPHANIE: Ooh. thinkingbot? JOËL: Ooh. STEPHANIE: Have I just designed new thoughtbot swag [laughter]? We'll see if this catches on. JOËL: So, we were talking recently, and you'd mentioned that you were facing some really interesting dilemmas when it came to writing tests and particularly how tests interact with your test database. STEPHANIE: Yeah. So, I recently, a few weeks ago, joined a new client project and, you know, one of the first things that I do is start to run those tests [laughs] in their codebase to get a sense of what's what. And I noticed that they were taking quite a long time to get set up before I even saw any progress in terms of successes or failures. So, I was kind of curious what was going on before the examples were even run. And when I tailed the logs for the tests, I noticed that every time that you were running the test suite, it would truncate all of the tables in the test database. And that was a surprise to me because that's not a thing that I had really seen before. And so, basically, what happens is all of the data in the test database gets deleted using this truncation strategy. And this is one way of ensuring a clean slate when you run your tests. JOËL: Was this happening once at the beginning of the test suite or before every test? STEPHANIE: It was good that it was only running once before the test suite, but since, you know, in my local development, I'm running, like, a file at a time or sometimes even just targeting a specific line, this would happen on every run in that situation and was just adding a little bit of extra time to that feedback loop in terms of just making sure your code was working if that's part of your workflow. JOËL: Do you know what version of Rails this project was in? Because I know this was popular in some older versions of Rails as a strategy. STEPHANIE: Yeah. So, it is Rails 7 now, recently upgraded to Rails 7. It was on Rails 6 for a little while. JOËL: Very nice. I want to say that truncation is generally not necessary as of Rails...I forget if it's 5 or 6. But back in the day, specifically for what are now called system tests, the sort of, like, Capybara UI-driven browser tests, you had, effectively, like, two threads that were trying to access the database. And so, you couldn't have your test data wrapped in a transaction the way you would for unit tests because then the UI thread would not have access to the data that had been created in a transaction just for the test thread. And so, people would use tools like Database Cleaner to use a truncation strategy to clear out everything between tests to allow a sort of clean slate for these UI-driven feature specs. And then, I want to say it's Rails 5, it may have been Rails 6 when system tests were added. And one of the big things there was that they now could, like, share data in a transaction instead of having to do two separate threads and one didn't have access to it. 
And all of a sudden, now you could go back to transactional fixtures the way that you could with unit tests and really take advantage of something that's really nice and built into Rails. STEPHANIE: That's cool. I didn't know that about system tests and that kind of shift happening. I do think that, in this case, it was one of those situations where, in the past, the database truncation, in this case, particular using the Database Cleaner gem was necessary, and that just never got reassessed as the years went by. JOËL: That's one of the classic things, right? When you upgrade a Rails app over multiple versions, and sometimes you sort of get a new feature that comes in for free with the new version, and you might not be aware of it. And some of the patterns in the app just kind of keep going. And you don't realize, hey, this part of the app could actually be modernized. STEPHANIE: So, another interesting thing about this testing situation is that I learned that, you know, if you ran these tests, you would experience this truncation strategy. But the engineering team had also kind of played around with having a different test setup that didn't clean the database at all unless you opted into it. JOËL: So, your test database would just...each test would just keep writing to the database, but they're not wrapped in transactions. Or they are wrapped in transactions, but you may or may not have some additional data. STEPHANIE: The latter. So, I think they were also using the transaction strategy there. But, you know, there are some reasons that you would still have some data persisted across test runs. I had actually learned that the use transactional fixtures config for RSpec doesn't roll back any data that might have been created in a before context hook. JOËL: Yep, or a before all. Yeah, the transaction wraps the actual example, but not anything that happens outside of it. STEPHANIE: Yeah, I thought that was an interesting little gotcha. So, you know, now we had these, like, two different ways to run tests. And I was chatting with a client developer about how that came to be. And we then got into an interesting conversation about, like, whether or not we each expect a clean database in the first place when we write our tests or when we run our tests, and that was an area that we disagreed. And that was cool because I had not really, like, thought about like, oh, how did I even arrive at this assumption that my database would always be clean? I think it was just, you know, from experience having only worked in Rails apps of a certain age that really got onto the Database [laughs] Cleaner train. But it was interesting because I think that is a really big assumption to make that shapes how you then approach writing tests. JOËL: And there's kind of a couple of variations on that. I think the sort of base camp approach of writing Rails with fixtures, you just sort of have, for the most part, an existing set of data that's there that you maybe layer on a few extra things on. But there's base level; you just expect a bunch of data to exist in your test database. So, it's almost going off the opposite assumption, where you can always assume that certain things are already there. Then there's the other extreme of, like, you always assume that it's empty. And it sounds like maybe there's a position in the middle of, like, you never know. There may be something. There may not be something, you know, spin the wheel. STEPHANIE: Yeah. 
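That before-hook gotcha, sketched out in code (the models are made up, and use_transactional_fixtures is assumed to be on):

    RSpec.describe Order do
      before(:context) do
        # Runs once, outside the per-example transaction, so this record
        # survives the test run unless you clean it up yourself
        @shared_user = User.create!(name: "Stephanie")
      end

      after(:context) do
        @shared_user.destroy # manual cleanup is on you
      end

      it "rolls back records created inside examples" do
        Order.create!(user: @shared_user) # this one is rolled back
      end
    end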
I guess I was surprised that, you know, that was just a question that I never really asked myself prior to this conversation, but it could feel like different testing philosophies. But yeah, I was very interested in this, you know, kind of opinion that was a little bit different from mine about if you assume that your database, your test database, is not clean, that kind of perhaps nudges you in the direction of writing tests that are less coupled to the database if they don't need to be. JOËL: What does coupling to the database mean in this situation? STEPHANIE: So, I'm thinking about Rails tests that might be asserting on a change in database behavior, so the change matcher in RSpec is one that I see maybe sometimes used when it doesn't need to be used. And we're expecting, like, the count of the number of records for a model to have changed after doing some work, right? JOËL: And the change matcher from RSpec is one that allows you to not care whether there are existing records or not. It sort of insulates you from that. STEPHANIE: That's true. Though I guess I was thinking almost like, what if there was some return value to assert on instead? And would that kind of help you separate some side effects from methods that might be doing too much? And kind of when I start to see tests that have both or are asserting on something being returned, and then also something happening, that's one way of, like, figuring out what kind of coupling is going on inside this test.
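As a sketch of that contrast, with a made-up Signup service: the first assertion is coupled to database state, while the second asserts on a return value (and assumes the result responds to success?).

    # Coupled to the database: asserts on a side effect
    expect { Signup.call(email: "new@example.com") }
      .to change(User, :count).by(1)

    # Less coupled: asserts on what the method returns
    result = Signup.call(email: "new@example.com")
    expect(result).to be_success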
But if you had random data in your database, you might, in some tests, have four items in the list, and in other tests have five. And that's just going to be a flaky test, and that's going to be incredibly painful. So, while you're not asserting on the database, having control over it during sort of test setup, I think, does impact the way you assert. STEPHANIE: Yeah, that makes sense. I was suddenly just thinking about, like, how that exercise can actually tell you perhaps, like, when it is important to, in your test setup, be persisting real records as opposed to how much you can get away with, like, not interacting with it because, like, you aren't testing at that integration level. JOËL: That brings up a good point because a lot of tests probably you might need models, but you might not need persisted models to interact with them, if you're testing a method on a model that just does things based off its internal state and not any of the ActiveRecord database queries, or if you have some other service or something that consumes a model that doesn't necessarily need to query. There's a classic blog post on the thoughtbot blog about when not to use FactoryBot. And, you know, we are the makers of FactoryBot. It helps set up records in your database for testing. And people love to use it all the time. And we wrote an article about why, in many cases, you don't need to create something in the database. All you need is just something in memory, and that's going to be much faster than using FactoryBot because talking to the database is expensive. STEPHANIE: Yeah, and I think we can see that in the shift from even, like, fixtures to factories as well, where test data was only persisted as needed, in individual tests, rather than seeding it and having all of those records for your entire test run. And it's cool to see that continuing, you know, that idea further of, like, okay, now we have this new, popular tool that reduces some of that. But also, in most cases, we still don't need...it's still too much. JOËL: And from a performance perspective, it's a bit of a see-saw in that fixtures are a lot faster because they get inserted once at the beginning of your test run. So, a SQL execution at the beginning of a test run and then every test after that is just doing its thing: maybe creating a record inside of a transaction, maybe not creating any records at all. And so, it can be a lot faster as opposed to using FactoryBot where you're creating records one at a time. Every create call in a test is a round trip to the database, and those are expensive. So, FactoryBot tests tend to be more expensive than those that rely on fixtures. But you have the advantage of more control over what data is present and sort of more locality because you can see what has been created at the test level. But then, if you decide, hey, this is a test where I can just create records in memory, that's probably the best of all worlds in that you don't need anything created ahead with fixtures. You also don't need anything to be inserted using FactoryBot because you don't even need the database for this test.
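A sketch of that create-versus-in-memory distinction (User and full_name are illustrative names):

    user = FactoryBot.create(:user) # INSERT INTO users ..., a database round trip
    user = FactoryBot.build(:user)  # User.new with factory attributes, no SQL at all

    # If the method under test only reads in-memory state, the plain
    # constructor is often all you need:
    user = User.new(first_name: "Ada", last_name: "Lovelace")
    expect(user.full_name).to eq("Ada Lovelace")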
There are strategies with FactoryBot that will allow you to also, like, build_stubbed or just build in memory. So, there's a few different variations that will, like, partially do things for you. But oftentimes, you can just new up an object, and that's what I will often start with. In many cases, I will already know what I'm trying to do. And so, I might not go through the steps of, oh, new up an object. Oh no, I'm getting an "I can't do the thing I need to do" error. Now, I need to write to the database. So, if I'm testing, let's say, an ActiveRecord scope that's filtering down a series of records, I know that's a wrapper around a database query. I'm not going to start by newing up some records and then sort of accidentally discovering, oh yeah, it does need to talk to the database because that was pretty clear to me from the beginning. STEPHANIE: Yeah. Like, you have your mental shortcuts that you do. I guess I asked that question because I wonder if that is a good heuristic to share with maybe developers who are trying to figure out, like, should they create persisted records or, you know, use just a regular instance in memory or, I don't know, even [laughs] use, like, a double [laughs]? JOËL: Yeah, I've done that quite a bit as well. I would say maybe my heuristic is, is the method under test going to need to talk to the database? And, you know, I may or may not know that upfront because if I'm test driving, I'm writing the test first. So, sometimes, maybe I don't know, and I'll start with something in memory and then realize, oh, you know, I do need to talk to the database for this. And this is for unit tests, in particular. For something more like an integration test or a system test that might require data in the database, system tests almost always do. You're not interacting with instances in memory when you're writing a system test, right? You're saying, "Given the database state is this, when I visit this URL and do these things, this page reacts in such and such a way." So, system tests always write to the database to start with. So, maybe that's my heuristic there. But for unit tests, maybe think a little bit about does your method actually need to talk to the database? And maybe even almost give yourself a challenge. Can I get away with not talking to the database here? STEPHANIE: Yeah, I like that because I've certainly seen a lot of unit tests that are integration tests in disguise [laughs]. JOËL: Isn't that the truth? So, we kind of opened up this conversation with the idea of there are different ways to manage your database in terms of, do you clean or not clean before a test run? Where did you end up on this particular project? STEPHANIE: So, I ended up with a currently open PR to remove the need to truncate the database on each run of the test suite and just stick with the transaction-for-each-example strategy. And I do think that this will work for us as long as we decide we don't want to introduce something like fixtures, even though that is actually also a discussion that's still in the works. But I'm hoping with this change, like, right now, I can help people start running faster tests [chuckles]. And should we ever introduce fixtures down the line, then we can revisit that. But it's one of those things that I think we've been living with for too long [laughs]. And no one ever questioned, like, "Oh, why are we doing this?" Or, you know, maybe that was a need, however many years ago, that just got overlooked.
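A sketch of the scope heuristic Joël describes (Post and its published scope are illustrative, not from a real app):

    class Post < ApplicationRecord
      # A scope wraps a database query, so testing it genuinely requires
      # persisted records.
      scope :published, -> { where.not(published_at: nil) }
    end

    RSpec.describe Post do
      describe ".published" do
        it "excludes drafts" do
          published = FactoryBot.create(:post, published_at: 1.day.ago)
          FactoryBot.create(:post, published_at: nil) # a draft

          expect(Post.published).to eq([published])
        end
      end
    end

    # For code that only needs a record that looks persisted, build_stubbed
    # assigns an id without executing any SQL:
    stubbed = FactoryBot.build_stubbed(:post)
    stubbed.persisted? # => true, but the database was never touched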
And as a person new to the project, I saw it, and now I'm doing something about it [laughs]. JOËL: I love that new person energy on a project and like, "Hey, we've got this config thing. Did you know that we didn't need this as of Rails 6?" And they're like, "Oh, I didn't even realize that." And then you add that, and it just moves you into the future a little bit. So, if I understand the proposed change, then you're removing the truncation strategy, but you're still going to be in a situation where you have a clean database before each test because you're wrapping tests in transactions, which I think is the default Rails behavior. STEPHANIE: Yeah, that's where we're at right now. So, yeah, I'm not sure, like, how things came to be this way, but it seemed obvious to me that we were kind of doing this whole extra step that wasn't really necessary, at least at this point in time. Because, at least to my knowledge [laughs], there's no data being seeded in any other place. JOËL: It's interesting, right? When you have a situation where this was sort of a very popular practice for a long time, a lot of guides mentioned that. And so, even though Rails has made changes that mean that this is no longer necessary, there's still a long tail of apps that will still have this that may be upgraded later, and then didn't drop this, or maybe even new apps that got created but didn't quite realize that the guide they were following was outdated, or that a best practice that was in their head was also outdated. And so, you have a lot of apps that will still have these sort of, like, relics of the past. And you're like, "Oh yeah, that's how we used to do things." STEPHANIE: So yeah, thanks, Joël, for going on this journey with me in terms of, you know, reassessing my assumptions about test databases. I'm wondering, like, if this is common, how other people, you know, approach what they expect from the test database, whether it be totally clean or have, you know, any required data for common flows and use cases of your system. But it does seem that little in between of, like, maybe it is using transactions to reset for each example, but then there's also some persistence that's happening somewhere else that could be a little tricky to manage. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
415: Codebase Calibration

The Bike Shed

Play Episode Listen Later Feb 6, 2024 30:54


Stephanie has a delightful and cute Ruby thing to share: Honeybadger, the error monitoring service, has created exceptionalcreatures.com, where they've illustrated and characterized various common Ruby errors into little monsters, and they're adorable. Meanwhile, Joël encourages folks to submit proposals for RailsConf. Together, Stephanie and Joël delve into the nuances of adapting to and working within new codebases, akin to aligning with a shared mental model or vision. They ponder several vital questions that every developer faces when encountering a new project: the balance between exploring a codebase to understand its structure and diving straight into tasks, the decision-making process behind adopting new patterns versus adhering to established ones, and the strategies teams can employ to assist developers who are familiarizing themselves with a new environment.
Honeybadger's Exceptional Creatures (https://www.exceptionalcreatures.com/)
RailsConf CFP coaching sessions (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform)
HTTP Cats (https://http.cat/)
Support and Maintenance Episode (https://bikeshed.thoughtbot.com/409)
Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I have a delightful and cute Ruby thing to share that I'd seen just in our internal company Slack. Honeybadger, the error monitoring service, has created a cute little webpage called exceptionalcreatures.com, where they've basically illustrated and characterized various common Ruby errors into little monsters [laughs], and I find them adorable. I think their goal is also to make it a really helpful resource for people encountering these kinds of errors, learning about them for the first time, and figuring out how to triage or debug them. And I just think it's a really cool way of, like, making debugging super approachable, and, you know, when you first encounter a scary error message, it can be really overwhelming, and then Googling about it can also be equally [chuckles] overwhelming. So, I just really liked the whimsy that they kind of injected into something that could be really hard to learn about. Like, there are so many different error messages in Ruby and in Rails and whatever other libraries you're using. And so, that's kind of a...I think they've created a one-stop shop for, you know, figuring out how to move forward with common errors. And I also like that it's a bit of a collective effort. They're calling it, like, a bestiary for all the little creatures [laughs] that they've discovered. And I think you can, like, submit your own favorite Ruby error and any guidance you might have for someone trying to debug it. JOËL: That's adorable. It reminds me a little bit of the HTTP status codes as cat memes site. It has that same energy. One thing that I think is really interesting is that because it's Honeybadger, they have stats on, like, frequency of these errors, and a lot of these ones are tied to...I think they're picking some of the most commonly surfaced errors.
I think it's really neat that they're using something like a learning from their business or maybe even some, like, proprietary information and sharing it with the world so that we can learn from it. JOËL: I think one thing that's worth specifying as well is that these are specific exception classes that get raised. So, they're not just, like, random error strings that you see in the wild. They don't often have a whole lot of documentation around them, so it's nice to see a dedicated page for each and a little bit of maybe how this is used in the real world versus maybe how they were designed to be used. Maybe there's a line or two in the docs about, you know, core Ruby when a NoMethodError should be raised. How does NoMethodError actually get used, you know, in real life, and the exceptions that Honeybadger is capturing. That's really interesting to see. STEPHANIE: Yeah, I like how each page for the exception class, and I'm glad you made that distinction, is kind of, like, crowdsourced guidance and information from the community, so I think you could even, you know, contribute to it if you wanted. But yeah, just a fun, little website to bring you some delight when you're on your next head-smacking, debugging adventure [laughs]. JOËL: And I love that it brings some joy to the topic, but, honestly, I think it's a pretty good reference. I could see myself linking to this anytime I want to have a deeper discussion on exceptions. So, maybe there's a code review, and maybe I want to suggest that we raise a different error than the one that we're doing. I could see myself in that GitHub comment being like, "Oh, instead of, you know, raising an exception here, why don't we instead raise a NoMethodError or something like that?" And then link to the bestiary page. STEPHANIE: So, Joël, what's new in your world? JOËL: So, just recently, RailsConf announced their call for proposals. It's a fairly short period this year, only about three-ish weeks long. So, I've been really encouraging colleagues to submit and trying to be a resource for people who are interested in speaking at conferences. We did a Q&A session with a fellow thoughtboter, Aji Slater, who's also a former RailsConf speaker, about what makes for a good talk, what is it like to submit to a call for proposals, you know, kind of everything from the process from having an idea all the way to stage presence and delivering. And there's a lot of great questions that got asked and some good discussion that happened there. STEPHANIE: Nice. Yeah, I think I have noticed that you are doing a lot more to help, especially first-time speakers give their first conference talk this year. And I'm wondering if there's anything you've learned or any hopes and dreams you have for kind of the amount of time you're investing into supporting others. JOËL: What I'd like to see is a lot of people submitting proposals; that's always a great thing. And, a proposal, even if it doesn't get accepted, is a thing that you can resubmit. And so, having gone through the effort of building a proposal and especially getting it maybe peer-reviewed by some colleagues to polish your idea, I think is already just a really great exercise, and it's one that you can shop around. It's one that you can maybe convert into a blog post if you need to. You can convert that into some kind of podcast appearance. So, I think it's a great way to take an idea you're excited about and focus it, even if you can't get into RailsConf. STEPHANIE: I really like that metric for success. 
It reminds me of a writer friend I have who actually was a guest on the show, Nicole Zhu. She submits a lot of short stories to magazines and applications to writing fellowships, and she celebrates every rejection. I think at the end of the year, she, like, celebrates herself for having received, you know, like, 15 rejections or something that year because that meant that she just went for it and, you know, did the hard part of doing the work, putting yourself out there. And that is just as important, you know, if not more than whatever achievement or goal or the idea of having something accepted. JOËL: Yeah, I have to admit; rejection hurts. It's not a fun thing to go through. But I think even if you sort of make it to that final stage of having written a proposal and it gets rejected, you get a lot of value out of that journey sort of regardless of whether you get accepted or not. So, I encourage more people to do that. To any of our listeners who are interested, the RailsConf call for proposals goes through February 13th, 2024. So, if you are listening before then and are inspired, I recommend submitting. If you're unsure of what makes for a good CFP, RailsConf is currently offering coaching sessions to help craft better proposals. They have one on February 5th, one on February 6th, and one on February 7th, so those are also options to look into if this is maybe your first time and you're not sure. There's a signup form. We'll link to it in the show notes. STEPHANIE: So, another update I have that I'm excited to get into for the rest of the episode is my recent work on our support and maintenance team, which I've talked about on the show before. But for any listeners who don't know, it's a kind of sub-team at thoughtbot that is focused on helping maintain multiple client projects at a time. But, at this point, you know, there's not as much active feature development, but the work is focused on keeping the codebase up to date, making any dependency upgrades, fixing any bugs that come up, and general support. So, clients have a team to kind of address those things as they come up. And when I had last talked about it on the podcast, I was really excited because it was a bit of a different way of working. I felt like it was very novel to be, you know, have a lot of different projects and domains to be getting into. And knowing that I was working on this team, like, short-term and, you know, it may not be me in the future continuing what I might have started during my rotation, I thought it was really interesting to be optimizing towards, like, completion of a task. And that had kind of changed my workflow a bit and my process. JOËL: So, now that you've been doing work on the support and maintenance team for a while and you've kind of maybe gotten more comfortable with it, how are you generally feeling about this idea of sort of jumping into new codebases all the time? STEPHANIE: It is both fun and more challenging than I thought it would be. I tend to actually really enjoy that period of joining a new team or a project and exploring, you know, a codebase and getting up to speed, and that's something that we do a lot as consultants. But I think I started to realize that it's a bit of a tricky balance to figure out how much time should I be spending understanding what this codebase is doing? Like, how much of the application do I need to be understanding, and how much poking around should I be doing before just trying to get started on my first task, the first starter ticket that I'm given? 
There's a bit of a balance there because, on one hand, you could just immediately start on the task and kind of just, you know, have your blinders [chuckles] on and not really care too much about what the rest of the code is doing outside of the change that you're trying to make. But that also means that you don't have that context of why certain things are the way they are. Maybe, like, the way that you want to be building something actually won't work because of some unexpected complexity with the app. So, I think there, you know, needs to be time spent digging around a little bit, but then you could also be digging around for a long time [chuckles] before you feel like, okay, I finally have enough understanding of this new codebase to, like, build a feature exactly how a seasoned developer on the team might. JOËL: I imagine that probably varies a little bit based on the task that you're doing. So, something like, oh, we want to upgrade this codebase to Ruby 3.3, probably requires you to have a very different understanding of the codebase than there's a bug where submitting a comment double posts it, and you have to dig into that. Both of those require you to understand the application on very different levels and kind of understand different mental models of what the app is doing. STEPHANIE: Yeah, absolutely. That's a really good point that it can depend on what you are first asked to work on. And, in fact, I actually think that is a good guidepost for where you should be looking because you could develop a mental model that is just completely unrelated [chuckles] to what you're asked to do. And so, I suppose that is, you know, usually a good place to start, at least is like, okay, I have this first task, and there's some understanding and acceptance that, like, the more you work on this codebase, the more you'll explore and discover other parts of it, and that can be on a need to know kind of basis. JOËL: So, I'm thinking that if you are doing something like a Ruby upgrade or even a Rails upgrade, a lot of what you care about the app is going to be on a more mechanical level. So, you want to know what gems you're using. You want to know what different patterns are being used, maybe how callbacks are happening, any particular features that are version-specific that are being used, things like that. Whereas if you're, you know, say, fixing a bug, you might care a lot more about some of the product-level concerns. What are we actually trying to do here? What is the expected user experience? How does this deviate from that? What were the underlying mental models of the developers? So, there's almost, like, two lenses you can look at the code. Now, I almost want to make this a two-dimensional thing, where you can look at it either from, like, a very kind of mechanical lens or a product lens in one axis. And then, on the other axis, you could look at it from a very high-level 10,000-foot view and maybe zoom in a little bit where you need, versus a very localized view; here's where the bug is happening on this page, and then sort of zoom out as necessary. And I could see different sorts of tasks falling in different quadrants there of, do I need a more mechanical view? Do I need a more product-focused view? And do I need to be looking locally versus globally? STEPHANIE: Wow. I can't believe you just created a Cartesian graph [laughs] for this problem on the fly. But I love it because I do think that actually lines up with different strategies I've taken before. 
It's like, how much do you even look at the code before deciding that you can't really get a good picture of it, of what the product is, without just poking around from the app itself? I actually think that I tend to start from the code. Like, maybe I'll see a screenshot that someone has shared of the app, you know, like a bug or something that they want me to fix, and then looking for that text in the code first, and then trying to kind of follow that path, whereas it's also, you know, perfectly viable to try to see the app being used in production, or staging, or something first to get a better understanding of some of the business problems it's trying to solve. JOËL: When you jump into a new codebase, do you sort of consciously take the time to plan your approach or sort of think about, like, how much knowledge of this new codebase do I need before I can, like, actually look at the problem at hand? STEPHANIE: Ooh, that's kind of a hard question to answer because I think my experience has told me enough times that it's never what I think it's going to [laughs] be, not never, but it frequently surprises me. It has surprised me enough times that it's kind of hard to know off the bat because it's not...as much as we work in frameworks that have opinions and conventions, a lot of the work that happens is understanding how this particular codebase and team does things and then having to maybe shift or adjust from there. So, I think I don't do a lot of planning. I don't really have an idea about how much time it'll take me because I can't really know until I dive in a little bit. So, that is usually my first instinct, even if someone is wanting to, like, talk to me about an approach or be, like, "Hey, like, how long do you think this might take based on your experience as a consultant?" This is my first task. Oftentimes, I really can't say until I've had a little bit of downtime to, in some ways, like, acquire the knowledge [chuckles] to figure that out or answer that question. JOËL: How much knowledge do you like to get upfront about an app before you dive into actually doing the task at hand? Are there any things, like, when you get access to a new codebase, that you'll always want to look at to get a sense of the project before you look at any tickets? STEPHANIE: I actually start at the model level. Usually, I am curious about what kinds of objects we're working with. In fact, I think that is really helpful for me. They're like building blocks, in order for me to, like, conceptually understand this world that's being represented by a codebase. And I kind of either go outwards or inwards from there. Usually, if there's a model that is, like, calling to me as like, oh, I'll probably need to interact with, then I'll go and seek out, like, where that model is created, maybe through controllers, maybe through background jobs, or something like that, and start to piece together entry points into the application. I find that helpful because a lot of the times, it can be hard to know whether certain pages or routes are even used at all anymore. They could just be dead code and could be a bit misleading. I've certainly been misled [chuckles] more than once. And so, I think if I'm able to pull out the main domain objects that I notice in a ticket or just hear people talk about on the team, that's usually where I gravitate towards first. What about you? Do you have a place you like to start when it comes to exploring a new codebase like that? 
JOËL: The routes file is always a good sort of overview of, like, what is going on in the app. Scanning the models directory is also a great start in a Rails app to get a sense of what is this app about? What are the core nouns in our vocabulary? Another thing that's good to look for in a codebase is what are the big types of patterns that they tend to use? The Rails ecosystem goes through fads, and, over time, different patterns will be more popular than others. And so, it's often useful to see, oh, is this an app where everything happens in service objects, or is this an app that likes to rely on view components to render their views? Things like that. Once you get a sense of that, you get a little bit of a better sense of how things are architected beyond just the basic MVC. STEPHANIE: I like that you mentioned fads because I think I can definitely tell, you know, how modern an app is or kind of where it might be stuck in time [chuckles] a little bit based on those patterns and libraries that it's heavily utilizing, which I actually find to be an interesting and kind of challenging position to be in because how do you approach making changes to a codebase that is using a lot of patterns or styles from back in the day? Would you continue following those same patterns, or do you feel motivated to introduce something new or kind of what might be trendy now? JOËL: This is the boring answer, but it's almost never worth it to, like, rewrite the codebase just to use a new pattern. Just introducing the new pattern in some of the new things means there are now two patterns. That's also not a great outcome for the team. So, without some other compelling reason, I default to using the established patterns. STEPHANIE: Even if it's something you don't like? JOËL: Yes. I'm not a huge fan of service objects, but I work in plenty of codebases that have them, and so where it makes sense, I will use service objects there. Service objects are not mutually exclusive with other things, and so sometimes it might make sense to say, look, I don't feel like I can justify a service object here. I'll do this logic in a view, or maybe I'll pull this out into some other object that's not a service object and that can live alongside nicely. But I'm not necessarily introducing a new pattern. I'm just deciding that this particular extraction might not necessarily need a service object. STEPHANIE: That's an interesting way to describe it, not as a pattern, but as kind of, like, choosing not to use the existing [chuckles] pattern. But that doesn't mean, like, totally shifting the architecture or even how you're asking other people to understand the codebase. And I think I'm in agreement. I'm actually a bit of a follower, too, [laughs], where I want to, I don't know, just make things match a little bit with what's already been created, follow that style. That becomes pretty important to me when integrating with a team in a codebase. But I actually think that, you know, when you are calibrating to a codebase, you're in a position where you don't have all that baggage and history about how things need to be. And maybe you might be empowered to have a little bit more freedom to question existing patterns or bring some new ideas to the team to, hopefully, like, help the code evolve. I think that's something that I struggle with sometimes is feeling compelled to follow what came before me and also wanting to introduce some new things just to see what the team might think about them. 
JOËL: A lot of that can vary depending on what is the pattern you want to introduce and sort of what your role is going to be on that team. But that is something that's nice about someone new coming onto a project. They haven't just sort of accepted that things are the way they are, especially for things that the team already doesn't like but doesn't feel like they have the energy to do anything better about it. So, maybe you're in a codebase where there's a ton of Ruby code in your ERB templates, and it's not really a pattern that you're following. It's just a thing that's there. It's been sort of the path of least resistance for a long time, and it's easier to add more lines in there, but nobody likes it. New person joins the team, and their naive exuberance is just like, "We can fix it. We can make it better." And maybe that's, you know, going back and rewriting all of your views. That's probably not the best use of their time. But it could be maybe the first time they have to touch one of these views, cleaning up that one and starting a conversation among the team. "Hey, here are some patterns that we might like to clean up some of these views instead," or "Here are maybe some guidelines for anything new that we write that we want to do to keep our views clean," and sort of start moving the needle in a positive direction. STEPHANIE: I like the idea of moving the needle. Even though I tend to not want to stir the pot with any big changes, one thing that I do find myself doing is in a couple of places in the specs, just trying to refactor a bit away from using lets. There were some kind of forward-thinking decisions made before when RSpec was basically going to deprecate using the describe block without prepending it with their module, so just kind of throwing that in there whenever I would touch a spec and asking other people to do the same. And then, recently, one kind of, like, small syntax thing that I hadn't seen before, and maybe this is just because of the age of the codebases in which I'm working, the argument forwarding syntax in Ruby that has been new, I mean, it's like not totally new anymore [laughs], but throwing that in there a little every now and then to just kind of shift away from this, you know, dated version of the code kind of towards things that other people are seeing and in newer projects. JOËL: I love harnessing that energy of being new on a project and wanting to make things better. How do you avoid just being, you know, that developer, though, that's new, comes in, and just wants to change everything for the sake of change or for your own personal opinions and just kind of moves things around, stirs the pot, but doesn't really contribute anything net positive to the team? Because I've definitely seen that as well, and that's not a good first contribution or, you know, contribution in general as a newer team member. How do we avoid being that person while still capitalizing on that energy of being someone new and wanting to make a positive impact? STEPHANIE: Yeah, that's a great point, and I kind of alluded to this earlier when I asked, like, oh, like, even if you don't like an existing syntax or pattern you'll still follow it? And I think liking something a different way is not a good enough reason [chuckles]. 
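For reference, sketches of the two small modernizations Stephanie mentions (Invoice and create_invoice are illustrative names):

    # Newer RSpec expects describe to be called on the RSpec module rather
    # than as a bare, monkey-patched top-level method:
    RSpec.describe Invoice do
      # ...
    end

    # Ruby's argument forwarding syntax (2.7 and later) forwards positional
    # arguments, keyword arguments, and a block to another call:
    def create_invoice(...)
      Invoice.create!(...)
    end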
But if you are able to have a good reason, like I mentioned with the RSpec prepending, you know, it didn't need to happen now, but if we would hope to upgrade that gem eventually, then yeah, that was a good reason to make that change as opposed to just purely aesthetic [laughs]. JOËL: That's one where there is pretty much a single right answer to. If you plan to keep staying up to date with versions of RSpec, you will eventually need to do all these code changes because, you know, they're deprecating the old way. Getting ahead of that gradually as we touch spec files, there's kind of no downside to it. STEPHANIE: That's true, though maybe there is a person who exists out there who's like, "I love this old version of RSpec, and I will die on this hill that we have to stay on [laughs] it." But I also think that I have preferences, but I'm not so attached to them. Ideally, you know, what I would love to receive is just, like, curiosity about like, "Oh, like, why did you make this change?" And just kind of share my reasoning. And sometimes in that process, I realize, you know, I don't have a great reason, and I'll just say, "I don't have a great reason. This is just the way I like it. But if it doesn't work for you, like, tell me, and I'll consider changing it back. [chuckles]" JOËL: Maybe that's where there's a lot of benefit is the sort of curiosity on the part of the existing team and sort of openness to both learn about existing practices but also share about different practices from the new teammate. And maybe that's you're coming in, and you have a different style where you like to write tests, maybe without using RSpec's let syntax; the team is using it. Maybe you can have a conversation with the team. It's almost certainly not worth it for you to go and rewrite the entire test suite to not use let and be like, "Hey, first PR. I made your test better." STEPHANIE: Hundreds of files changed, thousands [laughs] of lines of code. I think that's actually a good segue into the question of how can a team support a new hire or a new developer who is still calibrating to a codebase? I think I'm curious about this being different from onboarding because, you know, there are a lot of things that we already kind of expect to give some extra time and leeway for someone who's new coming in. But what might be some ways to support a new developer that are less well known? JOËL: One that I really like is getting them involved as early as possible in code review because then they get to see the patterns that are coming in, and they can be involved in conversations on those. The first PR you're reviewing, and you see a bunch of tests leaning heavily on let, and maybe you ask a question, "Is this a pattern that we're following in this codebase? Did we have a particular motivation for why we chose this?" And, you know, and you don't want to do it in a sort of, like, passive-aggressive way because you're trying to push something else. It has to come from a place of genuine curiosity, but you're allowing the new teammate to both see a lot of the existing patterns kind of in very quick succession because you see a pretty good cross-section of those when you review code. And also, to have conversations about them, to ask anything like, "Oh, that's unusual. I didn't know we were doing that." Or, "Hey, is this a pattern that we're doing kind of just local to this subsystem, or is this something that's happening all the way? Is this a pattern that we're using and liking? 
Is this a thing that we were doing five years ago that we're phasing out, but there's still a few of them left?" Those are all, I think, great questions to ask when you're getting started. STEPHANIE: That makes a lot of sense. It's different from saying, "This is how we do things here," and expecting them to adapt or, you know, change to fit into that style or culture, and being open to letting it evolve based on the new team, the new people on the team and what they might be bringing to the table. I like to ask the question, "What do you need to know?" Or "What do you need to be successful?" as opposed to telling them what I think they need [laughs]. I think that is something that I actually kind of recently, not regret exactly, but I was kind of helping out some folks who were going to be joining the team and just trying to, like, shove all this information down their throats and be like, "Oh, and watch out for these gotchas. And this app uses a lot of callbacks, and they're really complex." And I think I was maybe coloring their [chuckles] experience a little bit and expecting them to be able to drink from the fire hose, as opposed to trusting that they can see for themselves, you know, like, what is going on, and form opinions about it, and ask questions that will support them in whatever they are looking to do. When we talked earlier about the four different quadrants, like, the kind of information they need to know will differ based off of their task, based off of their experience. So, that's one way that I am thinking about to, like, make space for a new developer to help shape that culture, rather than insisting that things are the way they are. JOËL: It can be a fine balance where you want to be open to change while also you have to remain kind of ruthlessly pragmatic about the fact that change can be expensive. And so, a lot of changes you need to be justified, and you don't want to just be rewriting your patterns for every new employee or, you know, just to follow the latest trends because we've seen a lot of trends come and go in the Rails ecosystem, and getting on all of them is just not worth our time. STEPHANIE: And that's the hard truth of there's always trade-offs [laughs] in software development, isn't that right? JOËL: It sure is. You can't always chase the newest shiny, as fun as that is. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.

Smart Software with SmartLogic
Web Development Frameworks: Elixir and Phoenix vs. Ruby on Rails with Owen Bickford & Dan Ivovich

Smart Software with SmartLogic

Play Episode Listen Later Dec 7, 2023 41:41


On today's episode, Elixir Wizards Owen Bickford and Dan Ivovich compare notes on building web applications with Elixir and the Phoenix Framework versus Ruby on Rails. They discuss the history of both frameworks, key differences in architecture and approach, and deciding which programming language to use when starting a project. Both Phoenix and Rails are robust frameworks that enable developers to build high-quality web apps—Phoenix leverages functional programming in Elixir and Erlang's networking for real-time communication. Rails follows object-oriented principles and has a vast ecosystem of plug-ins. For data-heavy CRUD apps, Phoenix's immutable data pipelines provide some advantages. Developers can build great web apps with either Phoenix or Rails. Phoenix may have a slight edge for new projects based on its functional approach, built-in real-time features like LiveView, and ability to scale efficiently. But, choosing the right tech stack depends heavily on the app's specific requirements and the team's existing skills.
Topics discussed in this episode:
History and evolution of Phoenix Framework and Ruby on Rails
Default project structure and code organization preferences in each framework
Comparing object-oriented vs functional programming paradigms
CRUD app development and interaction with databases
Live reloading capabilities in Phoenix LiveView vs Rails Turbolinks
Leveraging WebSockets for real-time UI updates
Testing frameworks like RSpec, Cucumber, Wallaby, and Capybara
Dependency management and size of standard libraries
Scalability and distribution across nodes
Readability and approachability of object-oriented code
Immutability and data pipelines in functional programming
Types, specs, and static analysis with Dialyzer
Monkey patching in Ruby vs extensible core language in Elixir
Factors to consider when choosing between frameworks
Experience training new developers on Phoenix and Rails
Community influences on coding styles
Real-world project examples and refactoring approaches
Deployment and dev ops differences
Popularity and adoption curves of both frameworks
Ongoing research into improving Phoenix and Rails
Links Mentioned in this Episode:
SmartLogic.io (https://smartlogic.io/)
Dan's LinkedIn (https://www.linkedin.com/in/divovich/)
Owen's LinkedIn (https://www.linkedin.com/in/owen-bickford-8b6b1523a/)
Ruby https://www.ruby-lang.org/en/
Rails https://rubyonrails.org/
Sams Teach Yourself Ruby in 21 Days (https://www.overdrive.com/media/56304/sams-teach-yourself-ruby-in-21-days)
Learn Ruby in 7 Days (https://www.thriftbooks.com/w/learn-ruby-in-7-days---color-print---ruby-tutorial-for-guaranteed-quick-learning-ruby-guide-with-many-practical-examples-this-ruby-programming-book--to-build-real-life-software-projects/18539364/#edition=19727339&idiq=25678249)
Build Your Own Ruby on Rails Web Applications (https://www.thriftbooks.com/w/build-your-own-ruby-on-rails-web-applications_patrick-lenz/725256/item/2315989/?utm_source=google&utm_medium=cpc&utm_campaign=low_vol_backlist_standard_shopping_customer_acquisition&utm_adgroup=&utm_term=&utm_content=593118743925&gad_source=1&gclid=CjwKCAiA1MCrBhAoEiwAC2d64aQyFawuU3znN0VFgGyjR0I-0vrXlseIvht0QPOqx4DjKjdpgjCMZhoC6PcQAvD_BwE#idiq=2315989&edition=3380836)
Django https://github.com/django
Sidekiq https://github.com/sidekiq
Kafka https://kafka.apache.org/
Phoenix Framework https://www.phoenixframework.org/
Phoenix LiveView https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html#content
Flask https://flask.palletsprojects.com/en/3.0.x/
WebSockets API https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
WebSocket connection for Phoenix https://github.com/phoenixframework/websock
Morph Dom https://github.com/patrick-steele-idem/morphdom
Turbolinks https://github.com/turbolinks
Ecto https://github.com/elixir-ecto
Capybara Testing Framework https://teamcapybara.github.io/capybara/
Wallaby Testing Framework https://wallabyjs.com/
Cucumber Testing Framework https://cucumber.io/
RSpec https://rspec.info/

The Bike Shed
408: Work Device Management

The Bike Shed

Play Episode Listen Later Nov 28, 2023 32:57


Joël recaps his time at RubyConf! He shares insights from his talk about different aspects of time in software development, emphasizing the interaction with the audience and the importance of post-talk discussions. Stephanie talks about wrapping up a long-term client project, the benefits of change and variety in consulting, and maintaining a balance between project engagement and avoiding burnout. They also discuss strategies for maintaining work-life balance, such as physical separation and device management, particularly in a remote work environment.
Rubyconf (https://rubyconf.org/)
Joël's talk slides (https://speakerdeck.com/joelq/which-time-is-it)
Flaky test summary slide (https://speakerdeck.com/aridlehoover/the-secret-ingredient-how-to-understand-and-resolve-just-about-any-flaky-test?slide=170)
Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Well, as of this recording, I have just gotten back from spending the week in San Diego for RubyConf. STEPHANIE: Yay, so fun. JOËL: It's always so much fun to connect with the community over there, talk to other people from different companies who work in Ruby, to be inspired by the talks. This year, I was speaking, so I gave a talk on time and how it's not a single thing but multiple different quantities. In particular, I distinguish between a moment in time, like a point; a duration, an amount of time; and then a time of day, which is time unconnected to a particular day, and how those all connect together in the software that we write. STEPHANIE: Awesome. How did it go? How was it received? JOËL: It was very well received. I got a lot of people come up to me afterwards and make a variety of time puns, which those are so easy to make. I had to hold myself back not to put too many in the talk itself. I think I kept it pretty clean. There were definitely a couple of time puns in the description of the talk, though. STEPHANIE: Yeah, absolutely. You have to keep some in there. But I hear you that you don't want it to become too punny [laughs]. What I really love about conferences, and we've talked a little bit about this before, is the, you know, like, engagement and being able to connect with people. And you give a talk, but then that ends up leading to a lot of, like, discussions about it and related topics afterwards in the hallway or sitting together over a meal. JOËL: I like to, in my talks, give little kind of hooks for people who want to have those conversations in the hallway. You know, sometimes it's intimidating to just go up to a speaker and be like, oh, I want to, like, dig into their talk a little bit. But I don't have anything to say other than just, like, "I liked your talk." So, if there's any sort of side trails I had to cut for the talk, I might give a shout-out to it and say, "Hey, if you want to learn more about this aspect, come talk to me afterwards." So, one thing that I put in this particular talk was like, "Hey, we're looking at these different graphical ways to think about time. These are similar to but not the same as thinking of time as a one-dimensional vector and applying vector math to it, which is a whole other side topic.
If you want to nerd out about that, come find me in the hallway afterwards, and I'd love to go deeper on it." And yeah, some people did. STEPHANIE: That's really smart. I like that a lot. You're inviting more conversation about it, which I know, like, you also really enjoy just, like, taking it further or, like, caring about other people's experiences or their thoughts about vector math [laughs]. JOËL: I think it serves two purposes, right? It allows people to connect with me as a speaker. And it also allows me to feel better about pruning certain parts of my talk and saying, look, this didn't make sense to keep in the talk, but it's cool material. I'd love to have a continuing conversation about this. So, here's a path we could have taken. I'm choosing not to, as a speaker, but if you want to take that branch with me, let's have that afterwards in the hallway. STEPHANIE: Yeah. Or even as, like, new content for yourself or for someone else to take with them if they want to explore that further because, you know, there's always something more to explore [chuckles]. JOËL: I've absolutely done that with past talks. I've taken a thing I had to prune and turned it into a blog post. A recent example of that was when I gave a talk at RailsConf Portland, which I guess is not so recent. I was talking about ways to deal with a test suite that's making too many database requests. And talking about how sometimes misusing let in your RSpec tests can lead to more database requests than you expect. And I had a whole section about how to better understand what database requests will actually be made by a series of let expressions and dealing with the eager versus lazy and all of that. I had to cut it. But I was then able to make a blog post about it and then talk about this really cool technique involving dependency graphs. And that was really fun. So, that was a thing where I was able to say, look, here's some content that didn't make it into the talk because I needed to focus on other things. But as its own little, like, side piece of content, it absolutely works, and here's a blog post. STEPHANIE: Yeah. And then I think it turned into a Bike Shed episode, too [laughs]. JOËL: I think it did, yes. I think, in many ways, creativity begets creativity. It's hard to get started writing or producing content or whatever, but once you do, every idea you have kind of spawns new ideas. And then, pretty soon, you have a backlog that you can't go through. STEPHANIE: That's awesome. Any other highlights from the conference you want to shout out? JOËL: I'd love to give a shout-out to a couple of talks that I went to, Aji Slater's talk on the Enigma machine as a German code machine from World War II and how we can sort of implement our own in Ruby and an exploration of object-oriented programming was fantastic. Aji is just a masterful storyteller. So, that was really great. And then Alan Ridlehoover's talk on dealing with flaky tests that one, I think, was particularly useful because I think it's one of the talks that is going to be immediately relevant on Monday morning for, like, every developer that was in that room and is going back to their regular day job. And they can immediately use all of those principles that Alan talked about to deal with the flaky tests in their test suite. And there's, in particular, at the end of his presentation, Alan has this summary slide. 
He kind of broke down flakiness across three different categories and then talked about different strategies for identifying and then fixing tests that were flaky because of those reasons. And he has this table where he sort of summarizes basically the entire talk. And I feel like that's the kind of thing that I'm going to save as a cheat sheet. And that can be, like, I'm going to link to this and share it all over because it's really useful. Alan has already put his slides up online. It's all linked to that particular slide in the show notes because I think that all of you would benefit from seeing that. The talks themselves are recorded, but they're not going to be out for a couple of weeks. I'm sure when they do, we're going to go through and watch some and probably comment on some of the talks as well. So, Stephanie, what is new in your world? STEPHANIE: Yeah. So, I'm celebrating wrapping up a client project after a nine-month engagement. JOËL: Whoa, that's a pretty long project. STEPHANIE: Yeah, that's definitely on the longer side for thoughtbot. And I'm, I don't know, just, like, feeling really excited for a change, feeling really, you know, proud of kind of, like, all of the work that we had done. You know, we had been working with this client for a long time and had been, you know, continuing to deliver value to them to want to keep working with us for that long. But I'm, yeah, just looking forward to a refresh. And I think that's one of my favorite things about consulting is that, you know, you can inject something new into your work life at a kind of regular cadence. And, at least for me, that's really important in reducing or, like, preventing the burnout. So, this time around, I kind of started to notice, and other people, too, like my manager, that I was maybe losing a bit of steam on this client project because I had been working on it for so long. And part of, you know, what success at thoughtbot means is that, like, we as employees are also feeling fulfilled, right? And, you know, what are the different ways that we can try to make sure that that remains the case? And kind of rotating folks on different projects and kind of making sure that things do feel fresh and exciting is really important. And so, I feel very grateful that other people were able to point that out for me, too, when I wasn't even fully realizing it. You know, I had people checking in on me and being like, "Hey, like, you've been on this for a while now. Kind of what I've been hearing is that, like, maybe you do need something new." I'm just excited to get that change. JOËL: How do you find the balance between sort of feeling fulfilled and maybe, you know, finding that point where maybe you're feeling you're running out of steam–versus, you know, some projects are really complex, take a while to ramp up; you want to feel productive; you want to feel like you have contributed in a significant way to a project? How do you navigate that balance? STEPHANIE: Yeah. So, the flip side is, like, I also don't think I would enjoy having to be changing projects all the time like every couple of months. That maybe is a little too much for me because I do like to...on our team, Boost, we embed on our team. We get to know our teammates. We are, like, building relationships with them, and supporting them, and teaching them. And all of that is really also fulfilling for me, but you can't really do that as much if you're on more shorter-term engagements. 
And then all of that, like, becomes worthwhile once you're kind of in that, like, maybe four or five six month period where you're like, you've finally gotten your groove. And you're like, I'm contributing. I know how this team works. I can start to see patterns or, like, maybe opportunities or gaps. And that is all really cool, and I think also another part of what I really like about being on Boost. But yeah, I think what I...that losing steam feeling, I started to identify, like, I didn't have as much energy or excitement to push forward change. When you kind of get a little bit too comfortable or start to get that feeling of, well, these things are the way they are [laughs], -- JOËL: Right. Right. STEPHANIE: I've now identified that that is kind of, like, a signal, right? JOËL: Maybe time for a new project. STEPHANIE: Right. Like starting to feel a little bit less motivated or, like, less excited to push myself and push the team a little bit in areas that it needs to be pushed. And so, that might be a good time for someone else at thoughtbot to, like, rotate in or maybe kind of close the chapter on what we've been able to do for a client. JOËL: It's hard to be at 100% all the time and sort of always have that motivation to push things to the max, and yeah, variety definitely helps with that. How do you feel about finding signals that maybe you need a break, maybe not from the project but just in general? The idea of taking PTO or having kind of a rest day. STEPHANIE: Oh yeah. I, this year, have tried out taking time off but not going anywhere just, like, being at home but being on vacation. And that was really great because then it was kind of, like, less about, like, oh, I want to take this trip in this time of year to this place and more like, oh, I need some rest or, like, I just need a little break. And that can be at home, right? Maybe during the day, I'm able to do stuff that I keep putting off or trying out new things that I just can't seem to find the time to do [chuckles] during my normal work schedule. So, that has been fun. JOËL: I think, yeah, sometimes, for me, I will sort of hit that moment where I feel like I don't have the ability to give 100%. And sometimes that can be a signal to be like, hey, have you taken any time off recently? Maybe you should schedule something. Because being able to refresh, even short-term, can sort of give an extra boost of energy in a way where...maybe it's not time for a rotation yet, but just taking a little bit of a break in there can sort of, I guess, extend the time where I feel like I'm contributing at the level that I want to be. STEPHANIE: Yeah. And I actually want to point out that a lot of that can also be, like, investing in your life outside of work, too, so that you can come to work with a different approach. I've mentioned the month that I spent in the Hudson Valley in New York and, like, when I was there, I felt, like, so different. I was, you know, just, like, so much more excited about all the, like, novel things that I was experiencing that I could show up to work and be like, oh yeah, like, I'm feeling good today. So, I have all this, you know, energy to bring to the tasks that I have at work. And yeah, so even though it wasn't necessarily time off, it was investing in other things in my life that then brought that refresh at work, even though nothing at work really changed [laughs]. 
JOËL: I think there's something to be said for the sort of energy boost you get from novelty and change, and some of that you get it from maybe rotating to a different project. But like you were saying, you can change your environment, and that can happen as well. And, you know, sometimes it's going halfway across the country to live in a place for a month. I sometimes do that in a smaller way by saying, oh, I'm going to work this morning from a coffee shop or something like that. And just say, look, by changing the environment, I can maybe get some focus or some energy that I wouldn't have if I were just doing same old, same old. STEPHANIE: Yeah, that's a good point. So, one particularly surprising refresh that I experienced in offboarding from my client work is coming back to my thoughtbot, like, internal company laptop, which had been sitting gathering dust [laughs] a little bit because I had a client-issued laptop that I was working in most of the time. And yeah, I didn't realize how different it would feel. I had, you know, gotten everything set up on my, you know, my thoughtbot computer just the way that I liked it, stuff that I'd never kind of bothered to set up on my other client-issued laptop. And then I came back to it, and then it ended up being a little bit surprising. I was like, oh, the icons are smaller on this [laughs] computer than the other computer. But it definitely did feel like returning to home, I think, instead of, like, being a guest in someone else's house that you haven't quite, like, put all your clothes in the closet or in the drawers. You're still maybe, like, living out of a suitcase a little bit [laughs]. So yeah, I was kind of very excited to be in my own space on my computer again. JOËL: I love the metaphor of coming home, and yeah, being in your own space, sleeping in your own bed. There's definitely some of that that I feel, I think, when I come back to my thoughtbot laptop as well. Do you feel like you get a different sense of connection with the rest of our thoughtbot colleagues when you're working on the thoughtbot-issued laptop versus a client-issued one? STEPHANIE: Yeah. Even though on my client-issued computer I had the thoughtbot Slack, like, open on there so I could be checking in, I wasn't necessarily in, like, other thoughtbot digital spaces as much, right? So, our, like, project management tools and our, like, internal company web app, those were things that I was on less of naturally because, like, the majority of my work was client work, and I was all in their digital spaces. But coming back and checking in on, like, all the GitHub discussions that have been happening while I haven't had enough time to catch up on them, just realizing that things were happening [laughs] even when I was doing something else, that is both cool and also like, oh wow, like, kind of sad that I [chuckles] missed out on some of this as it was going on. JOËL: That's pretty similar to my experience. For me, it almost feels a little bit like the difference between back when we used to be in person because thoughtbot is now fully remote. I would go, usually, depending on the client, maybe a couple of days a week working from their offices if they had an office. Versus some clients, they would come to our office, and we would work all week out of the thoughtbot offices, particularly if it was like a startup founder or something, and they might not already have office space. 
And that difference in the connection that I would feel with the rest of the thoughtbot team if I were, let's say, four days a week out of a client office versus two or four days a week out of the thoughtbot office feels kind of similar to what it's like working on a client-issued laptop versus on a thoughtbot-issued one. STEPHANIE: Another thing that I guess I forgot about or, like, wasn't expecting to do was all the cleanup, just the updating of things on my laptop as it had kind of been sitting. And it reminded me to, I guess, extend that, like, coming home metaphor a little bit more. In the game Animal Crossing, if you haven't played the game in a while because it tracks, like, real-time, so it knows if you haven't, you know, played the game in a few months, when you wake up in your home, there's a bunch of cockroaches running around [laughs], and you have to go and chase and, like, squash them to clean it up. JOËL: Oh no. STEPHANIE: And it kind of felt like that, opening my computer. I was like, oh, like, my, like, you know, OS is out of date. My browsers are out of date. I decided to get an internal company project running in my local development again, and I had to update so many things, you know, like, install the new Ruby version that the app had, you know, been upgraded to and upgrade, like, OpenSSL and all of that stuff on my machine to, yeah, get the app running again. And like I mentioned earlier, just the idea of like, oh yeah, this has evolved and changed, like, without me [laughs] was just, you know, interesting to see. And catching myself up to speed on that was not trivial work. So yeah, like, all that maintenance stuff, still got to do it. It's, like, the digital cleanup, right? JOËL: Exactly. So, you mentioned that on the client machine, you still had the thoughtbot Slack. So, you were able to keep up at least some messages there on one device. I'm curious about the experience, maybe going the other way. How much does thoughtbot stuff bleed into your personal devices, if at all? STEPHANIE: Barely. I am very strict about that, I think. I used to have Slack on my phone, I don't know, just, like, in an earlier time in my career. But now I have a rule to keep it off. I think the only thing that I have is my calendar, so no email either. Like, that is something that I, like, don't like to check on my personal time. Yeah, so it really just is calendar just in case I'm, like, out in the morning and need to be, like, oh, when is my first meeting? But [laughs] I will say that the one kind of silly thing is that I also refuse to sign into my Google account for work. So, I just have the calendar, like, added to my personal calendar but all the events are private. So, I can't actually see what the events are [laughs]. I just know that I have something going on at, like, 10:00 a.m. So, I got to make sure I'm back home by then [laughs], which is not so ideal. But at the risk of being signed in and having other things bleed into my personal devices, I'm just living with that for now [laughs]. JOËL: What I'm hearing is that I could put some mystery events on your calendar, and you would have a fun surprise in the morning because you wouldn't know what it is. STEPHANIE: Yeah, that is true [laughs]. If you put, like, a meeting at, like, 8:00 a.m., [laughs] then I'm like, oh no, what's this? And then I arrive, and it's just, like [laughs], a fun prank meeting. So, you know, you were talking about how you were at the conference this week.
And I'm wondering, how connected were you to work life? JOËL: Uh, not very. I tried to be very present in the moment at the conference. So, I'm, you know, connected to all the other thoughtboters who were there and connecting with the attendees. I do have Slack on my phone, so if I do need to check it for something, I can. There was a little bit of communication that was going on for different things regarding the conference, so I did check in for that. But otherwise, I tried to really stay focused on the in-person things that are happening. I'm not doing any client work during those days that I'm at RubyConf, and so I don't need to deal with anything there. I had my thoughtbot laptop with me because that's what I used to give my presentation. But once the presentation was done, I closed that laptop and didn't open it again, and, honestly, that felt kind of good. STEPHANIE: Yeah, that is really nice. I'm the same way, where I try to be pretty connected at conferences, and, like, I will actually redownload Slack sometimes just for, like, coordinating purposes with other folks who are there. But I think I make it pretty clear that I'm, like, away. You know, like, I'm not actually...like, even though I'm on work time, I'm not doing any other work besides just being present there. JOËL: So, you mentioned the idea of work time. Do you have, like, a pretty strict boundary between personal time and work time and, like, try not to allow either to bleed into each other? STEPHANIE: Yeah. I can't remember if I've mentioned this on the show. I think I have, but I'm going to again because one of my favorite things that I picked up from The Bike Shed back when Chris Toomey and Steph Viccari were hosting the show is Chris had, like, a little ritual that he would do every day to signal that he was done with work. He would close his laptop and say, "Scheduled shutdown complete," I think. And I've started adopting it because then it helps me be like, I'm not going to reopen my laptop after this because I have said the words. And even if I think of something that I maybe need to add to my to-do list, instead of opening my computer and adding to my, like, whatever digital to-do list, I will, like, write it down on a piece of paper instead for the sake of, you know, not risking getting sucked back into, you know, whatever might be going on after the time that I've, like, decided that I need to be done. JOËL: So, you have a very strict division between work time and personal time. STEPHANIE: Yeah, I would say so. I think it's important for me because even when I take time off, you know, sometimes folks might work a half day or something, right? I really struggle with having even a half day where, once I'm done with work, it feels like okay, like, now I'm back in my personal time. I'd much prefer not working the entire day at all because that is kind of the only way that I can feel like I've totally reclaimed that time. Otherwise, it's like, once I start thinking about work stuff, it's like I need a mental boundary, right? Because if I'm thinking about a work problem, or, like, an interaction or, like, just anything, it's frustrating because it doesn't feel like time in my own brain [laughs] is my own. What do work and personal time boundaries look like for you? JOËL: I think it's evolved over time. Device usage is definitely a little bit more blurry for me.
One thing that I have started doing since we've gone fully remote as the pandemic has been winding down and, you know, you can do things, but we're still working from home, is that more days than not, I work from home during the day, and then I leave my home during the evening. I do a variety of social activities. And because I like to be sort of present in the moment, that means that by being physically gone, I have totally disconnected because I'm not checking emails or anything like that. Even though I do have thoughtbot email on my phone, Gmail allows me to like log into my personal account and my thoughtbot account. I have to, like, switch between the two accounts, and so, that's, like, more work than I would want. I don't have any notifications come in for the thoughtbot account. So, unless I'm, like, really wanting to see if a particular email I'm waiting for has come in, I don't even look at it, ever. It's mostly just there in case I need to see something. And then, by being focused in the moment doing social things with other people, I don't find too much of a temptation to, like, let work life bleed into personal life. So, there's a bit of a physical disconnect that ends up happening by moving out of the space I work in into leaving my home. STEPHANIE: Yeah. And I'm sure it's different for everyone. As you were saying that, I was reminded of a funny meme that I saw a long time ago. I don't think I could find it if I tried to search for it. But basically, it's this guy who is, you know, sitting on one side of the couch, clearly working. And he's kind of hunched over and, like, typing and looking very serious. And then he, like, closes his laptop, moves over, like, just slides to the other side of the couch, opens his laptop. And then you see him, like, lay back, like, legs up on the coffee table. And it's, like, work computer, personal computer, but it's the same computer [laughs]. It's just the, like, how you've decided like, oh, it's time for, you know, legs up, Netflix watching [laughs]. JOËL: Yeah. Yeah. I'm curious: do you use your thoughtbot computer for any personal things? Or is it just you shut that down; you do the closing ritual, and then you do things on a separate device? STEPHANIE: Yeah, I do things on a separate device. I think the only thing there might be some overlap for are, like, career-related extracurriculars or just, like, development stuff that I'm interested in doing, like, separate from what I am paid to do. But that, you know, kind of overlaps a little bit because of, like, the tools and the stuff I have installed on my computer. And, you know, with our investment time, too, that ends up having a bit of a crossover. JOËL: I think I'm similar in that I'll tend to do development things on my thoughtbot machine, even though they're not necessarily thoughtbot-related, although they could be things that might slot into something like investment time. STEPHANIE: Yeah, yeah. And it's because you have all your stuff set up for it. Like, you're not [laughs] trying to install the latest Ruby version on two different machines, probably [laughs]. JOËL: Yeah. Also, my personal device is a Windows machine. And I've not wanted to bother learning how to set that up or use the Windows Subsystem for Linux or any of those tools, which, you know, may be good professional learning activities. But that's not where I've decided to invest my time. STEPHANIE: That makes sense. 
I had an interesting conversation with someone else today, actually, about devices because I had mentioned that, you know, sometimes I still need to incorporate my personal devices into work stuff, especially, like, two-factor authentication. And specifically on my last client project...I have a very old iPhone [laughs]. I need to start out by saying it's an iPhone 8 that I've had for, like, six or seven years. And so, it's old. Like, one time I went to the Apple store, and I was like, "Oh, I'm looking for a screen protector for this." And they're like, "Oh, it's an iPhone 8. Yikes." [laughs] This was, you know, like, not too long ago [laughs]. And the multi-factor authentication policy for my client was that, you know, we had to use this specific app. And it also had, like, security checks. Like, there's a security policy that it needed to be updated to the latest iOS. So, even if I personally didn't want to update my iOS [laughs], I felt compelled to because, otherwise, I would be locked out of the things that I needed to do at work [laughs]. JOËL: Yeah, that can be a challenge sometimes when you're adding work things to personal devices, maybe not because it's convenient and you want to, but because you don't have a choice for things like two-factor auth. STEPHANIE: Yeah, yeah. And then the person I was talking to actually suggested something I hadn't even thought about, which is like, "Oh, you know, if you really can't make it work, then, like, consider having that company issue another device for you to do the things that they're, like, requiring of you." And I hadn't even thought of that, so... And I'm not quite at the point where I'm like, everything has to be, like, completely separate [laughs], including two-factor auth. But, I don't know, something to consider, like, maybe that might be a place I get to if I'm feeling like I really want to keep those boundaries strict. JOËL: And I think it's interesting because, you know, when you think of the kind of work that we do, it's like, oh, we work with computers, but there are so many subfields within it. And device management and, just maybe, corporate IT, in general, is a whole subfield that is separate and almost a little bit alien to me. As a software developer, I'm just aware of a little bit of it...like, I've read a couple of articles around it...and this was, you know, years ago when the trend called Bring Your Own Device was starting. So, people who want to say, "Hey, I want to use my phone. I want to have my work email on my phone." But then does that mean that potentially you're leaking company memos and things? So, how do you secure that kind of thing? And everything that IT had to think through in order to allow that, the pros and cons. So, I think we're just kind of, as users of that system, touching the surface of it. But there's a lot of thought and discussion that, as an industry, the kind of corporate IT folks have gone through to struggle with how to balance a lot of those things. STEPHANIE: Yeah, yeah. I bet there's a lot of complexity or nuance there. I mean, we're just talking about, like, ways that we do or don't mix work and personal life. And for that kind of work, you know, the job is, like, to think really thoroughly about how people use their devices and what should and shouldn't be permissible.
The last thing that I wanted to kind of ask about in terms of device management or, like, work and personal intermixing is the idea of being on call and your device being a way for work to reach you and that being a requirement, right? I feel very lucky to obviously not really be in that position. As consultants, like, we're not usually so embedded into a team that we're then brought into, like, an on-call rotation, and I think that's good for me. Like, I don't think that that is something I'd be interested in doing anytime soon. Do you have any experience with that? JOËL: I have not been on a project where I've had to be on call, and I think that's generally true for most of us at thoughtbot who are doing software development. I know those who are doing more kind of platformy SRE-type things are on call. And, in fact, we have specifically hired people in different regions around the world so that we can provide 24-hour coverage for that kind of thing. STEPHANIE: Yeah. And I imagine kind of like what we're talking about with work device management looks even different for that kind of role, where maybe you do need a lot more access to things, like, wherever you might be. JOËL: And maybe the answer there is you get issued a work-specific device and a work phone or something like that, or an old-school work pager. STEPHANIE: [laughs] JOËL: PagerDuty is not just a metaphoric thing. Back in the day, they used actual pagers. STEPHANIE: Yeah, that would be very funny. JOËL: So yeah, I can't speak to it from personal experience, but I could imagine that maybe some of the dynamics there might be a little bit different. And, you know, for some people, maybe it's fine to just have an app on your phone that pings you when something happens, and you have to be on call. And you're able to be present while waiting, like, in case you get pinged, but also let it go while you're on call. I can imagine that's, like, a really weird kind of, like, shadow, like, working, not working experience that I can't really speak to because I have not been in that position. STEPHANIE: Yeah. As you were saying that, I also had the thought that, like, our ability to step away from work and our devices is also very much dependent on, like, a company culture and those types of factors, right? Where, you know, it is okay for me to not be able to look at that stuff and just come back to it Monday morning, and I am very grateful [laughs] for that. Because I recognize that, like, not everyone is in that position where there might be a lot more pressure or urgency to be on top of that. But right now, for this time in my life, like, that's kind of how I like to work. JOËL: I think it kind of sits at the intersection of a few different things, right? There's sort of where you are personally. It might be a combination, like, personality and maybe, like, mental health, things like that, how you respond to how sharp or blurry those lines between work and personal life can be. Like you said, it's also an element of company culture. If there's a company culture that's really pushing to get into your personal life, maybe you need firmer boundaries. And then, finally, what we spent most of this episode talking about: technical solutions, whether that's, like, physically separating everything such that there are two devices. And you close down your laptop, and you're done for the day. And whether or not you allow any apps on your personal phone to carry with you after you leave for the day. 
So, I think at the intersection of those three is sort of how you're going to experience that, and every person is going to be a little bit different. Because those three...I guess I'm thinking of a Venn diagram. Those three circles are going to be different for everyone. STEPHANIE: Yeah, that makes complete sense. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
401: Making the Right Thing Easy

Sep 12, 2023 31:24


Stephanie has another debugging mystery to share. Earlier this year, Joël mentioned that he was experimenting with a bookmark manager to keep track of helpful and interesting articles. He's happy to report that it's working very well for him! Together, they discuss tactics to ensure the easiest route also upholds app health and aids fellow developers. They explore streamlining test fixes over mere re-runs and how to motivate desired actions across teams and individuals. Raindrop.io (https://raindrop.io/) Railway Oriented Programming (https://fsharpforfunandprofit.com/rop/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I have another debugging mystery to share with the group. I was working on a bug fix and was trying to figure out what went wrong with some plain text that we're rendering from a controller with ERB. And I was looking at the ERB file, and I was like, great, like, I see the method in question that I need to go, you know, figure out why it's not returning what we think it's supposed to return. I went down to go check out that method. I read through it, ran the tests. Things were looking all fine and dandy, but I did know that the bug was specific to a particular, I guess, type of the class that the method was being called on, where this type was configured via a column in the database. You know, if it was set to true, then that signaled that this was a special thing. You know, I made sure that the test case for that specific type of object was returning what we thought it was supposed to. But strangely, the output in the plain text was different from what our method was returning. And I was really confused for a while because I thought, surely, it must be the method that is the problem here [laughs]. But it turns out, in our controller, we were actually doing a side effect on that particular type of object if that were the case. So, after it was set to an instance variable, we called another method that essentially overwrote all of its associations and really changed the way that you would interact with that method, right? And that was the source of the bug: we were expecting the associations to return what we, you know, thought it would, but this side effect was very subtly changing that behavior. JOËL: Would it be fair to call this a classic mutation bug? STEPHANIE: Yeah. I didn't know there was a way to describe that, but that sounds exactly right. It was a classic mutation bug. And I think the assumption that I was making was that the controller code was, hopefully, just pretty straightforward, right? I was thinking, oh, it's just rendering plain text, so not [laughs] much stuff could really be happening in there, and that it must be the method in question that was causing the issues. But, you know, once I had to revisit that assumption and took a look at the controller code, I was like, oh, that is clearly incorrect. And from there, I was able to spot some, you know, suspicious-looking lines that led me to that line that did the mutation and, ultimately, the answer to our mystery. JOËL: Was the mutation happening directly in the controller?
Or was this a situation where you're passing this object to a method somewhere, and that method, you know, in another object or some other file is doing the mutation on the record that you passed in? STEPHANIE: Yeah, that's a great question. I think that could have been a very likely situation. But, in this particular case, it was a little more obvious just in the controller code, which was nice, right? Because then I didn't have to go digging into all of these other functions that may or may not be the ones that are doing the mutation. But the thing that was really interesting is that it seems like that method that does this mutating is pretty key to the type of object that we're working with, as in, most places it's used, that mutation happens. And yet, it's kind of separated from the construction of that object. And I was, I think, a little bit surprised that it wasn't super obvious that throughout the application, this is the way that we treat this special object. And I had wished that maybe that was a bit more cohesive or that it was kind of clear that this is how we use this in our domain. And it was, like, lacking that bit of clarity around how things are used in practice as opposed to trying to keep those in isolation. JOËL: Yeah. Because now you have sort of two different diverging use cases for using this object that are incompatible, one that's trying to use the object as is and the other that's depending on this mutation. Now you can't have both. STEPHANIE: Right. As far as I can tell, in most cases, you know, we're using it with a mutation, and maybe there is a good reason for those ideas to be separate. But it certainly did not make my life easier trying to solve this particular bug. JOËL: I'm hearing you mention the idea of ideas being separate; definitely kind of triggers some pathways in my modeling brain, where I'm already thinking, oh, maybe this should be a decorator, or maybe this is just a straight-up transformation. We just have a completely different kind of object rather than mutating the underlying record. STEPHANIE: Yeah, definitely. I think there could have been some alternative paths taken. At the time, that was kind of decided as the way forward for how to treat this particular domain object. So, that's my fun, little mystery where I got to, you know, play the detective role for a bit. Joël, what's new in your world? JOËL: So, earlier this year, on an episode of The Bike Shed, I mentioned that I was experimenting with a bookmark manager to kind of keep track of articles that I find are helpful that I might want to reference later on. I wasn't sure if it was going to be worthwhile and mentioned that I'd report back. I'm happy to report that it is working very well for me. So, the tool is Raindrop.io. They have an app. They have a website. And I have been sort of slowly filling it with some of my most commonly referenced articles. I had pulled some in initially, and then kind of over time, when I find myself referencing an article using my old-fashioned approach, which was just to remember the keywords in the title and Google them, now I'll do that, link the article to someone else, and then add it to Raindrop so that the next time I'm starting to look for things, I have these resources available.
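To picture the classic mutation bug from Stephanie's mystery, here is a minimal, hypothetical sketch. None of these names come from her client's codebase; they are invented to show the shape of the problem: the controller loads a record, and a later line quietly rewrites its associations in place.

class ReportsController < ApplicationController
  def show
    @report = Report.find(params[:id])

    # The subtle side effect: for "special" records, this call mutates
    # @report in place, so the plain text rendered below no longer
    # reflects the associations as they exist in the database.
    override_associations!(@report) if @report.special?

    render plain: @report.summary_text
  end

  private

  # Mutates the record it receives. Anyone testing Report#summary_text
  # in isolation would never see this happen, which is exactly why this
  # kind of bug is hard to spot.
  def override_associations!(report)
    report.line_items = report.special_line_items
  end
end

A decorator or a separate transformed object, as Joël suggests, would keep the original record intact and make the "special" behavior explicit at construction time rather than hidden in a controller side effect.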
And I try to curate a little bit by doing things like tagging them and categorizing them so that when I need references for something, I don't just have to go through my personal memory and be like, oh yeah, what articles do I have on data modeling that I think might be a good fit here? Instead, I can just go to the data modeling section of Raindrop and be like, oh yeah, these five are, like, my favorites that I link to all the time. STEPHANIE: Wow. We did a whole episode on how to search recently. And we totally forgot to mention things like bookmark managers or curating your own little catalog of go-to articles. In some ways, now you have to search within your bookmark manager [laughs]. JOËL: That's true. Sometimes, it's searching if I'm looking for a particular article, and sometimes, it's more browsing where I'm looking at a category, or a tag, or something like that, which maybe would have been another interesting distinction to explore on the How to Search episode. I do want to give a shout-out to the most recent article that I looked up in my bookmark manager here, Railway-Oriented Programming. It's an article on how to deal with pieces of code that can error and how to sort of compose those sorts of methods. So, you now have a whole sort of chain of different functions that can error or not in different sorts of ways. And it uses this really powerful metaphor of a railway with different types of junctions and how you might try to, like, fit them all together so that everything connects nicely. And it's just a really beautiful metaphor. And I was doing some work on error handling, in particular, and I wanted to reference something and that was a great resource for that piece of work. STEPHANIE: Very cool. I'm really intrigued. I love a good metaphor. I am curious: is this programming language and framework agnostic and not about Rails? JOËL: Actually, so this is written on an F# blog. So, the code is all in F#. And it leans a little bit into some functional programming concepts, but the metaphor is more generic. So, it's a really fun way to think about when you're programming, and you're not just going through the happy path. But what are all the side branches that you might have to deal with, and how do those side branches come back into the flow of your program? STEPHANIE: Very cool. I can also see some really excellent visuals here if you were to use this metaphor as a way of understanding complexity. JOËL: Absolutely. In fact, this article has some pretty amazing visuals, so strong recommend. We'll link it in the show notes. STEPHANIE: So, a few episodes ago, we talked about code ownership at scale because my client project that I'm working on is for a company with hundreds of developers. So, it's quite a big codebase, quite a big team. One of the main issues that I, at least, struggle with on a day-to-day basis is flaky tests in CI. When I, you know, I'm wanting to merge a change, I often have to run the test suite a few times to get to green and be able to merge and deploy. And this is an interesting topic to me because when you're really trying to just get your changes through and mark that ticket as done, it's very tempting to just hit that, you know, retry button and let it sit and just hope for the best, as opposed to maybe investigating a little further about why that test was flaking and see if there's something that you could do about it.
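The railway metaphor Joël describes composes functions that can each succeed or fail. The article's code is all F#, but a rough Ruby sketch of the same idea, with every name here invented for illustration, might look like this:

# A two-track result: a step either stays on the success track or
# switches onto the failure track, and later steps are skipped.
Result = Struct.new(:value, :error) do
  def self.success(value)
    new(value, nil)
  end

  def self.failure(error)
    new(nil, error)
  end

  def success?
    error.nil?
  end

  # Run the next step only on the success track; the block must
  # itself return a Result.
  def and_then
    success? ? yield(value) : self
  end
end

def parse_amount(input)
  number = Integer(input, exception: false)
  number ? Result.success(number) : Result.failure("not a number")
end

def check_positive(number)
  number.positive? ? Result.success(number) : Result.failure("must be positive")
end

result = parse_amount("42").and_then { |n| check_positive(n) }
puts result.success? ? "ok: #{result.value}" : "error: #{result.error}"

Once any step fails, every later and_then call just passes the failure along, which is the junction-joining property the article's diagrams illustrate so well.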
So, I wanted to talk to you about the idea of making the right thing easy or how, at both a team level and an individual level, we can set ourselves and our team up for success rather than shoulder this burden [laughs] and just assume that things are the way they are. JOËL: That's a really powerful question. Because I think by default, oftentimes, the less helpful thing is the path of least resistance, so, in this case, hitting that rerun button on a test suite, which I've absolutely done. But there's a lot of other situations in our work where, just sort of by default, the path of least resistance is the thing that's maybe less helpful for the team. STEPHANIE: Ooh, I noticed that you kind of reframed what I said. I was using the term, you know, the right thing, but you then reworded it into the helpful thing. And that actually gets me thinking about how these words are kind of subjective, right? What is helpful to someone could be different to what's helpful to someone else. And I'm kind of curious about your definition of the helpful thing. JOËL: Yeah, I mean, sometimes it's very easy to sort of bring absolutes and [inaudible 11:26] judgments to code, you know, when we talk about writing good code, and being good programmers, being good at our jobs, not doing the bad things. And I think that sort of absolutism sometimes can, like, be very restrictive, and kind of take us down paths that are not optimal for ourselves, for our teams, for our products. So, I'd like to think a little bit more relativistically, to have a little bit more elasticity in the way that we formulate some of these ideas. STEPHANIE: Yeah. I really like that reframing, and I appreciate the nuance there. I think for me, when I think of doing the helpful thing, I'm hoping to ease the day-to-day workflow for other developers because that also includes myself, right? Like, I've certainly been there feeling frustrated or just kind of tired of retrying [laughs] the test suite over and over again. I'm also thinking about helpful, as in what will be helpful for future developers regarding the product? And can we make it robust now so that we're not dealing with bug reports later for things that maybe we were trying to sweep under the rug or just kind of gloss over? Do you have any other guiding principles around what is helpful and what's not? JOËL: I think that the time horizon you mentioned is really interesting because you have to balance sometimes short-term value versus kind of long-term. And is it worth it to maybe not fix that flaky test today so that we can ship as soon as possible? Or is it worth investing a little bit of time today so that tomorrow or next week is better? If you're a solo developer on a tiny project, that might be of personal benefit only. But on a larger team, you know, that might benefit not just you but a larger amount of people. STEPHANIE: I just imagined the trolley problem [laughs] a little bit about, you know, the future developers and whose lives, not lives but whose happiness, the developer happiness you'll save [laughs] when you're on that track and making the decision about benefit now versus benefit later. JOËL: No connection to railway-oriented programming, by the way. STEPHANIE: I am really interested in also talking about barriers to doing the helpful thing or taking that extra step, right? Because I think identifying those barriers is really important to then, hopefully, break them down so that we are creating that path of least resistance.
JOËL: That's interesting that you mentioned these as barriers because I think, in my mind, I was thinking about the same idea but from, like, a completely mirror perspective, the idea of incentives. Why do incentives push you in one way versus the other? STEPHANIE: I like that a lot. I think maybe there are, like, two different levers, right? Or maybe they are two sides of the same coin, where you do have incentive, and then you also have things that disincentivize you. JOËL: This reminds me a little bit of the idea of the tragedy of the commons. So, in the case of flaky tests, everybody as a whole on your project has a worse experience. And project velocity slows when there are flaky tests. But you, as an individual developer, are incentivized to ship features quickly and efficiently. And the fastest way you can get your individual feature to production is by hitting that rerun button. Even though, collectively as a whole, every time we do that, instead of fixing the flakiness, we're adding a tiny, little bit of extra slowness that will accrete over time. STEPHANIE: Yeah. It's kind of difficult to imagine really the negative impact that it's having collectively, right? You're kind of like, oh, I'm feeling this pain, but you're not always, like, really hearing about it from others, and we might just be silently suffering together [laughs]. I do think that once it's been identified as like, oh, like, we're all actually, like, really impacted by this, okay, great, let's make it a priority. And so, now, let's say I am slightly incentivized to go and investigate the flakiness of a test rather than hit the rerun button this time around. I am wanting to talk about those barriers I was referring to a little bit because I've been in this position where I'm like, okay, like, I have some extra time today. So, why don't I look into this? But then I go down that path, and now I'm looking at a test written by someone many years ago, you know, I don't know this person, and I don't know this domain. I don't know who to talk to to figure out even where to start. I may or may not feel equipped with the right tools to be able to address it. And then, I think the biggest challenge is not feeling like it matters, right? Once I'm hitting this barrier, I'm like, is it worth the effort? At that point, maybe a little bit demoralized because, well, this is just one, and we have so many other flaky tests, like, what's one more? JOËL: That's really interesting that you mentioned that sort of morale factor because it's absolutely the case on every flaky test suite I've seen. I think that kind of points to almost, like, an exponential cost to ignoring the problem. If someone fixed it early, yeah, it's slightly annoying, but you get it done. You fix it, and then you move on. When it feels like there's now this insurmountable pile of these and that any work you do here doesn't bring you any closer to the goal because it's effectively infinite, yeah, now, there is no incentive at all to do that work. STEPHANIE: So, to avoid that, we talked about incentives, right? And I'm kind of curious: what ways have you seen or experienced that did make you feel motivated to take that extra step or at least try to avoid that point of thinking that nothing matters? [laughs] JOËL: So, I'm going to start with, I think, what's maybe a classic developer answer, and I'm curious to get your thoughts on it because I think I have very mixed feelings about this. 
The idea of programmer discipline—we just need to kind of take more pride in our craft and pursue excellence, choose to do the right thing, even when it's hard every day. Because I hear a lot of that in our communities. How do you feel about that sort of maybe a bit of a mindset change? And how effective has that been? STEPHANIE: Whoa, yeah, that's a really great point because I think I also feel quite conflicted about it. Because sometimes I can find it in myself to be, like, you know, I have the energy today to want to uphold, like, a certain level of quality that would make me feel good about doing my best work. And then, there are other days where I am, you know, just tired, [laughs] or feeling a little bit lazy, feeling just not confident that it will be worth it. Because there's also, I think, some external forces, right? I've certainly been in the position where we were only rewarded or celebrated for shipping fast. That was the praise we were getting at retros. But then that actually really disincentivized me from wanting to do the helpful thing when the time came, right? When I'm in my development process, and I'm at, like, that crossroads. Because I'm like, well, I've been trying to do the right thing, but, like, no one is celebrating me for it. What if it takes away time from doing the thing that is considered successful? And, like, does it have an impact on me and my job? So, you can, you know, kind of go down that spiral pretty quickly. And, in that case, like, no amount of personal, like, individual [laughs] feelings of responsibility can really overcome those consequences if, like, you're working on a team where that is just simply not valued, and there are other people who have authority to impart consequences, I suppose. JOËL: Yeah. It's, you know, you have that maybe some amount of personal motivation. You feel like you're swimming upstream. And so, maybe the question then is, how do we reorient maybe some of the incentive structures in a team to try to make it so that if you are, let's just say, chasing some of those extrinsic rewards and you're trying to get praise from your teammates, or move forward in your career, whatever it is that is rewarded and valued on your team, how can that be harnessed to push people in a direction to get some of these extra tasks done? STEPHANIE: Yeah. I will say, though, it is a little bit of both. And I think that was maybe why we're both kind of conflicted about it. Because on the individual level and, in general, knowing what my values are and, like, wanting to do good work and uphold quality, there are times where I don't always behave according to those values. Like I said, I am feeling tired, or maybe it's, like, almost 5:00 o'clock, and I just want to, you know, push my thing [laughs] so I can go and go on with my day. What has helped me is having an accountability buddy. Maybe it's in code review, or maybe it's when I'm pairing, or maybe just talking about a problem that we're facing, right? And I might get into that sort of lizard-brain mentality of just wanting to do the easy thing. But as soon as someone else points it out or is, like, you know, like, "That's not quite aligning with what I know your, like, hopes and goals are for this project or how you want to do your work," usually, that's enough to be like, "Yeah, you're right." [laughs] And I'll take another pass at it. 
JOËL: And I think we've kind of come back full circle here in that you mentioned that sort of the lizard brain side of you wants to just do the easy thing with the implication that, in this case, the easy thing is the thing that's maybe not useful to the team long term. What if we could restructure things a little bit so that the easy thing was the most beneficial thing for the team? And now you're not having to use discipline to fight the lizard brain side of you, but you're actually working with it. One thing I've seen teams do with flaky tests is to not necessarily fix them immediately. Like, maybe you do rerun them, but when they happen, create a ticket for them and put them into the planning board. And so, now, these are things that get prioritized. They're things that might be fairly quick to do. So, it might be a fun, like, fast ticket for someone to pick up at the end of the week. It now counts towards velocity if that's tracked, so people, hopefully, get rewarded for doing that work. Is that something that you've seen? And how effective do you think that is in maybe making fixing flaky tests an easier thing for a team? STEPHANIE: Yeah, that is actually something that we are doing on my team right now, and I think it's great. I like the idea of tracking it, right? Because then you could also see over time, like, whether we have been getting better at reducing flaky tests. And, you know, then it's also really clear when someone is preoccupied with that work, and they don't get assigned something else. One way that I've seen it not work as well as we hoped is when other work consistently gets prioritized over those flaky test tickets, and, you know, sometimes it happens. I think that's also information, though, about the team and how the team is spending its time compared to how the team thinks it should be ideally spending its time where, you know, we can say all we want, like, yes, like, we really want to make sure our test suite is robust. But when other things are consistently getting prioritized over it, then you can point to it and say, do we actually believe this if we're consistently not behaving according to that belief? I've found it challenging to have that conversation. But I do think that the concreteness of adding it to our workload for a given period of time is at least providing information rather than it being things that, like, developers are doing one-off or kind of just on their own time. JOËL: Another solution that I've seen people do, and this is a classic developer solution, is tooling. If you have better tooling around a particular problem, sometimes you can shift that cost, the time or work it takes, so that it's easier to do the best thing rather than to not...or at least make it easy enough that it's less of a big decision, like, oh, do I really want to invest that much work into something? And a classic example of this, I think, is when you're trying to get into more of a test-driven development workflow. Having a test suite that's fast and, really importantly, having a near-instant way to run a test from your code editor makes a big difference in terms of adopting that workflow. Because if the cost of running a test is too high, then yeah, the easy path is to just say, you know what? I'm not going to run a test. I'm going to just run it once at the end after I've written 100 lines of code or an entire feature. And now I am not getting the benefits of TDD. And that might kind of get into a negative cycle where because I'm not seeing the benefits, I do it even less.
And then eventually, I'm just, like, you know what? Forget this whole thing. STEPHANIE: I think that also applies to testing in general, where if it, you know, is feeling really challenging, I have definitely seen people start to get into that mindset of, is this worth my time to do at all? And it's a very slippery slope, I think. That almost makes me think about, like, okay, like, what are some other ways to lift that task up and to elevate it into something that's worthy of saying, like, "Hey, like, that was really hard, and you did a great job. And that was really awesome that you persevered through that challenging thing." Sharing the pain points is really important, not only, like I mentioned earlier, to, like, communicate that, oh, maybe, like, more people than you think are going through the same thing, but also to be able to identify when someone went out of their way to do the helpful thing and to see that someone was willing to do it because, for them, being helpful is important. When I see someone on my team take on kind of a difficult task to make things easier for others and then share about it, I feel really inspired because I think, wow, like, that could be me as well. You know, there's that saying that many hands make light work. And I also think that's true of tackling these kinds of barriers, where if we all feel this collective responsibility or, like, wanting to help out the group, it ends up literally being easier. JOËL: I think something I'm hearing here as well is the value of giving praise. If somebody goes out of their way to make life easier for the rest of your team, give them a shout-out, whether that's in your team's Slack channel, maybe it's at an all-hands meeting, and you shout out some work that they've done or, you know, you put their name on a slide in your slide deck at some point, or whatever the mechanism is within your team. Having a way to shout out people who've done some of this work that can be sometimes a little bit thankless is a great way to motivate seeing more of that. STEPHANIE: Yeah. I was just thinking that it can be really powerful because, like you said, a lot of it is thankless, but also, we may not even realize the impact it's having on others until you give that shout-out and express, like, "Wow, like, that change, you know, I've been bothered by this issue for so long and, you know, that really made an impact on me," just keeping that cycle of gratitude going. JOËL: So, I think we've kind of identified three maybe main areas of ways where you can help to incentivize these behaviors. You can do it through kind of process. We talked about the example of pulling the flaky tests as actual cards to be worked through on a board. It can be technical by introducing some tooling that makes it much easier to do the work that you're trying to do. And it can be personal by praising people, preferably in public, for taking that extra step. And I think all three of those can be part of a strategy to make it easier or more attractive for people to do work that benefits the team as a whole, even if they don't see an immediate return on that on a per-day level. STEPHANIE: Yeah. I like that we kind of talked about these three different categories. And people in all different types of roles can, hopefully, take something from what we've shared, right? If you are a manager or leader of a team, maybe you can investigate your processes.
If you're an individual contributor and you notice your colleague doing something that you, you know, kept meaning to but just didn't have time for, recognize that work. It really does take a holistic approach, but I think an impact can be made at every level. JOËL: Agreed. I think for managers and more senior team members, that's really almost, like, part of their job description is to think about these kinds of things. How can we incentivize this work? How can we shift the team in a particular direction? There's a particular onus on them to do this right, to think about this, to model some of this behavior. STEPHANIE: Yeah, absolutely. JOËL: For someone who's really senior on a team also, they're often the ones who are tasked or who maybe take the initiative to build some of this more complex tooling so that these tasks are easier for more junior people. Maybe that's tinkering with some things and building an editor plugin that makes it easier to do some work. Maybe it's building a Rails generator so that the proper files get generated that maybe people wouldn't think to have when they're building certain work. Maybe it's building an RSpec matcher to make it easier for people to test some of the nuances of what we're hoping to do, catching some of these edge cases. Whatever it is, sometimes there are things that the more junior members of our teams aren't aware of, and having a senior person take time out of their day to build these things so that now the entire team can be more productive can be a really helpful thing to do. STEPHANIE: Yeah, that's a great point. And I think that also comes from having a pulse on what people are struggling with, right? So, you know, oh, it would be good to invest my energy into building a script to make this manual process easier because I keep hearing about people having issues with it or it being a challenge. So, I would even recommend posing the question of, like, how do people feel about being able to fix that flaky test, right? Like, is it intimidating? What are those barriers? Because your team knows best about what that experience is like. And if that is not something on your radar, maybe there are opportunities to incorporate it into where you're evaluating team morale and happiness. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
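As one concrete version of the tooling Joël describes senior developers building, a custom RSpec matcher can package a fiddly team-wide assertion behind a readable name. This sketch is hypothetical; the matcher and the models it's used on are invented, not taken from the episode:

# spec/support/matchers/be_sorted_by.rb
RSpec::Matchers.define :be_sorted_by do |attribute|
  match do |records|
    values = records.map { |record| record.public_send(attribute) }
    values == values.sort
  end

  failure_message do |records|
    actual = records.map { |record| record.public_send(attribute) }
    "expected records to be sorted by #{attribute}, but got #{actual.inspect}"
  end
end

# Usage in a spec (Post and .recent are illustrative):
#   expect(Post.recent).to be_sorted_by(:published_at)

Once a matcher like this exists, the easy path for every spec on the team is also the expressive one, which is the whole point of making the right thing easy.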

The Bike Shed
397: Dependency Graphs

Aug 15, 2023 42:53


Stephanie is consciously trying to make meetings better for herself by limiting distractions. A few episodes ago, Joël talked about a frustrating bug he was chasing down and couldn't get closure on, so he had to move on. This week, that bug popped up again and he chased it down! AND he got to use binary search to find its source, which was pretty cool! Together, Stephanie and Joël discuss dependency graphs as a mental model, and while they apply to code, they also help when it comes to planning tasks and systems. They talk about coupling, cycles, re-structuring, and visualizations. Ruby Graph Library (https://github.com/monora/rgl) Graphviz (https://graphviz.org/) Using a Dependency Graph to Visualize RSpec let (https://thoughtbot.com/blog/using-a-dependency-graph-to-visualize-rspec-let) Mermaid.js (https://mermaid.js.org/) Strangler Fig pattern (https://martinfowler.com/bliki/StranglerFigApplication.html) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I'm always trying to make meetings better for me [chuckles], more tolerable or more enjoyable. And in meetings a lot, I find myself getting distracted when I don't necessarily want to be. You know, oftentimes, I really do want to try to pay attention to just what I'm doing in that meeting in the moment. In fact, just now, I was thinking about the little tidbit I had shared on a previous episode about priorities, where really, you know, you can only have one priority [laughs] at a time. And so, in that moment, hopefully, my priority is the meeting that I'm in. But, you know, I find myself, like, accidentally opening Slack or, like, oh, was I running the test suite just a few minutes before the meeting started? Let me just go check on that really quick. And, oh no, there's a failure, oh God, that red is really, you know, drawing my eye. And, like, could I just debug it really quick and get that satisfying green so then I can pay attention to the meeting? And so on and so forth. I'm sure I'm not alone in this [laughs]. And I end up not giving the meeting my full attention, even though I want to be, even though I should be. So, one thing that I started doing about a year ago is origami. [laughs] And that ended up being a thing that I would do with my hands during meetings so that I wasn't using my mouse, using my keyboard, and just, like, looking at other stuff in the remote meeting world that I live in. So, I started with paper stars, made many, many paper stars, [laughs] and then, I graduated to paper cranes. [laughs] And so, that's been my origami craft of choice lately. And now, I have little cranes everywhere around the house. I've kind of created a little paper crane army. And my partner has enjoyed putting them in random places around the house for me [laughs] to find. So, maybe I'll open a cabinet, and suddenly, [laughs] a paper crane is just there. And I think I realized that I've actually gotten quite good at doing these crafts. And it's been interesting to kind of be putting in the hours of doing this craft but also not be investing time, like, outside of meetings. And I'm finding that I'm getting better at this thing, so that seemed pretty cool.
And it is mindless enough that I'm mentally just paying attention but, yeah, like, building that muscle memory and perfecting the craft of origami. JOËL: I'm curious, for your army of paper cranes, is there a standard size that you make, or do you have, like, a variety of sizes? STEPHANIE: I have this huge stack of, like, 500 sheets of origami paper that are all the same size. So, they're all about, let's say, two or three inches across. But I think the tiny ones I've seen, really small paper cranes, maybe that would be, like, the next level to tackle because working with smaller paper seems, you know, even more challenging. JOËL: I'd imagine the ratio of, like, paper thickness to the size of the thing that you're making is different. STEPHANIE: At this point, they say that if you make 1,000, then it brings good luck. I think I'm well on my way [laughs] to hopefully being blessed with good luck in this household of my little paper crane army. JOËL: It's interesting that you mentioned the power of having something tactile to do with your hands during a meeting, and I definitely relate to that. I feel like it's so easy, even, like, mindlessly, to just hit Command-Tab when I'm doing things on a screen. Like, my hands are on the keyboard. If I'm not doing something, I'm just going to mindlessly hit Command-Tab. It's kind of like on your phone sometimes. I don't know if you do this, like, just scrolling side to side. You're not actually doing anything. You just want motion with your fingers. STEPHANIE: Yes. I know exactly what you're talking about. And it's funny because it's a bit of a duality where, you know, when you are in your development workflow, you want things to be as quick and convenient as possible, so that Command-Tab, you know, is very easy. It's just built in, and that helps speed up your, you know, day-to-day work. But then it's also that little bit of mindlessness, I think, that can get you down the distraction path. When I was first looking for something to do with my hands, to have, like, a little tactile thing to keep me focused in meetings, I did explore getting one of those fidget cubes, I have to say. [laughs] It's just a little toy, you know, that comes with a bunch of different settings for you to fidget with. There's, like, a ball you can roll, you know, with your thumb, or maybe some buttons to click, and it gives you that really satisfying tactile experience. And I know they work really well for a lot of people, but I've really enjoyed the, I guess, the unexpected benefits [chuckles] of getting better at a hobby [laughs] while spending my time at work. Joël, what is new with you? JOËL: So, a few episodes ago, I talked about a really kind of frustrating bug that I was chasing down that was due to some, like, non-determinism in the environment. And it kind of came, and then it went away. And I wasn't able to get sort of closure on that and had to move on. Well, this week, that bug popped up again, and this time, I was actually able to chase it down. So, that felt really exciting. And I got to use binary search to try to find the source of it, which made me feel really cool. STEPHANIE: Oooh, do tell. What ended up being the issue?
I figured out that it's probably something to do with, like, a particular table name or something in the table metadata when we're pulling this schema that we're not happy about. But I don't know which table is the one that it's not happy with. Well, this time, I was able to figure out, by reading through some of the documentation, that I can pull subsets of the schema. So, I can pull the first n values of that schema, and it won't crash. It only crashes if I try to fetch the entire set, which is what is happening under the hood. At that point, you know, I could fetch each row individually, but there's hundreds of these. So, you know, I try, okay, what happens if I try to fetch 1,000 of these? Is it going to crash? Because it's a massive system. So, yes, I get a crash. So, I know that a table within the first thousand in the list of tables is what's causing the problems. So, okay, fetch 500 halfway in between there. It's still going to crash. Okay, 250, 125. I then kind of keep halving all the time until I find one that doesn't crash. And now I know that it is somewhere between the last crash and this one. So, I think it was between 125 and 250. And now I can say, okay, well, let's fetch the first, you know, maybe 200 tables, okay, that crashes. And I keep halving that space until I finally find it. And then, like, okay, so it's this one right here. Now, the problem is fetching the bad table itself actually crashes. So, I think it ended up being, like, number 175 or something like that. So, I never get to see the actual table itself. But because the list of tables is in alphabetical order, and because I can fetch the first 174 and it succeeds, I can tell what those previous 174 are. I can pretty easily go and look at the actual database and the list of tables and say, okay, well, it's in the same order. And the next one is this one, and hey, look, there is some metadata there that has some very long fields that are longer than one might expect, specifically going over a potentially implied 256-character limit. That seems somewhat suspicious. And, oh, if we remove this table, all of a sudden, everything works. STEPHANIE: Wow, binary search, an excellent debugging tool [laughs] when you have no idea, you know, what could possibly be causing your issue. JOËL: It's such a cool tool. Like, I'm always so happy when I get a chance to use it. The problem is, you need a way to be able to answer the question, like, have I found it? Yes or no? Or, generally, is it greater or less than this current position? STEPHANIE: Well, that's really exciting that you ended up figuring out how to solve the bug. I know last time we talked about it, you kind of had left off in a space of, hopefully, we won't run into this issue again because it's no longer happening. But it seems like you were also set up this time around to be able to debug once it cropped up again. JOËL: Yes. So, binary search is really cool. It's got this, like, very, like, fancy computer science name. But in reality, it's a fairly simple, straightforward technique that I use fairly frequently in my development. And there's another kind of computer sciency fancy-sounding concept that I use all the time. You've all heard me reference this multiple times on the show. You're right; we're finally doing it. This is the dependency graph episode. STEPHANIE: Woo. [laughter] It's time.
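For anyone who wants to see the shape of that halving search in code, here is a minimal sketch. The fetch_tables helper and the rescued error class are hypothetical stand-ins for the real ODBC schema call and whatever it raises:

```ruby
# Returns true if fetching the first `count` tables of the schema crashes.
# `fetch_tables` is a hypothetical stand-in for the real ODBC call.
def crashes?(count)
  fetch_tables(count: count)
  false
rescue StandardError # or the driver's specific error class
  true
end

# Binary search for the first prefix length that crashes. Because the
# table list is alphabetical, the bad table sits at that position.
def first_bad_table_position(total_tables)
  low = 0              # fetching zero tables always succeeds
  high = total_tables  # fetching everything is known to crash

  # Invariant: fetching `low` tables succeeds; fetching `high` crashes.
  while high - low > 1
    mid = (low + high) / 2
    if crashes?(mid)
      high = mid
    else
      low = mid
    end
  end

  high
end
```

Each iteration halves the search space, which is how nearly a thousand tables get narrowed down to table number 175 in about ten fetches.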
I'm excited to really dig into it because, you know, as someone who has heard you talk about it a lot, you know, and is maybe a little less familiar with graph theory and how, you know, it can be applied to my day to day work, I'm really excited to dig into a little bit about, you know, what a regular developer needs to know about dependency graphs to add to their toolbox of skills. JOËL: So, I think at its core, the idea of a dependency graph is that you have a group of entities, some of which depend on each other. They can't do a task, or they can't be created unless some other subtasks or dependent actions take place. And so, we have a sort of formal structural way of describing these things. Visually, we often draw these things out where each of the pieces is like a little bubble or a circle, and then we draw arrows towards the things that it depends on. So, if A cannot be done without B being done first, we draw an arrow from A to B. That's kind of how it is in the abstract. More concretely, this kind of thing shows up constantly throughout the work that we do because a lot of what we do as developers is managing things that are connected to each other or that depend on each other. We build complex systems out of smaller components that all rely on each other. STEPHANIE: Yeah, I think it's interesting because I use the word dependency, you know, very frequently when talking about normal work that I'm doing, you know, dependencies as in libraries, right? That we've pulled into our application, or dependencies, like, talking about other classes that are referenced in this class that I'm working in. And I never really thought about what could be explored further or, like, what could be learned from really digging into those connections. JOËL: It's a really powerful mental model. And, like you said, dependencies exist all over our work, and we often use that word. So, you mentioned something like packages, where your application depends on Rails, which in turn depends on ActiveRecord, which in turn depends on a bunch of other things. And so, you've got this whole chain of maybe immediate dependencies, and then those dependencies have dependencies, and those dependencies have dependencies, and it kind of, like, grows outward from there. And in a very kind of simplistic model, you might think, oh, well, it's more, like, a kind of a tree structure. But oftentimes, you'll have things like branches on one side that connect back to branches on the other. And now you've got something that's no longer really tree-like. It's more of a sort of interconnected web, and that is a graph. STEPHANIE: I think understanding the dependencies of your system has also become more important to me as I learn about things that can go wrong when I don't know enough about what my system is, you know, relying on that I had kind of taken for granted previously. I'm especially thinking about packages like we were mentioning, and, you know, not realizing that your application is dependent on this other library, right? That's brought in by a gem that you're using. And there's maybe, like, a security issue, right? With that. And suddenly, you have this problem on your hands that you didn't realize before. And I know that that has been more of a common discussion now in terms of security practices, just being more aware of all the things that you are depending on as really our work becomes more and more interconnected with the things available to us with open source. 
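To make that web concrete, a dependency graph is commonly represented as an adjacency list: each entity maps to the things it depends on. A toy Ruby sketch, using a simplified subset of real gem names:

```ruby
# Each key depends on the entries in its array (arrows point rightward).
DEPENDENCIES = {
  "my_app"        => ["rails", "pg"],
  "rails"         => ["activerecord", "actionpack"],
  "activerecord"  => ["activesupport"],
  "actionpack"    => ["activesupport"],
  "activesupport" => [],
}.freeze

# Everything `name` transitively depends on, walking the graph.
def transitive_deps(name, graph = DEPENDENCIES, seen = [])
  graph.fetch(name, []).each do |dep|
    next if seen.include?(dep)
    seen << dep
    transitive_deps(dep, graph, seen)
  end
  seen
end

transitive_deps("my_app")
# => ["rails", "activerecord", "activesupport", "actionpack", "pg"]
```

Note the diamond shape: both activerecord and actionpack point at activesupport, which is what makes this a graph rather than a tree.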
JOËL: I think where understanding the graph-like nature of this becomes really important is when you're doing something like an upgrade. So, let's say you do have a gem that has a security problem, and you want to upgrade it to fix that security issue. But the upgrade that includes the security patch is also a breaking upgrade. And so, now everything else in your system that depends on that gem or on that package is going to break unless you have them in a version that is compatible with the new version of that gem. And so, you might have to then go downstream and upgrade those packages in a way that's compatible with your app before you can bring in the security patch. And a lot of that can be done automatically by Bundler. Bundler is software that is built around navigating dependency graphs like that and finding versions that are compatible with each other. But sometimes, your code will need to change in order to upgrade one of these downstream gems so that you can then pull in the upgrade from the gem that needs a security patch. And so, understanding a little bit of that graph is going to be important to safely upgrading that gem. STEPHANIE: So, I know another application of dependency graphs that you have thought about and written a blog post for is RSpec let declarations and how a lot of the time when we are using let, you know, we are likely calling other variables defined by let. And so, when you are encountering a test file, it can be really hard to grok what data is being set up in your test. JOËL: Yeah, so that is really interesting because you can define something that will get executed in a lazy fashion if it gets referenced. But then not only is the let lazy and will not trigger unless it's referenced, but a let can reference other lets, which are also lazy, and only get triggered if they get referenced. So, you might have a bunch of lets defined in any order you want throughout a file, and they're all kind of interconnected with these references to each other. But they only get triggered if something calls it directly or it's in this, like, chain of dependencies. And getting a grasp on what actually gets created, which lets will actually execute, which ones don't in a file can quickly get out of hand. And so, thinking of this in terms of a dependency graph has been a really helpful mental model for me to understand what's going on in a complex test file. STEPHANIE: Yeah, absolutely. Especially when sometimes the lets are coming from all over the place, you know, maybe a describe block hundreds of lines away, or even a completely different file if you are using a shared context that's being pulled in. So, I can see why this was a complex problem that could be made a little simpler with plotting out a dependency graph. And in preparation for this episode, I was doing a little bit of my own exploration on this because I certainly know, you know, the pain of trying to figure out what is being executed in my tests when there are a lot of lets that reference each other. And in the blog post, you kind of gave a little step-by-step of how you could start with creating a dependency graph for the test that you're working with. And I was really curious if this process could be automated because, you know, I do enjoy, you know, pulling out the pen and paper [chuckles] every now and then. But I'm not, like, a particularly visual person. God forbid I, like, draw a circle, but then, like, don't have enough space for the rest of the circles. 
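To make the lazy chaining concrete, here's a tiny sketch, with hypothetical model names and FactoryBot-style factories assumed:

```ruby
RSpec.describe Order do
  let(:user)    { create(:user) }
  let(:account) { create(:account, owner: user) }   # referencing this also triggers `user`
  let(:order)   { create(:order, account: account) }
  let(:coupon)  { create(:coupon) }                  # defined, but never referenced below

  it "belongs to the account's owner" do
    # Referencing `order` triggers `account`, which triggers `user`.
    # `coupon` is never referenced, so that record is never created.
    expect(order.account.owner).to eq(user)
  end
end
```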
[laughs] So, I was really hoping for a tool that could do this for me, especially if, you know, you have a lot of tests that you have to try to understand in a relatively short amount of time. And so, I ended up doing something kind of hacky with RSpec and overriding let definitions to automate this process. JOËL: That's really cool. So, is the tool that you're trying to build something where you feed it in a spec file, and it gives you some kind of graphical representation like an SVG or something as output? STEPHANIE: Yeah. I did consider that approach first, where you feed in the file, but then I ended up going with something more dynamic where you are running the test, and then as it gets executed, tracing the let definitions and then registering them to build your dependency graph. JOËL: So, you've got some sort of internal modeling that describes a dependency graph. And then, somehow, you're going to turn that, you know, a series of Ruby objects into some kind of visual. STEPHANIE: Yeah, exactly. And the bulk of that work was actually done with a library called RGL, which stands for just Ruby Graph Library. [laughs] And what's nice is that it has a really easy interface for plugging in the vertices and edges of the dependency graph that you want to build. And then, it is already hooked up with Graphviz to, you know, write the SVG to a file. And so, I ended up really just having to build up an array of my dependencies and the connections to each other and then feed it into the constructor of the graph. JOËL: And for all of our listeners, you mentioned Graphviz. That is a third-party tool that can be installed on your machine that can generate these SVG diagrams from...I believe it has its own sort of syntax. So, you create, I believe it's dot, D-O-T, so a .dot file. And based off of that, it generates all sorts of things, but SVG being potentially one of them. STEPHANIE: Yeah. The nice thing was that I actually didn't end up having to use the DSL of Graphviz because the RGL gem was doing that for me. JOËL: Nice. So, it plugs in directly. STEPHANIE: Yeah, exactly. And I was really curious about using this gem because I, you know, just wanted to write Ruby, especially to plug into other things that are already in Ruby. And I found that surprisingly easy, thanks to all of the RSpec config options that they make available to you, including an option to extend the example group class, which is actually where let and let bang are defined. And so, I ended up overriding those methods and using, you know, the name of the let that you're defining and then the block to basically register the dependencies. And I also ended up exploring a little bit with using Ruby's built-in parser to figure out, in the block that's being passed to the let, what parts of that block could potentially be a reference to another let. JOËL: That's really cool. Did you get any fun results from that? STEPHANIE: I did. It worked pretty well in being able to capture all of the let declarations and the other lets they reference. And so, I was able to successfully, you know, like, generate a visual dependency graph of all of the lets, so that was really neat. The part that I was really kind of excited about trying next, though I didn't end up having time to yet, was figuring out which of those let values are executed, by way of the let bang, right? Which is eager, or what is referenced in the test that then gets executed as well.
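Stephanie's actual tool isn't shown here, but a rough sketch of the approach she describes (wrapping let to record edges, then handing them to RGL) might look something like this. The module name, the naive source-scanning, and the exact hook points are all assumptions:

```ruby
require "rgl/adjacency"
require "rgl/dot"       # adds write_to_graphic_file, which shells out to Graphviz
require "method_source" # lets us read a block's source back out

module LetTracer
  EDGES = []

  # Wrap RSpec's `let`, recording an edge whenever this block's source
  # mentions a let we've already seen in this group. A real version
  # would lean on Ruby's parser rather than a regex scan.
  def let(name, &block)
    known_lets.each do |other|
      EDGES << [name, other] if block.source.match?(/\b#{other}\b/)
    end
    known_lets << name
    super
  end

  def known_lets
    @known_lets ||= []
  end

  def self.write_graph
    graph = RGL::DirectedAdjacencyGraph.new
    EDGES.each { |from, to| graph.add_edge(from, to) }
    graph.write_to_graphic_file("svg", "let_graph") # writes let_graph.svg
  end
end

RSpec.configure do |config|
  config.extend LetTracer # puts the wrapped `let` onto example groups
  config.after(:suite) { LetTracer.write_graph }
end
```

Tracing let bang, and distinguishing eager from lazy evaluation, would layer on top of the same idea.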
And so, the RGL library is pretty neat and has some formatting options, too, with the Graphviz output. So, you can change the font color or styling options for different, you know, nodes and edges. And so, I was really curious to pursue this further, maybe, and use it to show exactly what gets evaluated now that I have successfully mapped my let graph. JOËL: Right. Because the whole point of this exercise is that not the entire graph is going to get evaluated. The underlying question is, what data actually gets created when my test runs? And so, you build out this whole dependency graph, and then you can follow a few simple rules to say, okay, this branch gets called, this branch gets called, this series of things gets called. And okay, this subset of let blocks trigger, and therefore this data has been created for my given test. STEPHANIE: Yeah. Though I will say that even where I got to so far, just seeing all of the let definitions in a spec file was really helpful to have a better understanding, you know, if I do have to add a test in here, and I'm thinking about reaching for a pre-existing let declaration, to be like, oh, it actually, you know, goes on to reference all of these other things that may be factories [chuckles] that get created, and that might make me, you know, think twice, or just have a little better understanding of what I'm really dealing with. JOËL: Right. The idea that when you're calling out to a let, or a factory, or something else that's just a node in a large graph, you're not necessarily referencing just one thing. You might actually be referencing the head of a very long chain of things, and maybe you don't intend to trigger the whole thing. STEPHANIE: Yeah, exactly. JOËL: So, in that sense, having a sort of visual or at least an idea of the graph can give you a much better sense of the cost of certain operations that you might have to do. STEPHANIE: The cost of the operations certainly, especially when, you know, you are working in a legacy codebase, and you, you know, like, maybe don't know how everything plays together or is connected. And it's very tempting to just reach for [chuckles] the things that have been, you know, created or built for you. And I'm certainly guilty of that sometimes on this client project, where the domain is so complex, and there are so many associated models. And I'm like, well, like, let me just, you know, use this let that already, you know, has a factory set up for what I think I need for this test. But then realizing, oh, actually, like, it is creating all these things, and do I really need them? I think it can be really challenging to unravel all of that in your head. And so, with this very scrappy tool that I [chuckles] built for my own purposes, you know, maybe it makes it, like, one step easier to try to fully understand what I'm working with and maybe do something different. JOËL: One aspect that I think is really powerful about dependency graphs is that it takes this kind of, like, abstract concept that we oftentimes have an intuitive sense around, the idea that we have different components that depend on each other, and it shows it to us visually on, like, a 2D plane. And that can be really helpful to get an understanding or an overview of a system. You mentioned that RGL uses Graphviz to generate some SVGs. A visual tool that I've been using to draw some of my dependency graphs has been mermaid.js. It has a syntax that's, like, a text-based syntax, but it's almost visual in that you have a piece of text and name of a node.
And then, you'll draw a little ASCII arrow, you know, two dashes and a greater than sign to say this thing depends on, and then write another name, and just have a row, like, a bunch of entries to say: A depends on B. A also depends on C. C depends on D, and so on, and, like, build up that list. And then Mermaid will just generate that diagram for you. STEPHANIE: Yeah. I've used Mermaid a few times. One really helpful use that I had for it was diagramming out a bunch of React components that I had and wanting to understand the connections between them. And I think you can even paste the Mermaid syntax into your GitHub pull request description, and it'll render as the graph image. JOËL: Yeah, that's what's really cool is that Mermaid syntax has become embedded in a lot of other places in the past few years. So, it's really easy to embed graphs now into all sorts of things. You mentioned GitHub. It works in pull request descriptions, comments, I think pretty much anywhere that Markdown is accepted. So, you could put one in your README if you wanted. Another place that I use a lot, Obsidian, my note-taking tool, allows me to embed graphs directly in there, which is really much nicer than before; previously, when I wanted to express something as a visual, I would use some sort of drawing tool, export an image, and then embed that in my note. But now I can just put in this text, and it will automatically render that as a diagram. And part of what's really nice about that is that then it's really easy for me to go and change that if I'm like, oh, but actually, I want to add one more connection in here. I don't have to go back to, hopefully, a file that I've saved somewhere and, like, change an image file and re-export it. I just, you know, I add one line of text to my note, and it just works. STEPHANIE: That's awesome. Yeah, the ability to change it seems really useful. So, we've talked a little bit about tools for creating a visual aid for understanding our dependencies. And now that we have our graph, maybe we might have some concerning observations about what we see, especially when perhaps some of our dependencies are pointing back to each other. JOËL: Yes. So, I think you're referencing cycles, in particular. That would be the formal term for it. And those are really interesting. They happen in dependency graphs. And I would say, in many cases, they can be a bit of a smell. There's definitely situations where they're fine. But there are things that you look at, and you're like, okay, this is going to be a more complex kind of tricky bit of the graph to work with. In some cases, you just straight up can't have them. So, I want to say that the way RSpec lets are set up, you cannot write code that produces cycles. But you might have...I think Ruby allows classes to reference each other in such a way that it creates a cycle, and not all languages do that. So, Elm and F#, I believe, require that modules cannot reference each other. The fancy term for this is a directed acyclic graph, or DAG, which basically just means that there are no cycles in that graph. STEPHANIE: Yeah. What you said about classes referencing each other is very interesting because I've definitely seen that. And then, if I have to go about changing something, maybe even it's just the class name, right? Now there's no way in which I can really make just one change. I have to kind of do it all in one go.
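For a concrete picture, a Mermaid definition like the ones Joël describes, including the kind of mutual reference Stephanie just mentioned (names are hypothetical), looks like:

```mermaid
graph TD
  A --> B
  A --> C
  C --> D
  %% a cycle: these two classes reference each other,
  %% so neither can change or ship without the other
  Invoice --> InvoiceMailer
  InvoiceMailer --> Invoice
```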
JOËL: I think that's a common property of a cycle, and a graph is that changes that happen somewhere in that cycle often need to be all shipped together as one piece. You can't break it up into smaller chunks because everything depends on everything else. So, it has to be kind of boxed together and shipped as one thing. STEPHANIE: And you'd mentioned that cycles, you know, can be a bit of a code smell. And if the goal is to be able to break it up so that it is a little bit more manageable to work with, how would you go about breaking a cycle? JOËL: So, I think breaking a cycle is going to vary a little bit based on your problem domain. So, are you modeling a series of classes that are referencing each other? Is this a function call graph? Is this even, like, a series of tasks that you're trying to do? But typically, what you want to do is make sure that eventually, at some point, like, something doesn't loop back to referencing something higher up in your hierarchy. And so, oftentimes, it ends up being about what is allowed to know about what? Do you have higher-level concepts that can know and depend on lower-level concepts but not vice versa? And again, we are talking about this a little bit at the abstract level. But in terms of, let's say, different code modules, or classes, or something like that, commonly, you might say, well, we want some sort of layering where we have almost, like, more primitive types of classes at the bottom. And they don't get to know about anything above them. But the ones above that might be more complex that are composed of smaller pieces know about the ones below them. And you might have multiple layers kind of like that that all kind of point down, but nothing points up. STEPHANIE: That is a very common heuristic. [chuckles] I think you were basically just describing how I also understand creating React components, where you want to separate your presentational ones from your functional ones. And, yeah, it makes a lot of sense that as soon as you start adding that complexity of, you know, those primitive classes at the bottom, starting to, you know, point to things higher up or to know about things higher up, that is where a cycle may be accidentally introduced. JOËL: It's interesting just how many design principles that we have in software. If you dig into them a little bit, you find out that they're about decoupling things, and oftentimes, it's specifically breaking up cycles. So, one way that you might have something like this that actually has dependency in the name, the dependency inversion principle, where what you're effectively doing is you're taking one of those dependency arrows, and you're flipping it the other way. So, instead of A depending on B, you're flipping it. Now B depends on A, and that can be enough to break a cycle. STEPHANIE: So, one thing I've picked up from our conversations about dependency graphs is that oftentimes, you know, when you're trying to figure out where to start, you want to look for those areas or those nodes where there's nothing else that depends on it. JOËL: Yeah. I think you have those nodes that, if this were a tree, you would call them the leaf nodes. In the case of a graph, I'm not sure if that's technically correct, but they don't depend on anything. They're kind of your base case. And so, you can, you know, if it's a function, you can run it. If it's a file, you can load it; if it's a class, also you can load it up and not have to do anything else because it has no dependencies. 
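A small sketch of the dependency inversion Joël mentioned a moment ago, with hypothetical classes. Before the inversion, Sync and Notifier would each reference the other by name; after it, Notifier only knows about "anything with a #status method", so the arrow points one way:

```ruby
# Notifier no longer names a concrete class. It depends on a duck type
# (any object responding to #status) that gets passed in.
class Notifier
  def initialize(status_source)
    @status_source = status_source
  end

  def notify
    puts "Current status: #{@status_source.status}"
  end
end

class Sync
  def status
    "running"
  end

  def run
    # Sync -> Notifier, but Notifier no longer points back at Sync,
    # so the cycle is broken.
    Notifier.new(self).notify
  end
end

Sync.new.run # prints "Current status: running"
```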
And knowing that those dependency-free nodes are there, I think, can be really useful in terms of knowing an order you might want to execute something in. And this is really interesting for one of my favorite uses of a graph, which is breaking down a series of tasks that you need to do. So, commonly, you might say, okay, I have a large task I need to do. I break it down into a series of subtasks. And, you know, maybe I draw out, like, a bulleted list and, you know, task 1, 2, 3, 4, 5. The problem is that they're not necessarily just a flat list. They all have, like, orders, like dependencies between each other. So, maybe one has to happen before 2, but it also has to happen before 3, which needs to happen before 2, and, like, there's all these interconnections. And then, you find out that you can't ship them independently the way you thought initially. So, by building up a graph, you end up with something that shows you exactly what depends on what. And then, like you said, the parts that are really interesting where you can start doing work are the ones that have no dependencies themselves. Other things might depend on them, but they have no dependencies. Therefore, they can be safely built, shipped, deployed to production, and they can be done independently of the other subtasks. STEPHANIE: Yeah. I was also thinking about things that could be done in parallel as well. So, if you do have multiple of those items with no dependencies, like, that is a really good way to be able to break up that work and, yeah, identify things that are not blocked. JOËL: For a complex set of tasks, it's great to see, okay, these two pieces have no dependencies. We can have them be done in parallel, shipped independently. And then you can just kind of keep repeating that process. Because once all of the tasks that have no dependencies have been done, well, you can almost, like, remove them from the graph and see, okay, what's the new set of things that have no dependencies? And then, keep doing that until you've eventually done the whole graph. And that may sound like, oh okay, we're just kind of using a little bit of intuition and working through the graph. It turns out that this is a, like, actual, like, formal thing. When it comes to graphs, there's a traversal algorithm called topological sort, that's the fancy name for it, and it basically, yeah, it goes through that. It gives you a list of nodes in order where each node that you're given has no dependencies that have not been evaluated yet. So, it works, effectively, to use our tree terminology, from the leaf nodes to the root, potentially roots plural, of the graph, and each step is independent. So that's a lot of, like, fancy terminology, and getting a little bit of, like, computer science graph theory into here. So, my, like, general heuristic is that graphs should be evaluated from the bottom up when you're trying to evaluate each piece independently. So, when you do that, you get to do each piece independently, as opposed to if you're evaluating from the top down. So, starting from the one thing that depends on everything else, well, it can't be shipped until all of its dependencies have been shipped. And all the transitive dependencies can't be shipped until their dependencies have been shipped. And so, you end up being not able to ship anything until you've built the entire graph. And that's when you end up with, you know, a 2,000-line PR that took you multiple weeks and might be buggy. And it's going to take a long time to review. And it's just not what anybody wants.
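The RGL gem mentioned earlier can do this traversal directly. A sketch with hypothetical task names, where an arrow points from a task to the thing it depends on:

```ruby
require "rgl/adjacency"
require "rgl/topsort"

graph = RGL::DirectedAdjacencyGraph.new
graph.add_edge(:ship_feature, :update_api_client)
graph.add_edge(:ship_feature, :add_db_column)
graph.add_edge(:update_api_client, :add_db_column)

# A topological sort yields each vertex before the vertices it points to.
# With "depends on" arrows, reversing it gives a safe execution order:
# dependency-free tasks first, the overarching task last.
graph.topsort_iterator.to_a.reverse
# => [:add_db_column, :update_api_client, :ship_feature]
```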
STEPHANIE: I'm glad you brought this up because I think this is an area where I am really curious to get better because oftentimes, when I am breaking down a complex task, it's quite hard for me to see all of the steps that need to happen. And so, you know, you maybe start out with that, like, top-level node, like, the task that needs to be done as you understand it immediately. And it's really hard to actually identify the dependencies and, like, the smaller pieces along the way. And because you're not able to identify that, you think that you do have to just do it all in one go. JOËL: Yeah, that sort of root node is typically the overarching task, the goal of what you want to do. And a common, I think, scenario for something like this would be, let's say, you're doing a Rails upgrade. And so, that root node is upgrade Rails. And a common thing that you might want to do is say, okay, let's go to the Gemfile, upgrade Rails, see what breaks, and then just keep fixing those things. That's working from the top down. And you're going to be in a long-running branch, and you're going to keep fixing things, fixing things, fixing things until you have found all the things and done all the things. And then you do a big bang upgrade that may have taken you weeks. As opposed to if you're working from the bottom up, you try to figure out, okay, what are all the subtasks? And that might take some exploration. You might not know upfront. But then you might say, okay, here, I can upgrade RSpec, which is a dependency, or I need to change the interface of this class, and ship all these pieces one at a time. And then, the final step is flipping that upgrade in the Gemfile, saying, okay, now I've upgraded Rails from 4 to 5, or whatever the version is that you're trying to do. STEPHANIE: I think you've really hit the nail on the head when it comes to trying to do something but not knowing what subtasks may compose of it and getting into that problem of, you know, having not broken it down, like, enough to really see all the dependencies. And, you know, maybe this is a conversation [chuckles] for another episode, but the skill of breaking up those tasks and exploring what those dependencies are, and being able to figure them out upfront before you start to just do that upgrade and then see what happens, that's definitely an area that I want to keep investing in. And I'm sure other people would be really curious about, too, to help them make their jobs easier. JOËL: I think one tip that I've learned that's really fun and that connects into all of this is sometimes you do end up with a cycle in your dependencies of tasks. A technique for breaking that up is a pattern that I have pitched multiple times on the show: the strangler fig pattern. And part of why it's so powerful is that it allows you to work incrementally by breaking up some of these cycles in your dependency graph. And one of the lessons that I've learned from that is that just because you have sort of an initial set of subtasks and you have a graph of them doesn't mean that you can't change them. If you're following strangler fig, what you're actually doing is introducing one or more new subtasks to that graph. But the way you introduce them breaks up that cycle. So, you can always add new tasks or split up existing ones as you get a better understanding of the work you need to do. It's not something that is fixed or set in stone upfront. STEPHANIE: Yeah, that's a really great tip.
I think next time, what I really want to explore, you know, your heuristic of going from bottom up, yeah, sure, it sounds all fine and dandy. But how to get to a point where you're able to see everything at the bottom, right? And, like, when you are tasked, or you do start with the thing at the top, like, the end goal. Yeah, I'm sure that's something we'll explore [chuckles] another day. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

The Bike Shed
396: Build vs. Buy

The Bike Shed

Play Episode Listen Later Aug 8, 2023 33:57


Joël has been fighting a frustrating bug where he's integrating with a third-party database, and some queries just crash. Stephanie shares her own debugging story about a leaky stub that caused flaky tests. Additionally, they discuss the build vs. buy decision when integrating with third-party systems. They consider the time and cost implications of building their own integration versus using off-the-shelf components and conclude that the decision often depends on the specific needs and priorities of the project, including how quickly a solution is needed and whether the integration is core to the business's value proposition. Ruby class instance variables (https://www.codegram.com/blog/understanding-class-instance-variables-in-ruby/) Build vs Buy by Josh Clayton (https://thoughtbot.com/blog/build-vs-buy-considerations-for-new-products) Sustainable Rails (https://sustainable-rails.com/) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: My world has been kind of frustrating recently. I've been fighting a really frustrating bug where I'm integrating with a third-party database. And there are queries that just straight-up crash. Any query that instantiates an instance of an ActiveRecord object will just straight-up fail. And that's because, before we make the actual query, there's almost like a preflight query that fetches the schema of the database, particularly the list of tables that the database has, and there's something in this schema that the code doesn't like, and everything just crashes. Specifically, I'm using an ODBC connection. I forget exactly what the acronym stands for, Open Database connection, maybe? Which is a standard put up by Microsoft. The way I'm integrating it via Ruby is there's a gem that's a C extension. And somewhere deep in the C extension, this whole thing is crashing. So, I've had to sort of dust off some C a little bit to look through. And it's not super clear exactly why things are crashing. So, I've spent several days trying to figure out what's going on there. And it's been really cryptic. STEPHANIE: Yeah, that does sound frustrating. And it seems like maybe you are a little bit out of your depth in terms of your usual tools for figuring out a bug are not so helpful here. JOËL: Yeah, yeah. It's a lot harder to just go through and put in a print or a debug statement because now I have to recompile some C. And, you know, you can mess around with some things by passing different flags. But it is a lot more difficult than just doing, like, a bundle open and a binding.irb in the code. My ultimate solution was asking for help. So, I got another thoughtboter to help me, and we paired on it. We got to a solution that worked. And then, right before I went to deploy this change, because this was breaking on the staging website, I refreshed the website just to make sure that everything was still breaking before I pushed the fix, and I saw that everything was working. This is a habit I've picked up from test-driven development. You always want to see your test break before you see it succeed. And this is a situation where this habit paid off because the website was just working. My changes were not deployed. It just started working again.
Now it's gotten me just completely questioning whether my solution fixes anything. The difficulty is because I am integrating [inaudible 03:20] third-party database; it's non-deterministic. The schema on there is changing rather frequently. I think the reason things are crashing is because there's some kind of bad data or data that the ODBC adapter doesn't like in this third-party system. But it just got introduced one day; everything started breaking, and then somehow it got removed, and everything is working again without any input or code changes on my end. So, now I don't trust my fix. STEPHANIE: Oh no. Yeah, I would struggle with that because your reality has come crashing down, [laughs] or how you understood reality. That's tough. Where do you think you'll go from here? If it's no longer really an issue in this current state of the schema, is it worth pursuing further at this time? JOËL: So, that's interesting because it turns into a prioritization problem. And for this particular project, with the deadlines that we have, we've decided it's not worth it. I've opened up a PR with my fix, with some pretty in-depth documentation for why I thought that was the fix and what I think the underlying problem is. If this shows up again in the next few days, I'll have that PR that I can pull in and see if it fixes things, and if it doesn't, I'll probably just close that PR, but it'll be available for us if we ever run into this again. I've also looked at a few potential mitigations. Part of the problem is that this is a, like, massive system. The Rails app that I'm using really doesn't need to deal with this massive database. I think there's, you know, almost 1,000 tables, and I really only care about a subset of tables in, like, one underlying schema. And so, I think by reducing the permissions of my database user to only those tables that I care about, there's a lower chance of me triggering something like this. STEPHANIE: Interesting. What you mentioned about, you know, having that PR continue to exist will be really helpful for future folks who might come across the same problem, right? Because then they can see, like, all of the research and investigation you've already done. And you may have already done this, but if you do think it's a schema issue, I'm curious about whether the snapshot of the schema could be captured from when it was failing to when it has magically gotten fixed. And I wonder if there may be some clues there for some future investigator. JOËL: Yeah. I'm not sure what our backup situation is because this is a third-party system, so I'd have to figure out what things are like in the admin interface there. But yeah, if there is some kind of auditing, or snapshots, or backups, or something there, and I have a rough idea, you know, if I know it's within a 24-hour period, maybe there's something there that would tell me what's happening. My best guess is that there's some string that is longer than expected or maybe being marked as a CHAR when it should be a VARCHAR, or maybe something that's a non-UTF-8 encoded character, or something weird like that. So, I never knew exactly what was wrong in the schema. There's some weird string thing happening that's causing the Ruby adapter to blow up. STEPHANIE: That also feels so unsatisfying [laughs] for you. I could imagine. JOËL: Yeah, there's no, like, clean resolution, right? It's a, well, the bug is gone for now. We're trying to make it less likely for it to pop up again in the future.
I'm trying to leave some documentation for the next person who's going to come along, and I'm moving forward, fingers crossed. Is that something you've ever had to do on one of your projects? STEPHANIE: Given up? Yes. [laughter] I think I have definitely had to learn how to timebox debugging and have some action items for when I just can't figure it out. And, you know, like we mentioned, leaving some documentation for the next person to pick up, adding some additional logging so that maybe we can get more clues next time. But, you know, realizing that I do have to move on and that's the best that I can do is really challenging. JOËL: So, you used two words here to describe the situation: one was giving up, and the other one was timebox. I think I really like the idea of describing this as timeboxing. Giving up feels kind of like, defeatist. You know, there's so many things that we can do with our time, and we really have to be strategic with how we prioritize. So, I like the idea of describing this as a timeboxing situation. STEPHANIE: Yeah, I agree. Maybe I should celebrate every time that I successfully timebox something [laughs] according to how I planned to. [laughs] JOËL: There's always room to extend the timebox, right? STEPHANIE: [laughs] It's funny you bring up a debugging mystery because I have one of my own to share today. And I do have to say that it ended up being resolved, [chuckles] so it was a win in my book. But I will call this the case of the leaky stub. JOËL: That sounds slightly scary. STEPHANIE: It really was. The premise of what we were trying to figure out here was that we were having some flaky tests that were failing with a runtime error, so that was already kind of interesting. But it was quickly determined it was flaky because of the tests running in a certain order, so-- JOËL: Classic. STEPHANIE: Right. So, I knew something was happening, and any tests that came after it were running into this error. And I was taking a look, and I figured out how to recreate it. And we even isolated to the test itself that was running before everything else, that would then cause some problems. And so, looking into this test, I saw that it was stubbing the find method on an ActiveRecord model. JOËL: Interesting. STEPHANIE: Yeah. And the stubbed value that we were choosing to return ended up being referenced in the tests that followed. So, that was really strange to me because it went against everything I understood about how RSpec cleans up stubs between tests, right? JOËL: Yeah, that is really strange. STEPHANIE: Yeah, and I knew that it was referencing the stub value because we had set a really custom, like, ID value to it. So, when I was seeing this exact ID value showing up in a test that seemed totally unrelated, that was kind of a clue that there was some leakage happening. JOËL: So, what did you do next? STEPHANIE: The next discovery was that the error was actually raised in the factory setup for the failing tests and not even getting to running the examples at all. So, that was really strange. And digging into the factories was also its own adventure because there was a lot of complexity in the factories. A lot of them used hooks as well that then called some application code. And it was a wild goose chase. But ultimately, I realized that in the factory setup, we were calling some application code for that model where we had stubbed the find, and it had used the find method to memoize a class instance variable. JOËL: Oh no. I can see where this is going. STEPHANIE: Yeah. 
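A distilled sketch of the trap being described, with hypothetical class names:

```ruby
class PricingPolicy
  # Class-level memoization: @default lives on the class itself, so it
  # survives from one example to the next on the same thread.
  def self.default
    @default ||= Policy.find(1)
  end
end

RSpec.describe "the offending test" do
  it "stubs find" do
    allow(Policy).to receive(:find).and_return(double(id: 123_456_789))
    PricingPolicy.default # the stubbed value is now cached in @default
  end
end

RSpec.describe "a seemingly unrelated test" do
  it "sees the leaked value" do
    # RSpec removed the stub after the previous example, but not the
    # class instance variable, so this still returns the old double,
    # and rspec-mocks raises if a double outlives its example.
    PricingPolicy.default
  end
end
```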
So, at some point, our model.find() returned our, you know, stubbed value that we had wanted in the previous test. And it got cached and just continued to leak into everything else that eventually would try to call that memoized method when it really should have tried to do that look-up for a separate record. JOËL: And class instance variables will persist between tests as long as they're on the same thread, right? STEPHANIE: Yeah, as far as I understand it. JOËL: That sounds like a really frustrating journey. And then that moment when you see the class instance variable, and you're like, oh no, I can't believe this is happening. STEPHANIE: Right? It was a real recipe for disaster, I think, where we had some, you know, really complicated factories. We had some sneaky caching issues, and this, you know, totally seemingly random runtime error that was being raised. And it was a real wild goose chase because there was not a lot of directness in going down the debugging path. I feel like I went around all over the codebase to get to the root of it. And, in the end, you know, we were trying to come up with some takeaways. And what was unfortunate was that, you know, like, normally, stubbing find can be okay if you are, you know, really wanting to make sure that you are returning your mocked value that you may have, like, stubbed some other stuff on in your test. But because of all this, we were like, well, should we just not stub find on this really particular model? And that didn't seem particularly sustainable to make as a takeaway for other developers who want to avoid this problem. So, in the end, I think we scoped the stub to be a little more specific with the arguments that we wanted to target. And that was the way that we went forward with the particular flaky test at hand. JOËL: It sounds like the root cause of the problem was not so much the stub as it was the fact that this value is getting cached at the class level. Is that right? STEPHANIE: Yes and no. It seems like a real pain for running the tests. But I'm assuming that it was done for a good reason in production, maybe, maybe not. To be fair, I think we didn't need to cache it at all because it's calling a find, which, you know, should be pretty quick and doesn't need to be cached. But who knows? It's hard to tell. It was really old code. And I think we were feeling also a little nervous to adjust something that we weren't sure what the impact would be. JOËL: I'm always really skeptical of caching. Caching has its place. But I think a lot of developers are a little too happy to introduce one, especially doing it preemptively: oh well, we might need a cache here, so why not? Let's add that. Or even sometimes, just as a blind solution to any kind of slowness, oh, the site is slow; let's throw a cache here and hope for the best. And the, like, bedrock, like, rule zero of any kind of performance tuning is you've got to measure before and after and make sure that the change that you introduce actually makes things better. And then, also, is it better enough speed-wise that you're willing to pay any kind of costs associated with maintaining the code now that it's more complex? And a lot of caches can have some higher carrying costs. STEPHANIE: Yeah, that's a great point. This debugging mystery is an example of one of them. JOËL: How long did it take you to figure out the solution here? STEPHANIE: So, like you, I actually was on a bit of the incorrect path for a little while.
And it was only because this issue affected a different flaky test that someone else was investigating that they were able to connect the dots and be like, I think these, you know, two issues are related. And they were the ones who ultimately were able to point us to the offending test, if you will. So, you know, it took me a few days. And I imagine it took the other developer a few days. So, our combined effort was, like, over a week. JOËL: Yep. So, for all our listeners out there, you just heard that Stephanie and I [laughs] both went on multi-day debugging journeys. That happens to everyone. Just because we've been doing this job for years doesn't mean that every bug is, like, a thing that we figure out immediately. So, separately from this bug that I've been working on, a big issue that's been front of mind for me on this project has been the classic build versus buy decision. Because we're integrating with a third-party system, we have to look at either building our own integration or trying to use some off-the-shelf components. And there's a few different levels of this. There are some parts where you can actually, like, literally buy an integration and think through some of the decisions there. And then there's some situations where maybe there's an open-source component that we can use. And there's always trade-offs with both the commercial and the open-source situation. And we have to decide, are we willing to use this, or do we want to build our own? And those have been some really interesting discussions to have. STEPHANIE: Yeah. I think you actually expanded this decision-making problem into a build versus buy versus open source because they are kind of, you know, really different solutions with different outcomes in terms of, you know, maintenance and dependencies, right? And they all have, like, a little bit of a different way to engage with them. JOËL: Interesting. I think I tend to think of the buy category as including both, like, commercial off-the-shelf software and also open-source off-the-shelf software, things that we wouldn't build custom for ourselves but that are third-party components that we can pull in. STEPHANIE: Yeah, that's interesting because I had a bit of a different mental model because, in my head, when you're buying a commercial solution, you, you know, are maybe losing out on some opportunities for customization or even, like, forking it on your own. So, with an open-source solution, there could be an aspect of making it work for you. Whereas for a commercial solution, you really become dependent on that other company and whether they are willing to cater [laughs] to your needs or not. JOËL: That's fair. For something that's closed-source where you don't actually have access to the code, say it's more of a software as a service situation, then, yeah, you're kind of locked in and hoping that they can provide the needs that you have. On the flip side, you are generally paying for some level of support. The quality of that varies sometimes from one vendor to another. But if something goes wrong, usually, there's someone you can email, someone you can call, and they will tell you how to fix the problem, or they will fix it on their end. STEPHANIE: For the purposes of this conversation, should we talk about the differences between, you know, building it yourself or leaning on an existing built-out solution?
JOËL: The project I'm working on is integrating with a Snowflake data warehouse, which is an external place that stores data accessible through something SQL-like. And one of the things that's attractive about this is that you can pull in data from a variety of different sources, transform it, and have it all stored in a kind of standardized structure that you can then integrate with. So, for pulling data in, you can build your own sort of ingestion pipeline, if you want, with code, and their APIs, and things. But there are also third-party vendors that will give you kind of off-the-shelf components that you can use for a lot of popular other data sources that you might want to pull. So, you're saying: I want to pull from this external service. They've probably got a pre-built connector for it. They can also do things like pull from an arbitrary Postgres database on some other server if that's something you have access to. It becomes really attractive because all you need to do is create an account on this website, plug in a few, like, API keys and URLs. And, all of a sudden, data is just flowing from one third-party system into your Snowflake data Warehouse, and it all just kind of works. And you don't have to bother with APIs, or ODBC, or any of that kind of stuff. STEPHANIE: Got it. Yeah, that does sound convenient. As you were talking about this, I was thinking about how if I were in the position of trying to decide how to make that integration happen, the idea of building it would seem kind of scary, especially if it's something that I don't have a lot of expertise in. JOËL: Yeah, so this was really interesting. In the beginning of the project, I looked into a little bit of what goes into building these, and it's fairly simple in terms of the architecture. You just need something that writes data files to typically something like an S3 bucket. And then you can point Snowflake to periodically pull from that bucket, and you write an import script to, you know, parse the columns and write them to the right tables in the structure that you want inside Snowflake. Where things get tricky is the actual integration on the other end. So, you have some sort of third-party service. And now, how do you sort of, on a timer maybe, pull data from that? And if there are data changes that you're synchronizing, is it just all append-only data? Or are you allowing the third-party service to say, "Hey, I deleted this record, and you should reflect that in Snowflake?" Or maybe dealing with an update. So, all of these things you have to think about, as well as synchronization. What you end up having to do is you probably boot up some kind of small service and, you know, maybe this is a small Ruby app that you have on Heroku, maybe this is, like, an AWS Lambda kind of thing. And you probably end up running this every so many seconds or so many minutes, do some work, potentially write some files to S3. And there's a lot of edge cases you have to think about to do it properly. And so, not having to think about all of those edge cases becomes really enticing when you're looking to potentially pay a third party to do this for you. STEPHANIE: Yeah, when you used the words new service, I bristled a little bit [laughs] because I've definitely seen this happen maybe on a bit of a bigger scale for a tool or solution for some need, right?
Where some team is formed, or maybe we kind of add some more responsibilities to an existing team to spin up a new service with a new repo with its own pipeline, and it becomes yet another thing to maintain. And I have definitely seen issues with the longevity of that kind of approach. JOËL: The idea of maintaining a fleet of little services for each of our integrations seemed very unappealing to me, especially given that setting something like this up using the commercial approach probably takes 30 minutes per third-party service. There's no way I'm standing up an app and doing this whole querying every so many minutes, and getting data, and transforming it, and writing it to S3, and addressing all the edge cases in 30 minutes, let alone building something that's robust. And, you know, maybe if I want to go, like, really low tech, there's something fun I could do with, like, a Zapier hook and just, like, duct tape a few services together and make this, like, a no-code solution. I still don't know that it would have the robustness of the vendor. And I don't think that I could do it in the same amount of time. STEPHANIE: Yeah. I like the keyword robustness here because, at first, you were saying, like, you know, this looked relatively small in scope, right? The code that you had to write. But introducing all of the variables of things that could go wrong [laughs] beyond the custom part that you actually care about seemed quite cumbersome. JOËL: I think there's also, at this point, a lot of really interesting prioritization questions. There are money questions, but there are also time questions you have to think about. So, how much dev time do we want to devote upfront to building out these integrations? And if you're trying to move fast and get a proof of concept out, or even get, like, an MVP out in front of customers, it might be worth paying more money upfront to a third-party vendor because it allows you to ship something this week rather than next month. STEPHANIE: Yeah. "How soon do you need it?" is a very good question to ask. Another one that I have learned to include in my arsenal of, you know, evaluating this kind of stuff comes from a thoughtbot blog by Josh Clayton, where he, you know, talks about the build versus buy problem. And his takeaway is that you should buy when your business is not dependent on it. JOËL: When it's not part of, like, the core, like, value-add that your business is doing. Why spend developer time on something that's not, like, the core thing that your product is when you can pay someone else to do it for you? And like we said earlier, a lot of that time ends up being sunk into edge cases and robustness and things like that to the point where now you have to build an expertise in a, like, secondary thing that your business doesn't really care about. STEPHANIE: Yeah, absolutely. I think this is also perhaps where very clear business goals or a vision would come in handy as well. Because if you're considering building something that doesn't quite support that vision, then it will likely end up continuing to be deprioritized over the long term until it becomes this thing that no one is accountable for maintaining and caring for. And it just causes a lot of, honestly, morale issues; that's what I've seen when some service that was spun up to try to solve a particular problem is kind of on its last legs and has been really neglected, and no one wants to work on it, but it ends up causing issues for the rest of the development team.
But then they're also really focused on initiatives that actually do provide the business value. That is a really hard balancing act that I've seen teams struggle with. JOËL: Earlier this year, we were talking about the book Sustainable Rails. And it really hammers home the idea of a carrying cost for the code, and I think that's exactly what we're talking about here. And that carrying cost can be time and money. But I like that you also mentioned the morale effects. You know, that's a carrying cost that just sort of depresses the productivity of your team when morale is low. STEPHANIE: Yeah, absolutely. I'm curious if we could discuss some of the carrying costs of buying a solution and where you've seen that become tricky. JOËL: The first thing to look at is the literal cost, the money aspect of things. And I think it's a really interesting situation for the business models for these types of Snowflake connectors because they typically charge by the amount of data that you're transmitting, so per row of data that you're transmitting. And so, that cost will fluctuate depending on whether the third-party service you're integrating with is, like, really chatty or not. When you contrast that to building, building typically has a relatively fixed cost. It's a big upfront cost, and then there's some maintenance cost to go with it. So, if I'm building some kind of integration for, let's say, Shopify, then there's the cost I need to build up front to integrate that. And if that takes me, I don't know, a week or two weeks, or however long it is, you know, that's a pretty big chunk of time. And my time is money. And so, you can actually do the math and say, "Well, if we know that we're getting so many rows per day at this rate from the commercial vendor, how many weeks do we have to pay for the commercial one before we break even and it becomes more expensive than building it upfront, just in terms of my time?" And sometimes you do that math, and you're like, wow, you know, we could be going on this commercial thing for, like, two years before we break even. In that case, from a purely financial point of view, it's probably worth paying for that connector. And so, now it becomes really interesting. You say, okay, well, which are the connectors that we have that are low volume, and which are the ones that are high volume? Because each of them is going to have a different break-even point. The ones that you break even after, you know, three or four weeks might be the ones that become more interesting to have a conversation about building. Whereas some of the others, it's clearly not worth our time to build it ourselves. STEPHANIE: The way you described this problem was really interesting to me because it almost sounds like you found the solution somewhere in the middle, potentially, where, you know, you may try building the ones that are highest priority, and you end up learning a lot from that experience, right? That could make it easier or at least, like, set you up to consider doing that moving forward in the future if you find, like, that is what is valuable. But it's interesting to me that you kind of have the best of both worlds of, like, getting the commercial solutions now for the things that are lower value and then doing what you can to get the most out of building a solution. JOËL: Yeah. So, my final recommendation ended up being, let's go all commercial for now. 
And then, once we've built out something, and because speed is also an issue here, once we've built out something and it's out with customers, and we're starting to see value from this, then we can start looking at how much are we paying per week for each of these connectors? And is it worth maybe going back and building our own for some of these higher-volume connectors? But starting with the commercial one for everything. STEPHANIE: Yeah, I actually think that's generally a pretty good path forward because then you are also learning about how you use the commercial solution and, you know, which features of it are critical so that if you do eventually find yourselves, like, maybe considering a shift to building in-house, like, you could start with a more clear MVP, right? Because you know how your team is using an existing product and can focus on the parts that your business is dependent on. JOËL: Yeah, it's that classic iterative development style. I think here it's also kind of inspired by a strategy I typically use for performance, which is make it work before you try to make it fast. And, actually, make it work, then profile, then measure, find the hotspots, and then focus on making those things fast. So, in this case, instead of speed, we're talking about money. So, it's make it work, then profile, find the parts that are expensive, and make the trade-off of, like, okay, is it worth investing into making that part less expensive in terms of resources? STEPHANIE: I like that as a framework a lot. JOËL: A lot of what we do as programmers is optimization, right? And sometimes, we're optimizing for execution time. Sometimes we're optimizing for memory cost, and sometimes we're optimizing for dollars. STEPHANIE: Yeah, that's really interesting because, with the buy solution, you know very clearly, like, how much the thing will cost. Whereas I've definitely seen teams go down the building route, and it always takes longer than expected [laughs], and that is money, right? In terms of the developer's time, for sure. JOËL: Yeah, definitely, like, add some kind of multiplier when you're budgeting out that build alternative because, quite likely, there are some edge cases that you haven't thought about that the commercial partner has, and you will have to spend more time on that than you expected. STEPHANIE: Yeah, in addition to whatever opportunity cost of not working on something that is driving revenue for the business right now. JOËL: Exactly. STEPHANIE: So, the direction of this conversation ended up going kind of towards, like, what is best for the team at, like, a product and company level. But I think that we make these decisions a lot more frequently, even when it comes to whether we pull in a gem or, you know, use an open-source tool or not. And I would be really interested in discussing more of that in another episode. JOËL: Yeah. That gets into some controversial takes, right? It's the evergreen topic of: do we build it ourselves, or do we pull in some kind of third-party package? STEPHANIE: Something for the future to look forward to. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. 
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
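
To make the build-versus-buy arithmetic from this episode concrete, here is a worked sketch in Ruby with invented numbers; the rates, row volumes, and two-week build estimate are all hypothetical, chosen only to show the shape of the calculation Joël describes:

    # Break-even math for one connector: a vendor charges per row
    # synced, while building in-house is a mostly fixed upfront cost
    # in developer time.
    build_cost        = 2 * 40 * 100.0  # two weeks of dev time at $100/hour
    rows_per_week     = 50_000          # this connector's volume
    price_per_row     = 0.001           # vendor's per-row rate, in dollars
    vendor_weekly_fee = rows_per_week * price_per_row

    weeks_to_break_even = build_cost / vendor_weekly_fee
    puts weeks_to_break_even # => 160.0, roughly three years of vendor fees

At these numbers, buying wins comfortably; a connector with fifty times the volume would break even in about three weeks, which is exactly the kind of high-volume connector worth a build conversation.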

The Mob Mentality Show
Unleashing the Power of Living Documentation and FOSS with Steven Baker

The Mob Mentality Show

Play Episode Listen Later May 29, 2023 45:17


Get ready to explore the dynamic world of living documentation and open source software in this episode with Steven Baker. Join us as we uncover the benefits and challenges of executable documentation and how it helps to keep your team's knowledge in sync. Discover the origins of the "describe/it" pattern and RSpec, and learn how to infuse domain into your tests for enhanced understanding. Next we turn our attention to the implications of GPL and "Copy Left" licenses, and gain valuable tips on understanding your responsibilities within the FOSS licensing landscape. We also touch on the fascinating intersection of AI and open source software, avoiding open source developer burnout, and seeking help within the FOSS community. But that's not all – are you curious about remote work from remote places? We briefly discuss the joys of remote work. Don't miss out on this eye-opening conversation filled with practical advice, experiences, and valuable insights. Video and Show Notes: https://youtu.be/2UPXM8B3BnQ  
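
For readers unfamiliar with the "describe/it" pattern the episode digs into, here is a small, self-contained illustration of specs as living documentation; the Cart class and its behavior are hypothetical, and running the spec with rspec --format documentation prints the nested descriptions as an outline of behavior:

    # A tiny class and its spec. Unlike prose docs, descriptions that
    # fall out of sync with the code fail when the suite runs.
    class Cart
      def initialize(prices)
        @prices = prices
      end

      def total
        @prices.sum
      end
    end

    RSpec.describe Cart do
      describe "#total" do
        it "sums the line item prices" do
          expect(Cart.new([500, 250]).total).to eq(750)
        end

        it "is zero for an empty cart" do
          expect(Cart.new([]).total).to eq(0)
        end
      end
    end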

The Bike Shed
382: Domain-Specific Languages

The Bike Shed

Play Episode Listen Later May 2, 2023 36:09


Joël has been integrating a third-party platform into a testing pipeline...and it has not been going well. Because it's not something she usually keeps up-to-date with, Stephanie is excited to learn about more of the open-source side of things in Ruby, what's new in the Ruby tooling world, and what folks are thinking about regarding the future of the language. Today's topic is inspired by an internal thoughtbot Slack thread about writing a custom matcher for RSpec. Stephanie and Joël contrast DSLs vs. Object APIs and also talk about: CanCanCan vs. Pundit; the RSpec DSL; when a DSL is helpful; why not use both DSLs and Object APIs; extensibility; and when a DSL becomes a framework. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. RubyKaigi 2023 (https://rubykaigi.org/2023/) Mystified by RSpec's DSL? by Jason Swett (https://www.codewithjason.com/mystified-rspecs-dsl-parentheses-can-add-clarity/) Building Custom RSpec Matchers with Regular Objects (https://thoughtbot.com/blog/building-custom-rspec-matchers-with-regular-objects) FactoryBot (https://github.com/thoughtbot/factory_bot) Writing a Domain-Specific Language in Ruby by Gabe Berke-Williams (https://thoughtbot.com/blog/writing-a-domain-specific-language-in-ruby) Capybara (https://teamcapybara.github.io/capybara/) Acceptance Tests at a Single Level of Abstraction (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction) CanCanCan (https://github.com/CanCanCommunity/cancancan) Pundit (https://github.com/varvet/pundit) Discrete Math and Functional Programming (https://www.amazon.com/Discrete-Mathematics-Functional-Programming-VanDrunen/dp/1590282604) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been integrating a third-party platform into our testing pipeline for my client. It has not been going well. We've been struggling a little bit, mostly just because tests just kind of crash. Our testing pipeline is pretty complex. It's a lot of one script, some environment variables, does a few things, shells out to another script, which is in a different language. Does a few more things, shells out to another script, maybe calls out to rake, calls out to a shell script. There are four or five of these in a chain, and it's a bit of a mess. Somewhere along in there, something is not compatible with this third-party service that we're trying to integrate with. I was pairing this week with a colleague. And we were able to reproduce a situation where we were able to get a failure under some conditions and a success under other conditions. So these are basically, if we run the whole chain of scripts that call each other from the beginning, we know we get a failure. And if we skip entirely the chain of scripts that set up things and then just manually try to invoke a third-party service, that works. And so now we know that there's something in between that's incompatible, and now it's just about narrowing things down. There are a few different approaches we could take. We could try to sort of work our way forward. 
We know a known point where it breaks and then just try to start the chain one step further and see where it fails. We could try to get fancy and do a binary search, like split it in half and then half and half again. We ended up doing it the other way, where we started at the end. We had our known good point and then just stepping one step back and saying, okay, now we introduce the last script in the chain. Does that work? Okay, that pass is great. Let's go one step further; two scripts up in the chain. And at some point, we find, okay, here's the one script that fails. Now, what is it within this script? And it was a really fun debugging session where we were just narrowing things down until we found the source of the bug. STEPHANIE: Wow, that sounds pretty complicated. It just seems like there are so many layers going on. And it was really challenging to pinpoint where the source of the issue was. JOËL: Definitely. I think all the layers made it really complicated. But having a process that we could follow and then kind of narrowing it down made it almost mechanical to figure out where the bug was once we got to a point where we had a known good point and a known bad point. STEPHANIE: Yeah, that makes sense. Kind of sounds like if you are using git bisect or something like that to narrow down the scope of where the issue could be. I'm curious because this is like a bunch of shell scripts and rake tasks or commands or whatever. What would have made this debugging process easier? JOËL: I think having fewer scripts in this chain. STEPHANIE: [laughs] That's fair. JOËL: We don't need so many scripts that call out to each other in different languages trying to share data via environment variables. So we've got a bit of a Rube Goldberg machine, and we're trying to patch in yet another piece in there. STEPHANIE: Yeah, that's really tough. I was curious if there was, I don't know, any logging or any other clues that you were getting along the way because I know from experience how painful it is to debug that kind of code. JOËL: It's interesting because I feel like normally logging is something that's really useful. In this particular case, we run into an exception at some point. So it's more of under what conditions does the exception happen? The important thing was to find that there is a point where it breaks, and there's a point where it doesn't, and realizing that if we ran some of these commands just directly without going through the whole pipeline, that things did work and that we were not triggering that exception. So all of a sudden, now that tells us, okay, something in our pipeline is wrong. And then we can just start narrowing things down. So yeah, adventures in debugging. Sometimes it's really frustrating, but then when you have a good process, and you find the bug, it's incredibly satisfying. STEPHANIE: I like that you used a process that can be applied to many different problems, in this particular case, debugging a testing pipeline. Maybe not something that we do every day, but certainly, it comes up, and now we have tools to address those kinds of issues as well. JOËL: So my week has been up and down with all of this debugging. What's been new in your world? STEPHANIE: I've been doing some travel planning because I'm going to RubyKaigi in Japan. JOËL: Whoa. STEPHANIE: This is actually going to be my first international conference, so I'm really looking forward to that. I just have never been compelled to travel abroad to go to a tech conference. 
But I'm really looking forward to going to RubyKaigi because now I've been to the U.S.-based conferences a few times. And I'm excited to see how things are different at an international conference and specifically a RubyKaigi because, obviously, there's a lot of really cool Ruby work happening over there in Japan. So I'm excited to learn about more of the open-source side of things of Ruby, what's new in the Ruby tooling world, and just what folks are thinking about in terms of the future of the language. That's not something I normally keep super up-to-date on. But I'm excited to be around people who do think and talk about these things a lot and maybe get some new insights into my own work. JOËL: Do you find that you tend to keep up more with some of the frameworks like Rails rather than the underlying language itself? STEPHANIE: Yeah, that's a good question. I do think because the framework changes a little more frequently, new releases are kind of more applicable to the work that I'm doing. Whereas language updates or upgrades are a little bit less top of mind for me because the point is that it doesn't have to change [laughs] all that much, and we can continue to work with things as expected and not be disrupted. So it is definitely like a whole new world for me, but I'm really looking forward to it. I think it will be really interesting and just kind of a whole other space to explore that I haven't really because I've usually been focused on more of the web development and industry work side of things. JOËL: What's a Ruby feature that either is coming out in the future or that came out in the last couple of releases that got you really excited? STEPHANIE: I think the conversation about typing in Ruby is something that has been on my radar but has also been ebbing and flowing over time. And I did see a few talks at RubyKaigi this year that are going to talk about how to introduce gradual typing in Ruby. And now that it has been out for a little bit and people have been using it, how people are feeling about it, pros and cons, and kind of where they're going to take it or not take it from there. JOËL: Have you done much TypeScript? STEPHANIE: I have been working more in TypeScript recently but did spend most of my front-end work coding days in JavaScript. And so that transition itself was pretty challenging for me where I suddenly felt a language that I did know pretty well. I was having to be in that...in somewhat of a beginner's mindset again. Even just reading the code itself, there were just so many new things to be looking at in terms of the syntax. And it was a difficult but ultimately pretty rewarding experience because the way I thought about JavaScript afterwards was much more refined, I think. JOËL: Types definitely, I think, change the way you think about code; at least, that's been my experience. STEPHANIE: Yeah, absolutely. I haven't gotten the pleasure to work with types in Ruby just yet, but I've just heard different experiences. And I'm excited to see what experts have to say about it. JOËL: That's the fun of going to a conference. STEPHANIE: Absolutely. So yeah, if any listeners are also headed to RubyKaigi, yeah, look out for me. JOËL: I was recently having a conversation with someone about the fact that a lot of languages provide ways to sort of embed many languages within them. So the Lisp family of languages are really big into macros and metaprogramming. 
Some other languages are big into giving you the ability to build your own ASTs or have really strong parsing capabilities so that you can produce your own, again, mini-language. And Ruby does this as well. It's pretty popular among the Ruby community to build DSLs, Domain-Specific Languages using some of Ruby's built-in abilities. But it seems to be a sort of universal need or at the very least a universal desire among programmers. Have you ever found yourself as a code author wanting to embed a sort of smaller language within your application? STEPHANIE: I don't think I have, to be honest. It's a very interesting question. Because I think the motivation to build your own mini-language using Ruby would have to be you'd have to have a really good reason for it, and in my experience, I haven't quite encountered that yet. Because, yeah, it seems like a lot of upfront work, a lot of overhead to introduce something like that, especially if it's not necessarily either a really, really particular domain that others might find a use for, or it just doesn't end up seeming worthwhile if I can just write regular, old Ruby code. JOËL: I think you're not alone. I think the Ruby community has been kind of a bit of a pendulum here where several years ago, everything that could be made into a DSL was. Now the pendulum kind of has been swinging the other way. And we see DSLs, but they're not quite as frequent. For those who maybe have not experienced a DSL or aren't quite familiar with the concept, how would you describe the idea? STEPHANIE: I think I would describe domain-specific languages as a bit of a mini-language that is created for a very particular problem space in mind to make development for that domain easier. Oftentimes, I've also kind of seen people describe the benefit of DSLs as being able to read that language as if it were plain English. And so, in my head, I have kind of, at least in the Ruby world, right? We see that a lot in different gems. RSpec, for example, has its own internal DSL, and many people really enjoy it because it took the domain of testing. And the way you write it kind of is how you might read or understand it in English. And so it's a bit easier to talk about what you're expecting in your tests. JOËL: Yeah, it's so high-level and minimal and domain-specific that it almost stops feeling like it's a programming language and can almost feel like it's a high-level configuration for this very particular domain, sometimes even to the point where the idea is that a non-programmer could read it and understand what's going on. STEPHANIE: I think RSpec is actually one of the first Ruby DSLs that you might encounter when you're learning Ruby for the first time. And I've definitely seen developers who are new to Ruby, you know, they're writing code, and they're like, okay, I'm ready to write a test now. And the project uses RSpec because that's what most of us use in our Rails applications. And then they see, like you said, almost a configuration language, and they are really confused. They're not really sure what they're reading. They struggle with the syntax a lot. And it ends up being a point of frustration when they're first starting out if they're not just copying and pasting other existing RSpec tests. I'm curious if you've seen that before. JOËL: I've definitely seen that. And it's a little bit ironic because oftentimes, an argument for DSL is that it makes things simpler that you don't even have to know Ruby; you can just write it. It's simpler. It's easier to write. 
It's easier to understand. And to a certain extent, maybe that's true. But for someone who does know Ruby and doesn't know your particular little domain language, now they're encountering something that they don't know. And they're having to learn it, and they're having to struggle with it. And it might behave a little bit weirdly compared to how Ruby normally works. And so sometimes it doesn't make it easier for adoption. But it does look really good in a README. STEPHANIE: That's totally fair. I think the other thing that's interesting about RSpec is that a lot of it is really just stylistic. I actually read a blog post by Jason Swett and the headline of it was "Mystified by RSpec's DSL? Some parentheses can add clarity." And he basically goes on to tell us that really RSpec is just leaning on some of Ruby's syntactic sugar of omitting parentheses for method calls. And if you just add the parentheses back in your it blocks or your describes, it can read a lot more like regular Ruby. And you might have a better time understanding what's going on when you realize that we're just passing our descriptors as arguments along with some blocks. JOËL: That's ironic given that oftentimes, the goal of these is to make it look like not Ruby. STEPHANIE: I agree; it is ironic. [laughs] MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: I think another drawback that I've seen with DSLs is that they oftentimes are more limited in their capabilities. So if the designer of the gem didn't explicitly think of your use case, then oftentimes, it can be really hard to extend or to support edge cases that are not specifically designed for that language in the way that plain Ruby is often much more flexible. STEPHANIE: Yeah, that's really interesting because when a gem does have some kind of DSL, a lot of effort probably went into making that the main interface that you would work with or you would use. 
And when that isn't working for your use case, the design of the underlying objects may or may not be helpful for the changes that you want to make. JOËL: I think it's interesting that you mentioned the underlying objects because those are often sort of not meant for public consumption when you're building a gem that's DSL forward. I think, in many cases, my ideal gem would make those underlying objects the primary interface and then maybe offer DSL as a kind of nice-to-have layer on top for those situations that maybe aren't as complex where writing things in the domain language might actually be quite nice. But keeping those underlying objects as the interface, it's nice to use and well-documented for the majority of people. STEPHANIE: Yeah, I like that too because then you can get the best of both worlds. So speaking of trying to make a DSL work for you, have you ever experienced having to kind of work around the DSL to get the functionality you were hoping to achieve? JOËL: So I think we're talking about the idea of having both a DSL and the underlying objects. And RSpec is a great example of this with their custom matchers. RSpec itself is a DSL, but then they also offer a DSL to allow you to create custom matchers. And it's not super well documented. I always forget how to define them, and so I oftentimes don't bother. It's just kind of too much of a pain for something that doesn't always provide that much value. But if it were easy, I would probably do it more. Eventually, I realized that you could use just regular Ruby objects as custom matchers. And they just seemed to respond to certain methods, just regular old objects and polymorphism. And all of a sudden, now I'm back into all of the tools and mechanisms that I am familiar with, like the back of my hand. I can write objects all day. I can TDD them. I can apply any patterns that I want to if I'm doing something really complicated. I can extract helpers. All of that works really well with the knowledge that I already have without having to sink a lot of time into trying to learn the built-in DSL. So, for the most part, now, when I define custom matchers, I'll often jump directly to creating a regular object and making it conform to the matcher interface rather than relying on the DSL for that. So once we go back to the test, now we're back in DSL land. Now we're no longer talking in terms of objects so much. We'll have some nice methods and they will all kind of read like English. So to pull a recent example that I worked on, I might say something like expect this policy object method to conform to this truth table. STEPHANIE: That's a really interesting example. It actually kind of sounds like it hits the sweet spot of what you were describing earlier in the sense that it has a really nice DSL, but also, you can create your own objects, and that has an interface that you can implement. And yes, have your cake and eat it too. [laughs] But the idea that then you're kind of converting it back to the DSL because that is just what we know, and it has become so normalized. I was talking earlier about, okay, when is a DSL worthwhile? When is the use case a good reason to implement it? And especially for gems that I think are really popular, that we as a Ruby community have collectively used most of the time on our projects because we have oftentimes a lot of the same problems that we're solving. It seems like this has become its own shared language, right? 
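
A minimal sketch of the plain-object matcher approach Joël describes; the PostPolicy class and its visible_to? method are hypothetical stand-ins, and the only contract is RSpec's matcher protocol, which needs matches? plus, for useful output, failure_message and description:

    # A regular Ruby object that satisfies RSpec's matcher protocol.
    # No matcher DSL involved: `expect(...).to` only needs an object
    # responding to `matches?`, and `failure_message` shapes the output.
    class ConformToTruthTable
      def initialize(table)
        @table = table
        @failures = []
      end

      def matches?(predicate)
        # Keep the rows where the predicate disagrees with the table.
        @failures = @table.reject do |inputs, expected|
          predicate.call(*inputs) == expected
        end.keys
        @failures.empty?
      end

      def failure_message
        "expected predicate to conform to the truth table; failed for #{@failures.inspect}"
      end

      def description
        "conform to the truth table"
      end
    end

    # A hypothetical policy with a class-level predicate to exercise.
    class PostPolicy
      def self.visible_to?(role, state)
        role == :admin || state == :published
      end
    end

    # Usage in a spec, written with explicit parentheses per the Jason
    # Swett article: `describe`, `it`, and `expect` are ordinary method
    # calls taking arguments and blocks.
    RSpec.describe(PostPolicy) do
      it("conforms to the visibility truth table") do
        truth_table = {
          [:admin, :draft]     => true,
          [:guest, :draft]     => false,
          [:guest, :published] => true,
        }
        expect(PostPolicy.method(:visible_to?)).to(ConformToTruthTable.new(truth_table))
      end
    end

Because the matcher is plain Ruby, it can be unit-tested, composed, and refactored with ordinary object patterns, which is the trade being described.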
JOËL: Yeah, there are definitely some DSLs that we all end up learning because they're just so prominent in the Ruby community, even Rails itself ships with several built-in DSLs. STEPHANIE: Yeah, absolutely. FactoryBot is another one, too. It is a gem by thoughtbot. And actually, in preparation to talk about DSLs with you today, I scoured our blog and found a really great blog post, "Writing a Domain-Specific Language in Ruby" by Gabe Berke-Williams. And it is basically like, here's how to write something like FactoryBot and creating your own little mini Ruby DSL for something that would be very similar to what FactoryBot does for fixtures. JOËL: That's a great resource, and we'll make sure to link that in the show notes. We've been talking about some of the limitations of DSLs or some aspects of them maybe that we personally don't like. What are maybe examples of DSLs that you do enjoy working with? STEPHANIE: Yeah, I have an example for this one. I really enjoy using Capybara's DSL for acceptance testing. I did have to go down the route of writing some custom selectors for...I just had some HTML elements within kind of a complicated table and was trying to figure out how to write some selectors so that I could write the test as if it were in, you know, quote, unquote, "plain English" like, within this table, expect some value. And that was an interesting journey. But I think that it really helped me have a better understanding of accessibility of just the underlying building blocks of the page that I was working with. And, yeah, I really appreciate being able to read those tests from a user perspective and kind of know exactly what they're doing when they're interacting with this virtual browser without having to run it in headful mode and see it for myself. JOËL: It's always great when a DSL can give you that experience of abstracting enough to where it makes the code delightful to work with while also not having too high a cost to learn or being too restrictive in what it allows you to do. Would you make a difference between something that's a DSL versus maybe just code that's written at a higher level of abstraction? So maybe to get back to your example with Capybara, it's really nice to have these nice custom matchers and all of these things to work with HTML pages. If I'm writing, let's say, a helper method at the bottom of a test, I don't think that feels quite like it's a DSL yet. But it's definitely a higher level than specifying CSS selectors. So would you make a difference between those two things? STEPHANIE: That's a good question. I think it's one of those you know it when you see it kind of questions because it just depends on the amount of abstraction, like you mentioned, and maybe even metaprogramming. That takes something from the core language to morph into what you could qualify as a separate language. What do you think about this? JOËL: Yeah, part of me almost wonders if this exists kind of on a continuum, and the boundary might be a little bit fuzzy. I think there might be some other qualifications that come with it as well. Even though DSLs are typically higher-level helpers, it's usually more than just that. There are also sort of slightly different semantics in the way that you would tend to use them to the point where while they may be just Ruby methods, we don't use them like Ruby methods, and even to the point that we don't think of them as Ruby methods. 
To go back to that article you mentioned from Jason, where just reminding people, hey, if you put parens on this, all of a sudden, it helps you remember, oh, it's just a Ruby method instead of being like, oh, this is a language keyword or something. STEPHANIE: Yeah, I wonder if there's also something to the idea of domain specificity where it should be self-service within the domain that you're working in. And then it has limitations once you are trying to do something separate from the domain. JOËL: Right, it's an element of focus to this. And I think it's probably also that a language is not just one helper; it's a collection typically. So it's probably a series of high-level helpers, potentially. They might not be methods, even though that is ultimately one of the primary interfaces we use to run code in Ruby. So it's a collection of methods that are high-level, but the collection itself is focused. And oftentimes, they're meant to be used in a way where it's not just a traditional method call. STEPHANIE: Right. There's some amount of you bringing to the table your own use case in how you use those methods. JOËL: Yeah, so it might be mimicking a language keyword. It might be mimicking the idea of a configuration. We see that a little bit with ActiveRecord and some of the, let's say, the association and validation APIs. Those kind of feel like, yes, they're embedded in a class, but they feel like either keywords or even just straight-up configuration where you set key-value pairs of things to configure how a particular class is going to work. STEPHANIE: Yeah, that's true for a lot of things in Rails, too, if we're talking about routes and initializers as well. JOËL: So I've complained about some things I don't like about DSLs. I really like the routing DSL in Rails. STEPHANIE: Why is that? JOËL: I think it's very compact and readable. And that's an element that's really nice about DSLs is that it can make things feel very readable and, oftentimes, we read code more often than we write it. And routes have...I was going to say fewer edge cases, but I have seen some really gnarly route files that are pretty awful to work with. Especially if you're mostly writing RESTful controllers, and I would recommend that people do, it's really nice to just be able to skim through a route file and be like, oh, these are the resources in my app and the actions I can do on each resource. And here are the ones that are nested. 
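
Two of the Rails built-ins just mentioned, sketched side by side; the specific resources and model are hypothetical, but both read as declarative configuration even though every line is an ordinary method call:

    # config/routes.rb -- the routing DSL: compact and skimmable.
    # Each line names a resource; nesting shows ownership.
    Rails.application.routes.draw do
      resources :posts do
        resources :comments, only: [:create, :destroy]
      end
      resources :users, only: [:show]
    end

    # app/models/post.rb -- the ActiveRecord association and validation
    # APIs have the same key-value, configuration-like feel.
    class Post < ApplicationRecord
      belongs_to :user
      has_many :comments, dependent: :destroy
      validates :title, presence: true
    end

A skim of the routes block answers "what resources exist, and which are nested?" without reading any procedural code.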
STEPHANIE: Yeah, I like that a lot because when we talk about domains, we're not necessarily talking about a business domain, which is kind of the other way that some folks think about that word. We're talking about a problem space. And the idea of the language being declarative to describe the problem space makes a lot of sense to me because you want it to be flexible enough for different use cases but all within the idea of testing or browser navigation or whatever. JOËL: Yeah. I feel like there's a lot of... there are probably more problems that can be converted to declarative solutions than might initially kind of strike you. Sometimes the problem isn't quite as bounded. And so when you want customizations that are not supported by your DSL, then it kind of falls apart. So I think a classic situation that might feel like something declarative is authorization. Authorization is a series of rules for who can access what, and it would seem like this is a great case for a DSL. Wouldn't it be great to have just one file you can just kind of skim, and we can just see all of the access rules? Access rules that are basically asking to be done declaratively. And we have gems like that. The original CanCan gem and then the successor CanCanCan are trying to follow that approach. Have you used either of those gems? STEPHANIE: I did use the CanCanCan gem a while ago. JOËL: What was your experience with that style of authorization? STEPHANIE: It has been a while, but I do remember having to check that original file of like all the different authorizations kind of repeatedly coming back to it to remember, okay, for this rule, what should be allowed to happen here? JOËL: So I think that's definitely one of the benefits is that you have all of your rules stored in one place, and you can kind of scan through the list. My experience, though, is that in practice, it often kind of balloons up and has all of these edge cases in it. And in some earlier versions, I don't know if that's still a problem today, it could even be difficult to accomplish certain things. If you're going to say that access to this particular object depends not on properties of that object itself but on some custom join or association or something like that, that could be really clunky to do or sometimes impossible depending on how esoteric it is or if there's some really complex custom logic to do. And once you're doing something like that, you don't really want to have that logic in your...in this case, the abilities file, because that's not really something you express via the DSL anymore. Now you're dropping into OO or procedural world. STEPHANIE: Right. It seems a bit far removed from where we do actually care about the different abilities, especially for one-off cases. JOËL: That is interesting because I feel like there's a bit of a read-versus-write situation happening there as well. It's particularly nice to have, I think, everything in one abilities file for reading and for auditing. I've definitely been in code where there's like three or four ways to authorize, and they're all being used inconsistently, and that's not nice at all. On the other hand, it can be hard with a DSL sometimes to customize or to go beyond the rules that are built in. In the case of authorization, you've effectively built a little mini-rules engine. And if you don't have a good way for people to add custom rules without just embedding procedural code into your abilities file, it's going to quickly get out of hand. 
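
A minimal abilities file in the CanCanCan style under discussion, with a hypothetical Post model; the hash-condition rules read declaratively, while the block rule at the end shows where procedural edge cases start to creep in:

    # app/models/ability.rb -- all authorization rules in one
    # skimmable, declarative file.
    class Ability
      include CanCan::Ability

      def initialize(user)
        can :read, Post, published: true    # anyone can read published posts
        return if user.nil?

        can :manage, Post, user_id: user.id # authors manage their own posts

        # Edge cases tend to accumulate as blocks of procedural logic,
        # which is where the declarative overview starts to erode
        # (the collaborators association is a hypothetical example):
        can :update, Post do |post|
          post.collaborators.include?(user)
        end
      end
    end

Everything is skimmable in one place until enough block rules accumulate, which is the ballooning Joël describes.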
STEPHANIE: Yeah, that makes sense. On the topic of authorization, you did mention an example earlier when you were writing a policy object. JOËL: I've generally found that that's been my go-to pattern for authorization. I enjoy the Pundit gem that provides some kind of light scaffolding around working with policy objects, but it's a general pattern, and you can absolutely write your own. You don't need a gem for that. Now we're definitely not in the DSL world. We're not doing this declaratively. We're leaning very heavily on OO and saying we're just going to create objects. They talk to each other. They can do anything that any Ruby object can do and as simple or as complex as they need to be. So you have the full power of Ruby and all the patterns that you're used to using. The downside is it is a little bit harder to read and to kind of just audit what's happening in terms of permission because there's no high-level overview anymore. Now you've just got to look through a bunch of classes. So maybe that's the trade-off, flexibility, extensibility versus more declarative style and easy overview. STEPHANIE: That makes a lot of sense because we were talking earlier about guardrails. And because those boundaries do exist, that might not give us the flexibility we want compared to just writing regular Ruby objects. But yeah, we do get the benefit of, like you said, auditing, and at least if we don't try to do some really gnarly, custom stuff, [laughs] something that's easier to read and comprehend. JOËL: And, again, maybe that's where, in the best of both worlds situation, you say, hey, I'm creating some form of rules engine, whether it's for describing routes, or authorization, permissions, or users can build custom business rules for a product or something like that. And it's all object-based under the hood. And then, we provide a DSL to make it nice to work with these rules. If a programmer using our gem wants to write a custom rule that just really extends what the ones we shipped can do, allow them to do that via the object API. We have all the objects available to you that underlie the DSL. Add more rules yourself. And then maybe those can be plugged back into the DSL like we saw with RSpec and custom matchers. Or maybe you have to say, okay, if I have a custom rule object, now I have to just stay in the object space. And I think both of those solutions are okay. But now you've sort of kept those two worlds separate and still allowed people to extend. STEPHANIE: I like that as contributing to the language because language is never static. It changes over time. And that's a way that people can continue to evolve a language that may have been originally written at a certain time and place. JOËL: Moving on from DSLs, we got some listener feedback recently from James, who was listening to our episode on discrete math. And James really appreciated the episode and wanted to share a resource with us. This is the book "Discrete Math and Functional Programming" by Thomas VanDrunen. It's an introduction to discrete math as a theoretical concept taught side by side with the very practical aspect of learning to use the language Standard ML, and both of those factor into each other. So you're kind of learning a little bit of theory and some practice, at the same time, getting to implement some discrete math concepts in Standard ML to get a feel for them. Yeah, I've not read this book, but I love the concept of pairing a theoretical piece and a practical piece. 
So I'll drop a link to it in the show notes as well. Thank you, James. STEPHANIE: Yeah, thanks, James. And I guess this is just a little reminder that if our listeners have any feedback or questions they want to write in about, you can reach us at hosts@bikeshed.fm. JOËL: On that note. Shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
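
For contrast with the abilities file sketched earlier, here is the policy-object shape this episode reaches for, in the convention the Pundit gem scaffolds; PostPolicy and its rules are hypothetical:

    # app/policies/post_policy.rb -- one plain object per resource,
    # one predicate method per action. No DSL, just Ruby.
    class PostPolicy
      attr_reader :user, :post

      def initialize(user, post)
        @user = user
        @post = post
      end

      def show?
        post.published? || update?
      end

      def update?
        return false if user.nil?

        user.admin? || post.user_id == user.id
      end
    end

Each policy is plain OO, so it can use any Ruby pattern and be unit-tested directly, but auditing who can do what now means reading one class per resource; that is the read-versus-write trade-off from the discussion.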

The Bike Shed
377: Error Handling

The Bike Shed

Play Episode Listen Later Mar 28, 2023 45:06


Joël is a mentor for RailsConf and got matched with a speaker. Stephanie has been having trouble stepping away from her work. It's frustrating when something's gone wrong and you spend a whole afternoon chasing down a bug, trying to figure out where it is. Joël and Stephanie discuss error handling as a possible solution. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Mise en Place Writing (https://www.swyx.io/writing-mise-en-place) Errors accumulate at boundaries (https://thoughtbot.com/blog/testing-your-edge-cases) Retryable errors (https://thoughtbot.com/blog/handling-errors-when-working-with-external-apis) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So recently, RailsConf has closed out their CFP, and they've started sending out acceptances and rejections for proposals. And one thing that they do that I think is really nice is that they offer first-time speakers the ability to get matched with a speaker-mentor, somebody else who has given talks before that can help them prep their talk, listen to them rehearse, that kind of thing. And so they had put out a call for mentors last week. I responded to that, and I got matched with a speaker today. STEPHANIE: Cool. Is this your first time being a speaker-mentor? JOËL: First time for RailsConf. I've done it for another conference before. STEPHANIE: That's really exciting. What do you like about playing that role? JOËL: So I very much like prepping and giving talks myself. And I really value, if there's something that I'm excited about, sharing it and helping others build up that skill as well. So I think it's a great opportunity. I also remember what it was like when I was a first-time speaker and just how very nervous and unsure I was. So I think having someone who can play that role is an opportunity to have a really powerful impact in what's oftentimes, I want to say, a monumental moment. But it's kind of like a milestone marker moment in someone's career: the first time giving a talk at a conference. So you get to help them to make that moment the best it can be. STEPHANIE: I love that, yeah. You make a really great point that after you've been speaking for a while, you maybe might forget what it felt like to give your first talk and how big of a deal it is. And in general, I think one thing I really love about Ruby Central conferences is how supportive they are of first-time speakers. So even in the CFP, they mentioned that they welcome first-time speakers and want to make sure to accept talks from those folks and then provide them support through this mentor program. And yeah, it just makes me feel really happy. JOËL: Do you remember your first talk? STEPHANIE: I do. So my very first talk I gave virtually at RubyConf in 2021. And then last year was actually my first in-person talk. And I remember, even though it was technically my second talk, it was really my first talk in front of an audience. And I saw speakers in the Slack workspace asking questions about the AV setup, and I didn't even think to consider that in my preparation. So it was nice. 
Even though I didn't get set up with a mentor, to share a space with other experienced speakers and see what kinds of things they were asking about or what kinds of things they were sharing in that Slack space was helpful for me. JOËL: So when you do a proposal, do you typically have an outline already built out, or is it mostly a concept that you're pitching, and then you maybe start with an outline? Or where do you go next after a proposal has been accepted? STEPHANIE: That's a great question. I think first, I procrastinate for several months, [laughs], but I do try to write an outline in the proposal when I submit it so I do have a starting point. And I think that actually helps the CFP committee, too, when they are evaluating proposals to kind of get a better idea of what the talk will be about. And so, in my ideal world, I already have some structure, so by the time I've procrastinated to the point where it's a month or so before the conference itself, [laughs] I have an outline. And I end up writing words, like, I will just write my talk as if it were an essay with this bullet point outline already. And I find that helpful for me because I definitely have a bit of a stream-of-consciousness productivity energy. And so if I just put it all out there, I will then go back and be very ruthless, I suppose, in my editing, and I think that's where the magic happens. So I kind of let myself just word vomit all over the page. And then the real work comes in the editing process and organizing and making sure it sounds the way I would want it to sound when I'm speaking. And yeah, that's how it has worked for me so far. JOËL: So you have a sort of a separate phase for sort of just stream of consciousness dumping and then separately editing. And having those two separate is an important part of your process. STEPHANIE: Yeah, I think so. I don't do as well trying to imagine the structure and everything perfectly the first time around and then filling things in. I find that just putting everything out there and, you know, a lot of things get cut. But that works well for me. What about you? What is your typical conference talk writing process? JOËL: I think mine is a little bit more iterative. I tend to put in some pieces that I like and then try to connect them together, try to make sure it's telling a story. I think a lot about the pedagogical side of things, where people are going to be confused, where they're going to have questions, where they might check out. And then very early, start doing kind of draft rehearsals where I'm starting to work on the talk. And I will stop halfway through because, in my mind, I'm trying to seat myself in the audience and be a person who's listening. And there might be a moment where I'm like, wait a minute, you just jump from one thing to another, and I don't get the logical connection here. And I might pause right there in the rehearsal and add in, say, okay, we need a transitional point, or we need to explain a concept between these two. And I keep doing that until I can get through the whole thing and then realize it's way too long and start cutting. And I cut aggressively, and now it's too short. And now I go through it again. And again, people have questions in the audience, hypothetical audience; I am the audience. And so I really kind of inflate it and then cut it down and re-inflate it and cut it down a bunch of times until I'm happy. STEPHANIE: I like that a lot. That sounds right. That sounds very you to work even on a conference talk iteratively. 
JOËL: It's very time-consuming. So I don't know it's the most efficient way to build a talk, but it's a process that works for me. STEPHANIE: Yeah, that's true. And then there's value in the journey, even if the talk ends up changing from the very beginning to the end product. JOËL: So the approach that you described for yourself, I think, where you have a rough draft, and you're separating the editing from almost like a creative process, reminds me a lot of an article that I read called "Mise en Place Writing" by...I'm not sure what their full name is. They go by the handle Swyx. This is an article about their process for writing, but I think it applies to conference talks as well. Have you seen this article? STEPHANIE: I haven't. But that, I think, is similar to how I've thought about it or I've seen or heard other authors talk about their writing process and it being kind of similar where the creative work...they give themselves a lot of grace and just letting it be. And then the, like I mentioned, real work is in the editing process. It's kind of two different mindsets, I think. JOËL: We'll link the article in the show notes. STEPHANIE: I'm curious then how you incorporate visuals into your process because I think that's where my workflow is a little less successful because I'm not really thinking about visuals along with the words, and they do feel more like an afterthought. And I've always been really impressed when people who give talks can have a really visual and dynamic slide presentation. How does that work for you? JOËL: So I think I try to avoid slides that are three bullet points in the slide, and then I'm going to talk about it for three or four minutes for each bullet point. People read those quickly and then check out. I'll oftentimes try to have, like, turn each of those bullet points into a full-on slide. And maybe it's just a title and a fun picture or something like that. What this ends up doing is I kind of really inflate my slide deck. I'm going through maybe 80 or 100 slides in a 30-minute presentation. So it's multiple slides a minute. They move by really quickly. So I usually have either just an image or a header. I will usually start by just sketching it out with headers and then, where it makes sense, using an image. An image can be just for fun, or it can be something like a diagram where it is trying to illustrate a point. STEPHANIE: Yeah, I like that. I think talks with a lot of slides that are mostly just images or something that you can grasp in a few seconds are really engaging because you're keeping it moving, and you don't really let people get bored. And so you show a new slide, and they look at it, but then they are able to direct their attention back to what you're saying. JOËL: It's fun too with images because you can reuse them, and then they become a way to connect people back to a theme or let them know that you're making the same point again. A lot of talks, I will have a central theme that gets repeated. I'll often have a slide with some fun image with my key point on it. And then that slide will show up three or four times in my presentation oftentimes because each of the main points I'm trying to make kind of culminates at that same takeaway. And so for example, in the talk I gave at RubyConf Mini last fall, I had a slide about writing Ruby code being delightful. I think having some children being happy with just a big title being like, "Oh, delightful," or something like that. 
And after each of my examples where we went from code that was less good to something that was more idiomatic Ruby that was really fun to work with, I would finish on that slide and be like, hey, our code is now delightful. And hopefully, that helped people with the takeaway of, like, we want to write delightful code. Ruby has tools to do that. And then, hopefully, they either remember the things they can do to get to that point or can look it up and find a talk online. STEPHANIE: Yeah, I watched that talk, and I really vividly remember that slide and the theme that you were trying to hone in on. So I thought it was pretty effective. I think this makes me realize speaking, I mean, speaking is obviously a skill, but even the process of creating a talk in that particular medium is also a vast skill and can go...there are so many different styles and flavors. But I really think that what you said will get me thinking, next time I'm writing my talk, about how I can better incorporate that kind of engagement with the audience and make sure that the way I deliver the talk is just as thoughtful as the content itself. JOËL: Yeah, I've been putting a lot of thought into what makes a good talk and what elements are unique to my process, what elements can be useful to others because now I have to coach someone else on their process and say, "Hey, here's the thing that worked for me. Maybe this will be helpful for you." Or maybe it's just, "Have you tried this?" Or "I think audiences will be asking this question at this moment, what do you think of this?" So that's definitely been top of mind in a whole other dimension for me recently. How about you? What's new in your world? STEPHANIE: So before we started recording, I was heads down deep in the muck of trying to write some tests, some RSpec tests on my client project. And the domain for this client project is really big. There are a lot of models. And I was starting to go deep into the factory setups for our test fixtures. And it was hairy. And I was just going further and further down the rabbit hole to the point where I was skipping lunch. JOËL: Ooooh. STEPHANIE: Yeah, I was like, I couldn't pull myself away from it, and I kind of regret it a little bit. [laughs] And so I was just thinking about, like, how can I incorporate taking breaks a little bit more and feeling better about stepping away from the work when I'm really deep in it? You and I had this standing appointment to record [laughs] a podcast, so that was kind of the signal to me that it was time to try to set it aside. And I did end up taking the dog for a walk around the block beforehand to get some fresh air, but yeah, it was a little rough, I don't know. How do you deal with just being so deep in the code that you don't really want to resurface? JOËL: That's hard because sometimes I'm feeling productive, and I don't want to stop because I feel like I might not get back into the flow quite as easily. Sometimes it's just out of frustration. It's like, oh, I'm just so close to getting this bug done. If I get this one more test to pass, then I'll be good. And I keep doing one more thing. And the next thing I know, I have skipped lunch, and it's late in the afternoon. And it's just like, it's been a frustrating day. STEPHANIE: And you're cranky, yeah, yep. I know that feeling. JOËL: I've stopped being productive for the past hour. But I'll be like, one more thing, one more thing. 
STEPHANIE: I think I was in that place because I was starting to get deep into the internals of models completely unrelated to the test that I was writing, but that was just where the rabbit hole led me. And I think after this, I will go and ask in Slack for a pair because I think that would be really helpful right now. I've just reached the limits of what I know. And I'm almost positive that someone knows how to do this more efficiently than I do. So that was a bit of a signal to me, but it was very challenging untangling myself out of that headspace. JOËL: Have you ever played the video game Civilization? STEPHANIE: No, I haven't. JOËL: It's a turn-based historical strategy game. The running joke about it is that people get really pulled into it. And they're always just saying, "I'm going to play one more turn, and then I'm going to be done for the evening." And the next thing you know, it's 4:00 a.m. And I think that sometimes applies to fixing one more failure, just getting one more file in that chain of figuring out what the bug is in code. It's a very similar feeling. STEPHANIE: Yeah, I know exactly what you're talking about. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So it can be really frustrating when you're kind of chasing down a bug because something's gone wrong, and now you're spending a whole afternoon figuring out where it is. Do you ever find yourself maybe acting preemptively to try to prevent those sorts of things from happening in the first place? So maybe putting in some sort of guards or error handling or something like that so that your future self won't have to spend that afternoon. STEPHANIE: That's a great point because the bug that I was facing just now was definitely something I think could have been avoided. It was a classic no method [laughs] on nil class error. And I am still unsure how that happened, and I hope to come back to it after this. But yeah, that certainly is a great topic to get into, error handling. 
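For context, the "no method on nil class" error Stephanie mentions usually comes from chaining calls through data that might be missing. A minimal sketch with made-up data, showing the crash and the usual guards:

```ruby
payload = { user: { address: nil } }

# Chained indexing crashes as soon as an intermediate value is nil:
#   payload[:user][:address][:city]
#   # => NoMethodError: undefined method `[]' for nil

# Hash#dig short-circuits on nil and returns nil instead of raising:
payload.dig(:user, :address, :city) # => nil

# Hash#fetch turns a missing key into a loud, descriptive failure:
#   payload.fetch(:account)
#   # => KeyError: key not found: :account
```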
I think it's been on my mind a little bit lately because I'm working on a full-stack feature that has user-facing errors and things we want to make sure that we communicate to the user so that they could hopefully do something about it or just contact customer support on this app. But there are also some API calls that are kicked off in the process of the user submitting the form, and those can lead to a bunch of different failures. And we may or may not have already discovered what those failures could be, and there may or may not have been designs created for those different failure states. And I feel like I haven't quite gotten a handle on how to deal with all of the possible errors that can happen when implementing a full-stack feature or a vertical slice. Yeah, that has tripped me up a lot lately. JOËL: I think my time working in Elm has really made me much more aware of the different ways that things can fail just because Elm's type system is very robust. It's very complete. And so it will point out to you every potential place that could have a failure and ask you to handle it because it doesn't want to get to a point where it doesn't know what to do and there's a runtime error for something like no method or something like that. So if you've got a potentially nullable value and you're trying to say, okay, take this and render it, the compiler will say, wait a minute, you did not handle the null case. Give me something to do with the null case, or I refuse to compile. And now you've got to handle that. If there's something that might fail, like an HTTP request, again, the compiler would be like, well, but what about the failure case? You didn't tell me what to do on the failure case. This is an incomplete piece of code. I refuse to compile that. So I think I've built now a little bit of anticipation because I know the compiler is going to tell me to do this. Now even when I write code that's not compiled, like Ruby, my brain compiler is still like, oh, there's a nullable value here. You didn't check the null case. What are you going to do about that? STEPHANIE: Yeah, that's a great point. I think the more experience I gain, the more possible errors I see in the world or out in the wild. When I think about developing on the web, you know, you mentioned HTTP requests, but also, if we fail to connect to the database or a job fails to enqueue, there are just so many places where things can go wrong. And it's almost like the more I learn about all those possible failures, the more anxious I am [laughs] to make sure that I've covered everything, though I think there is some amount of that just being impossible. And I'm particularly interested in figuring out what is enough because one thing that I really find quite painful is when you don't think through things enough and you just cross your fingers and hope it works and you ship it, and then your team is dealing with a lot of bugs or a lot of noisy error monitoring notifications afterwards. And so that's kind of what I'm trying to adjust for, I think. JOËL: I think there's like two general classes of approach you can use to deal with that; one is to try to prevent errors altogether, and there's a variety of tools you could use for that. I'm thinking of either something like a type system or maybe test-driven development or even some sort of analysis tool. That could be diagramming, that could be decision tables, something like that. All those, I think, fall under better understanding of the edges of your system. 
Whereas sometimes you want to do the opposite and sort of really lean into, okay, errors will happen. How do we recover from them? How do we make them easy to diagnose in the future? STEPHANIE: Yeah, that second bit is really interesting to me because I've started to try to think about the errors and who we want to notify about the errors. And so I feel like there are a few different categories of errors where if it's a validation error and it's something that the user can fix, you know, that we want to make sure to surface and tell them how they can fix it. If it's like a programming error, there's no value in showing that to the user. And I'm sure that we've all seen a website that responded with a 500, but then we actually saw the error message itself, and we're like, ooh, this is kind of weird [laughs] to be seeing this. And so realizing, okay, that's not valuable to the user. But what should I be doing with it instead? And maybe that is hooking it up to whatever error monitoring service you use to make sure someone is alerted. Or, I don't know, even in the third case, like, what should a customer support team be notified about? And that kind of sits in between not quite a user-facing error but also not a bug, and that's a different category. JOËL: So, something that is not necessarily a problem in the code, but you might want somebody in the company to know about and be notified about. STEPHANIE: Yeah, exactly. Maybe not something that is so urgent that it needs to be flagged in real-time but goes somewhere, and someone will check on it at some point. [laughs] So you were mentioning that you now have a better sense of what could go wrong. How much time do you spend writing code to cover all of those different possibilities? JOËL: Hmm, I don't know that I've ever put in the time to quantify it. I would say a decent amount because you've got to think about...sometimes they're not even things that can go wrong per se. But they're off that very simple, linear happy path that you're thinking of. So you might think even for rendering some kind of view, and you've got some search results you're trying to display. Have you considered an empty state? Is there a difference between initially loading the page, having not performed a search yet, and having searched but not found any results? Those are things that are not necessarily errors, but they're not things you're keeping in mind when you're just writing that first happy path of, like, oh, load page, show results. I assume there's always a result set. And so those are things that are important for the user experience that you need to have, but that are kind of edge cases that you have to add in afterwards, or you have to think about. And so I think that, for me, tends to fall under a similar category as, okay, what if an error happened? Especially when you're dealing with kind of a full-stack situation where on the front end maybe you're making a request to a back end to pull down...let's say you're making a search and the back end is doing the actual search. You send up a query. Now you get back a failure. Is that the same thing as getting no results back? Like, a success with no results, versus an error code, versus not making a query yet. So you've got like four or five states you've got to think about on the front end to display and how you're going to handle those. So I think thinking about those upfront is often really helpful. 
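A sketch of how those four or five states could be made explicit in Ruby so that rendering code has to handle each one. The names here are illustrative, and it uses Ruby 3.2's Data class, which comes up again in the Value Objects episode further down this page:

```ruby
# Each UI state gets its own type; pattern matching with no else
# branch raises if a state is forgotten, instead of silently
# rendering nothing.
module Search
  NotAsked = Data.define
  Loading  = Data.define
  Loaded   = Data.define(:results)
  Failed   = Data.define(:error)
end

def results_message(state)
  case state
  in Search::NotAsked then "Type a query to get started"
  in Search::Loading then "Searching..."
  in Search::Loaded(results: []) then "No results found"
  in Search::Loaded(results:) then "#{results.size} result(s)"
  in Search::Failed(error:) then "Search failed: #{error}"
  end
end

results_message(Search::Loaded.new(results: [])) # => "No results found"
```

Because an unmatched case/in raises NoMatchingPatternError, "success with no results" and "failure" can't quietly collapse into the same branch.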
STEPHANIE: As you were talking about that, I suppose I asked the question because I have experienced when those things are not thought about upfront, and then you discover them as you're implementing. And how much time do you use to kind of go into a little detouring trying to make sure that you have all of the edge cases covered, and at what point do you stop? Because you're like, I've covered what I can. And this ticket was supposed to only be three points [laughs] or whatever, you know. JOËL: Yeah. And how do you keep a feature from ballooning when there are all these edge cases? STEPHANIE: Yeah, exactly. It's a balance. JOËL: Are there any techniques that you like to do when you...so you pick up a ticket that looks easy, but that might have a lot of these hidden edge cases in it. Are there techniques you like to use that might help you figure out those edge cases and maybe give you some follow-up questions that you might reach out to the product person to clarify? Or maybe it's mostly intuition and experience as a developer that you kind of figure out that, oh, these are the things we need to ask about. It's like, have you thought about an error state? STEPHANIE: Yeah, that's a good question. In general, I'm a little suspicious of any ticket that doesn't include some kind of acceptance criteria about the unhappy path. And I certainly think it's a lot easier once you are embedded into this domain, and you have that expertise, and you are able to see the possible issues you'll run into. I do think that I like to do a little bit of coding just to kind of explore the space, and then that does give me more insight into how I might be able to follow up on the ticket. So you mentioned techniques. Especially if they're written as user stories, I don't think they necessarily incorporate the flow or the procedure of how things are kicked off. And so when you're thinking about implementing it, you're like, oh, this actually needs to happen in the background, or this should be synchronous or not. And those are a lot of error states that I find are missing when I pick up a ticket. And I think it also depends on which way you want to implement it what implementation is viable. And then maybe you bring it to a product person, and they are actually like, "No, we don't want it to work like that." And then you have to kind of rethink things a little bit. But yeah, certainly, the process of taking a user story and then doing an initial think-through of what approach you want to take definitely surfaces some potential unhappy paths. JOËL: It's almost like prototyping it in your mind. STEPHANIE: Yeah, I think so. I think it also depends a little bit on the team because if the engineer wrote the ticket, then there likely has been some thought about unhappy paths. But on other teams that I've been on when implementation is up to the person who picked it up rather than kind of spelled out for you by someone else who did that thinking, that's definitely an opportunity to pause, I think, and document which way you might want to go so that you can make sure that you account for the possible things that could go wrong that likely the user story didn't cover. JOËL: Sometimes there are some edge cases or failure states that are just sort of built into the problem that you're solving. If you're having to make a background request, there's always a chance that that might fail because the network is not trustworthy. 
Sometimes though, those things just kind of come out of our implementation, the fact that we implemented it in a particular way. And that's not something that you'd expect a product person to have to think about. That's more on us as developers to be like, oh yeah, well, I'm indexing into a hash and didn't think to check: is a nested hash even present? Maybe that key isn't there. And now I've got a weird nil error, an undefined method. That's kind of on us rather than on, like, oh, a specific kind of thing that we can think about upfront. STEPHANIE: Yeah, that's fair. And I think that is just an important part of the development process. Though you make a good point because I think that just kind of speaks to all of the different layers of things that can go wrong [laughs] and figuring out which ones are specific to your role as a developer to account for, and then which are ones that you need to bring in or pull in a designer to chat about. It can be a little overwhelming. I'm overwhelmed just thinking about it. [laughter] JOËL: Yeah, errors are not a sort of monolithic class of things. They can't be an afterthought. But they're also not just a thing where it's like, oh yeah, do the error handling, and then you're good. We kind of lump a lot of things under the concept of errors, even if they might all eventually manifest as some kind of exception. I guess a true solution is just one giant top-level rescue nil. STEPHANIE: [laughs] Very funny. JOËL: So we've talked about a few different dimensions of errors where they might be sort of user-visible or not, or something that's more implementation-based versus inherent to the problem. One thing that we haven't looked at is the dimension of errors that might be recoverable versus not. Have you ever built a system where you had errors that could be recovered from and didn't crash the program? STEPHANIE: Ooh, yes. That makes me think about retrying and especially what you're saying if things are happening in the background. Maybe there is an ephemeral error where the network timed out or something. But if it is given another shot, it might succeed on the second go. And I think there's a whole process of thinking about what happens when a process has to be retried: if there were any side effects that you didn't want to have committed the first time around, you know, or something that was supposed to happen didn't, then when the process happens again, things are very broken. So making sure that you are keeping things idempotent so that by doing it again, there are not any unforeseen issues. JOËL: I heard you say that word commit here, and that's kind of a keyword to my mind. I immediately think database transactions. Is that the sense that you're thinking about this term here? Or does it have another meaning for you in this context? STEPHANIE: Yeah. I do think I used that word specifically because when I've run into this in the past, it has been around making database changes. I'm trying to think if there is another way that this might show up. I think even in something like sending an email, too, though it is a bit lower stakes. I've certainly, as a user, experienced when that goes wrong and just been [laughs] flooded with emails and being like, wow, this is annoying. And that's, I think, something valid to consider as well. JOËL: Yeah. You don't want that email job to be a thing that gets retried and just keeps failing because there's a nil error after the email gets sent. 
And so we just re-enqueue it, re-enqueue it, re-enqueue it, and the person ends up receiving 500 emails. STEPHANIE: What about you? Any thoughts about recoverable errors? JOËL: Yeah. I think really common for me is thinking about that in the context of a background job because those are things that I think are specifically designed to be retried if they fail; at least, a lot of job enqueuing systems assume that. When we write them, we don't always take that into account, but that's the system that we're working in. So that can be something as simple as marking somewhere that you have sent that email so that you don't resend it if that job ever re-executes. I think that goes to your point about idempotency earlier that you often want to write code that can get executed multiple times but doesn't necessarily do the action multiple times. It will do it at most once. And that's probably an interesting distinction to have: knowing what elements of your code need to execute at most once, versus as many times as the code is called, versus things that might get tried and then rolled back like a database transaction. And so then that will...I guess you could say it's at most once because you're writing it but unwriting it. But that plays out a little bit differently than something like an email where you can't undo sending an email outside of Gmail. STEPHANIE: Yeah, I love that undo button. [laughs] JOËL: You need some other mechanisms for that. STEPHANIE: Yeah. As you were talking about that, I was also thinking about the idea of failing gracefully, which I think also ties into the idea of recoverable. So this is not a development-specific example. But the idea of an escalator no longer working...well, at least you can use it as stairs. So that doesn't mean that everything is totally broken and people are unable to get from one floor to another. So maybe if there is a network request that's touching data and that fails, you can at least fall back on something that's cached. That mindset, I think, really is important to think about at all the different levels we are talking about. JOËL: Yeah, or hopefully, even maybe some amount of graceful degradation. On a front-end app, you might not want to just crash the whole thing if one background request failed. So you can try again. You can be told, okay, try again in so much time. Maybe we automatically retry to make that same request with some sort of exponential backoff strategy. Or maybe we say, "Look, search is down for now. Here's a link if you want to go check a status page. Until then, other parts of the site are still working." I feel like we're getting back into what makes great product design and how great product designers have to think about failure conditions. It has to be at the forefront of the thinking that goes into designing that product. STEPHANIE: Yeah, that's a good point. I think my initial feelings of being overwhelmed and stressed about dealing with errors may be because a lot of it falls on the developer if those things aren't accounted for. And we spoke a little bit earlier about, okay, what is within our realm or domain of actually being responsible for, and what can we loop in others for help with? JOËL: So we've been talking a lot about different ways of preventing errors, different ways of recovering, generally trying to make the whole experience really smooth. A slightly different philosophy around errors is, rather than preventing them early, to fail early, like, fail early and loudly. And maybe you recover, maybe you don't. 
That depends on the context. But instead of putting so much effort into preventing errors upfront, it's better to just crash a lot or to fail loudly and deal with the consequences, or have a strategy for dealing with failure because failure is inevitable. How do you feel about that philosophy? STEPHANIE: Oh, I think it has a time and a place. One example I'm thinking of is if you don't want your application to be deployed if some configuration is not exactly how it needs to be for the app to run effectively. And so there is a matter of, like, okay, I really want to make sure that the DevOps team or the development team knows that something is very wrong because if this were to be deployed, the app would be unusable. And so that's an example to me of failing loudly but, ideally, not letting it affect end users because they're still using [laughs] the site on a different version. [laughs] JOËL: Right. I guess the classic example of that is for a Rails app, doing a Hash#fetch() on the environment to load up your environment variables instead of using the square bracket syntax so that as the app is booting and executing those initializers, it will crash if it encounters one of those and then fail to deploy if you're doing a deploy via something like Heroku. I've even sometimes when I'm adding environment variables, purposefully had them loaded in an initializer rather than maybe like in a class later on, specifically so that it would crash the app and prevent deployment if that environment variable was not set. STEPHANIE: Yeah, that's what I was thinking too, environment variables. Though I think even with that kind of mindset, you're either just delegating that responsibility to someone else down the line to either figure out how to accommodate or account for in a graceful manner. Or you are creating an environment where everyone is very stressed out [laughs] and having to fight fires. So I think it also requires a little bit of thought and isn't necessarily a strategy to just completely embody. [laughs] JOËL: I've noticed that bugs and errors often accumulate at the boundaries of systems or even subsystems or modules within a program. Maybe the place to apply the strategy of failing early and loudly is particularly valuable at those boundaries. But internally, within a subsystem or a module, maybe it's nicer to use other strategies for error handling. Does that sound like maybe a useful distinction to you? STEPHANIE: Yeah, I think subsystems was the keyword to me there because you don't want it to be such a catastrophe that it affects the usability of the app entirely. But that does still require some systems in place, I think, to respond to when that thing is failing loudly. JOËL: I think an example that came to mind for me is like you were mentioning earlier, a full-stack application. And if you've got the back end that's providing an API and something is wrong, I don't want that API to give me back garbage data and try to pretend that everything's okay. Let that API give me back some kind of error code. And I in the front end, I already know that the network is inherently unreliable. I'm planning to handle errors at that point. So it's fine for the API back end to fail loudly in this case. In fact, I think that's the optimal solution. 
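In code, the initializer pattern Joël described looks like this; ENV.fetch raises a KeyError at boot when the variable is missing, while the square-bracket syntax silently returns nil ("PAYMENT_API_KEY" is a made-up variable name):

```ruby
# config/initializers/payment.rb
#
# ENV["PAYMENT_API_KEY"] would return nil and defer the failure to
# whenever the key is first used. ENV.fetch raises KeyError while the
# app is booting, so a misconfigured deploy fails loudly and early
# (and a Heroku-style deploy never goes live).
PAYMENT_API_KEY = ENV.fetch("PAYMENT_API_KEY")
```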
STEPHANIE: Yeah, I think that's true because ideally, that error clues you into what kind of thing failed, and then maybe you can use that information more meaningfully than trying to guess at what happened with this bad data and then having to define some kind of error message in your app when ideally someone else who had more knowledge about it could have told you what went wrong. JOËL: And I guess the problem with not failing loudly or with an explicit failure is that if you try to just pass on some sort of value that will pretend to be like what you initially asked for, whoever's consuming that doesn't know that something went wrong. So then you use this garbage data, and you do some things and pass it along to the next person. And eventually, it may cause a failure three or four steps down the line. And now, trying to trace that, like, why did this fail? And it's not because something was wrong in Module D, or C, or B. It all comes back to oh, A had a problem but didn't crash or give us an error. It tried to pass its sort of best guess, like, this is probably okay. And then it just kind of moved all the way down the line. STEPHANIE: I'm imagining external API developers everywhere just nodding their heads in agreement. [laughs] JOËL: I've fought this on a local level as well. I was working on some kind of code for a JavaScript date picker plugin, and this was back in the jQuery days. It was some kind of...it was not a date picker. I think it was a typeahead drop-down thing. In some situations, I forget exactly how it would happen, but the input from there might be empty, but then that would get converted into an undefined value, which then JavaScript would convert to the string undefined, which would then get passed to something else that if it saw a string, it thought that was the thing that the user typed, and they would pass it through. And I think maybe in the end, I was looking at a crash ten functions away in the front-end code that had to deal with the input from this typeahead and being like, why am I getting these undefined? Or maybe it was a string NaN or something like that. Like, why am I dealing with these weird strings that should never have come out of this? And it turned out it was just kind of an edge case. It wasn't addressed in this component further on, and then it was kind of leaking strings that everybody else thought was sensible up until three or four jumps further down the stack. STEPHANIE: Yeah, that's a great point. I think it does go back to the idea of there being preventable errors. And then there are things that are truly not preventable because we live in a physical world [laughs] where computers talk to each other over the wire. And that distinction is, you know, perhaps the first being avoidable errors by writing resilient code. And the second being like, okay, in reality, there will be things that go wrong, and this is what we really have to watch out for. On that note, shall we wrap up? JOËL: Let's wrap up. [laughs] STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. 
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

Elixir em Foco
24. Elixir Radar with Hugo Baraúna

Elixir em Foco

Play Episode Listen Later Mar 1, 2023 67:58


In this episode, we talk with Hugo Baraúna, creator and maintainer of the Elixir Radar newsletter and co-founder of Plataformatec (the company where Elixir was born). Listen to this interview on YouTube at https://youtu.be/P59Au1a4DAo Hugo Baraúna https://br.linkedin.com/in/hugobarauna https://github.com/hugobarauna https://twitter.com/hugobarauna https://hugobarauna.com/ Elixir Radar https://elixir-radar.com/ Elixir Patterns book https://elixirpatterns.dev/ Elixir Ecosystem 2020 responses data https://github.com/hugobarauna/elixir-ecosystem-2020-reponses-data TDD e BDD na prática: Construa aplicações Ruby usando RSpec e Cucumber https://www.amazon.com.br/TDD-BDD-pr%C3%A1tica-Construa-aplica%C3%A7%C3%B5es-ebook/dp/B08FPVS5TW/ Respondendo a Elixir Ecosystem Survey 2020 https://www.youtube.com/watch?v=SVNy2RD5SY0 What's new in Livebook 0.7 https://www.youtube.com/watch?v=lyiqw3O8d_A How to query and visualize data from Google BigQuery using Livebook https://www.youtube.com/watch?v=F98OWdigCjY O que é um sabático? Pra que serve e o que você aprende com ele? https://hugobarauna.com/o-que-e-um-sabatico-pra-que-serve-e-o-que-voce-aprende-com-ele/ O case da Plataformatec com o Elixir - Hugo Baraúna https://www.youtube.com/watch?v=XnEAllPTNWw Elixir Code Smells https://anchor.fm/elixiremfoco/episodes/15--Elixir-Code-Smells-com-Lucas-Vegi-UFV-e-Marco-Tulio-Valente-UFMG-e1jb1bb Hugo Baraúna: Cofundador da Plataformatec https://anchor.fm/policast/episodes/Hugo-Barana-Cofundador-da-Plataformatec-e1pa4mt Inside Elixir Radar with Hugo Baraúna https://podcast.thinkingelixir.com/15 Our channel is https://www.youtube.com/@ElixirEmFoco Join the Erlang Ecosystem Foundation at https://bit.ly/3Sl8XTO. The foundation's website is https://bit.ly/3Jma95g. Our website is https://elixiremfoco.com. We're on Twitter at @elixiremfoco https://twitter.com/elixiremfoco. Our email is elixiremfoco@gmail.com. --- Send in a voice message: https://podcasters.spotify.com/pod/show/elixiremfoco/message

The Bike Shed
367: Value Objects

The Bike Shed

Play Episode Listen Later Jan 17, 2023 34:00


Joël's been traveling. Stephanie's working on professional development. She's also keeping up a little bit more with Ruby news and community news in general and saw that Ruby 3.2 introduced a new class called data to its core library for the use case of creating simple value objects. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Maggie Appleton's Tools for Thought (https://maggieappleton.com/tools-for-thought) Episode on note-taking with Amanda Beiner (https://www.bikeshed.fm/357) Obsidian (https://obsidian.md/) Zettelkasten (https://zettelkasten.de/posts/overview/) Evergreen notes (https://notes.andymatuschak.org/Evergreen_notes) New Data class (https://ruby-doc.org/3.2.0/Data.html) Joël's article on value objects (https://thoughtbot.com/blog/value-object-semantics-in-ruby) Episode on specialized vocabulary (https://www.bikeshed.fm/356) Primitive Obsession (https://wiki.c2.com/?PrimitiveObsession) Transcript: AD: thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program. We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator. STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been traveling for the past few weeks in Europe. I just recently got back to the U.S. and have just gotten used to drinking American-style drip coffee again after having espresso every day for a few weeks. And it's been an adjustment. STEPHANIE: I bet. I think that it's such a downgrade compared to European espresso. I remember when I was in Italy, I also would really enjoy espresso every day at a local cafe and just be like sitting outside drinking it. And it was very delightful. JOËL: They're very different experiences. I have to say I do enjoy just holding a hot mug and sort of sipping on it for a long time. It's also a lot weaker. You wouldn't want to do a full hot mug of espresso. That would just be way too intense. But yeah, I think both experiences are enjoyable. They're just different. STEPHANIE: Yeah. So, that first day with your measly drip coffee and your jet lag, how are you doing on your first day back at work? JOËL: I did pretty good. I think part of the fun of coming back to the U.S. from Europe is that the jet lag makes me a very productive morning person for a week. Normally, I'm a little bit more of an evening person. So I get to get a bit of an alter ego for a week, and that helps me to transition back into work. STEPHANIE: Nice. JOËL: So you've also been on break and have started work again. How are you feeling productivity-wise, kicking off the New Year? STEPHANIE: I'm actually unbooked this week and the last week too. So I'm not working on client projects, but I am having a lot of time to work on just professional development. 
And usually, during this downtime, I also like to reassess just how I'm working, and lately, what that has meant for me is changing my note-taking process. And I'm really excited to share this with you because I know that you have talked about this on the show before, I think in a previous episode with a guest, Amanda Beiner. And I listened to that episode, and I was really inspired because I was feeling like I didn't have a note-taking system that worked super well for me. But you all talked about some tools you used and some, I guess, philosophies around note-taking that like I said, I was really inspired by. And so I hopped on board the Obsidian train. And I'm really excited to share with you my experience with it. So I really like it because I previously was taking notes in my editor under the impression that, oh, like, everything is in one place. It'll be like a seamless transition from code to note-taking. And I was already writing in Markdown. But I actually didn't like it that much because I found it kind of distracting to have code things kind of around. And if I was navigating files or something, something work or code-related might come up, and that ended up being a bit distracting for me. But I know that that works really well for some people; a coworker of ours, Aji, I know that he takes his notes in Vim and has a really fancy setup for that. And so I thought maybe that's what I wanted, but it turns out that what I wanted was actually more of a boundary between code and notes. And so, I was assessing different note-taking and knowledge management software. And I have been really enjoying Obsidian because it also has quite a bit of community support. So I've installed a few plugins for just quality-of-life features like snippets which I had in my editor, and now I get to have in Obsidian. I also installed things like Natural Language Dates. So for my running to-do list, I can just do a shortcut for today, and it'll autofill today's date, which, I don't know, because for me, [laughs] that is just a little bit less mental work that I have to do to remember the date. And yeah, I've been really liking it. I haven't even fully explored backlinking, and that connectivity aspect, which I know is a core feature, but it's been working well for me so far. JOËL: That's really exciting. I love notes and note-taking and the ways that we can use those to make our lives better as developers and as human beings. Do you have a particular system or way you've approached that? Because I know for me, I probably looked at Obsidian for six months before I kind of had the courage to download it because I didn't want to go into it and not have a way to organize things. I was like; I don't want to just throw random notes in here. I want to have a system. That might just be me. But did you just kind of jump into it and see, like, oh, a system will emerge? Did you have a particular philosophy going in? How are you approaching taking notes there? STEPHANIE: That's definitely a you thing because I've definitely had the opposite experience [laughs] where I'm just like, oh, I've downloaded this thing. I'm going to start typing notes and see what happens. I have never really had a good organizational system, which I think is fine for me. I was really leaning on pen and paper notes for a while, and I still have a certain use case for them. Because I find that when I'm in meetings or one-on-ones and taking notes, I don't actually like to have my hands on the keyboard because of distractions. 
Like I mentioned earlier, it's really easy for me to, like, oh, accidentally Command-Tab and open Slack and be like, oh, someone posted something new in Slack; let me go read this. And I'm not giving the meeting or the person I'm talking to my full attention, and I really didn't like that. So I still do pen and paper for things where I want to make sure that I'm not getting distracted. And then, I will transfer any gems from those notes to Obsidian if I find that they are worth putting in a place where I do have a little bit more discoverability and eventually maybe kind of adding on to my process of using those backlinks and connecting thoughts like that. So, so far, it's truly just a list of separate little pages of notes, and yeah, we'll see how it goes. I'm curious what your system for organizing is or if you have kind of figured out something that works well for you. JOËL: So my approach focuses very heavily on the backlinks. It's loosely inspired by two similar systems of organization called Zettelkasten and evergreen notes. The idea is that you create notes that are ideas. Typically, the title is like a thesis statement, and you keep them very short, focused on a single thing. And if you have a more complex idea, it probably breaks down into two or three, and then you link them to each other as makes sense. So you create a web of these atomic ideas that are highly interconnected with each other. And then later on, because I use this a lot for either creating content in the future or to help refine my thinking on various software topics, so later on, I can go through and maybe connect three or four things I didn't realize connected together. Or if I'm writing an article or a talk, maybe find three or four of these ideas that I generated at very different moments, but now they're connected. And I can make an article or a talk out of them. So that's sort of the purpose that I use them for and how I've organized things for myself. STEPHANIE: I think that's a really interesting topic because while I was assessing different software for note-taking and, like I said, knowledge management, I discovered this blog post by Maggie Appleton that was super interesting because she is talking about the term tools of thought which a lot of these different software kind of leveraged in their marketing copy as like, oh, this software will be like the key to evolving your thinking and help you expand making connections, like you mentioned, in ways that you weren't able to before. And was very obviously trying to upsell you on this product, and she -- JOËL: It's over the top. STEPHANIE: A little bit, a little bit. So in this article, I liked that she took a critical lens to that idea and rooted her article in history and gave examples of a bunch of different things in human history that also evolved the ways humans were able to express their thoughts and solve problems. And so some of the ones that she listed were like storytelling and oral tradition. Literally, the written language obviously [laughs] empowered humans to be able to communicate and think in ways that we never were before but also drawings, and maps, and spreadsheets. 
So I thought that was really cool because she was basically saying that tools of thought don't need to be digital, and people claiming that these software, you know, are the new way to think or whatever, it's like, the way we're thinking now, but we also have this long history of using and developing different things that helped us communicate with each other and think about stuff. JOËL: I think that's something that appealed to me when I was looking at some of these note-taking systems. Zettelkasten, in particular, predates digital technology. The original system was built on note cards, and the digital stuff just made it a little bit easier. But I think also when I was reading about these ideas of keeping ideas small and linking them together, I realized that's already kind of how I tend to organize information when I just hold it in my brain or even when I try to do something like a tweet thread on Twitter where I'll try to break it up. It might be a larger, more complex idea, but each tweet, I try to get it to kind of stand on its own to make it easier to retweet and all that. And so it becomes a chain of related ideas that maybe build up to something, but each idea stands on its own. And that's kind of how in these systems notes end up working. And they're in a way that you can kind of remix them with each other. So it's not just a linear chain like you would have on Twitter. STEPHANIE: Yeah, I remember you all in that episode about note-taking with Amanda talked about the value of having an atomic piece of information in every note that you write. And since then, I've been trying to do that more because, especially when I was doing pen and paper, I would just write very loose, messy thoughts down. And I would just think that maybe I would come back to them one day and try to figure out, like, oh, what did I say here, and can I apply it to something? But it's kind of like doing any kind of refactoring or whatever. It's like, in that moment, you have the most context about what you just wrote down or created. And so I've been a little more intentional about trying to take that thought to its logical end, and then hopefully, it will provide value later. What you were saying about the connectivity I also wanted to kind of touch on a little bit further because I've realized that for me, a lot of the connection-making happens during times where I'm not very actively trying to think, or reflect, or do a lot of deep work, if you will. Because lately, I've been having a lot of revelations in the shower, or while I'm trying to fall asleep, or just other kinds of meditative activity. And I'm just coming to terms with that's just how my brain works. And doing those kinds of activities has value for me because it's like something is clearly going on in my brain. And I definitely want to just honor that's how it works for me. JOËL: I had a great conversation recently with another colleague about the gift of boredom and how that can impact our work and what we think about, and our creativity. That was really great. Sometimes it's important to give ourselves a little bit more blank space in our lives. And counter-intuitively, it can make us more productive, even though we're not scheduling ourselves to be productive. STEPHANIE: Yes, I wholeheartedly agree with that. I think a lot about the feeling of boredom, and for me, that is like the middle of summer break when you're still in school and you just had no obligations whatsoever. 
And you could just do whatever you wanted and could just laze around and be bored. But letting your mind wander during those times is something I really miss. And sometimes, when I do experience that feeling, I get a little bit anxious. I'm like, oh, I could be doing something else. There's whatever endless list of chores or things that are, quote, unquote, "productive." But yeah, I really like how you mentioned that there is value in that experience, and it can feel really indulgent, but that can be good too. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So you mentioned recently that you've had a lot of revelations or new ideas that have come upon you or that you've been able to dig into a little bit more. Is there one you'd like to share with the audience? STEPHANIE: Yeah. So during this downtime that I've had not working on client work, I have been able to keep up a little bit more with Ruby news or just community news in general. And in, I think, an edition of Ruby Weekly, I saw that Ruby 3.2 introduced this new class called data to its core library for the use case of creating simple value objects. And I was really excited about this new feature because I remembered that you had written a thoughtbot blog post about value objects back in the summer that I had reviewed. That was an opportunity that I could make a connection between something happening in recent news with some thoughts that I had about this topic a few months ago. But basically, this new class can be used over something like a struct to create objects that are immutable in their values, which is a big improvement if you are trying to follow value objects semantics. JOËL: So, I have not played around with the new data class. How is it different from the existing struct that we have in Ruby? STEPHANIE: So I think I might actually answer that first by saying how they're similar, which is that they are both vehicles for holding pieces of data. 
So we've, in the past, been able to use a struct to very cheaply and easily create a new class that has attributes. But one pitfall of using a struct when you're trying to implement something like a value object is that structs also come with writer methods for all of their members. And so you could change the value of a member, and that kind of inherently goes against the semantics of a value object because, ideally, they're immutable. And so, with the data class, it doesn't offer writer methods essentially. And I think that it freezes the instance as well in the constructor. And so even if you tried to add writer methods, you would eventually get an error. JOËL: That's really convenient. I think that may be an area where I've been a little bit frustrated with structs in the past, which is that they can be modified. They basically get treated as if they're hashes with a slightly nicer syntax to interact with them. And I want slightly harder boundaries around the data. Particularly when I'm using them as value objects, I generally don't want people to modify them because that might lead to some weird bugs in the code where you've got a, I don't know, something represents a time value or a date value or something, and you're trying to do math on it. And instead of giving you a new time or date value, it just modifies the first one. And so now your start date is in the past or something because you happen to subtract a time from it to do a calculation. And you can't assign it to a variable anywhere. STEPHANIE: Yeah, for sure. Another kind of pitfall I remember noticing about structs was that the struct class includes the enumerable module, which makes a struct kind of like a collection. Whereas if you are using it for a value object, that's maybe not what you want. So there was a bit of discourse about whether or not the data class should inherit from struct. And I think they landed on it not inheriting because then you can draw a line in the sand and have that stricter enforcement of saying like, this is what data as a value object should be, and this is what it should not be. So I found that pretty valuable too. JOËL: I think I've heard people talk about sort of two classes of problems that are typically solved with a struct; one is something like a value object where you probably don't want it to be writable. You probably don't want it to be enumerable. And it sounds like data now takes on that role very nicely. The other category of problem is that you have just a hash, and you're trying to incrementally migrate it over to some nicer objects in some kind of domain. And struct actually gives you this really nice intermediate phase where it still mostly behaves like a hash if you needed to, but it also behaves like an object. And it can help you incrementally transition away from just a giant hash into something that's a little bit more programmatic. STEPHANIE: Yeah, that's a really good point. I think struct will still be a very viable option for that second category that you described. But having this new data class could be a good middle ground before you extract something into its own class because it better encapsulates the idea of a value object. And one thing that I remember was really interesting about the article that you wrote was that sometimes people forget to implement certain methods when they're writing their own custom value objects. And these come a bit more out of the box with data and just provide a bit more like...what's the word I'm looking for? 
I'm looking for...you know when you're bowling, and you have those bumpers, I guess? [laughs] JOËL: Uh-huh. STEPHANIE: They provide just like safeguards, I guess, for following semantics around value objects, which I thought was really important because it's creating an artifact for this concept that didn't exist. JOËL: And to recap for the audience here, the difference is in how objects are compared for equality. So value objects, if they have the same internal value, even if they're separate objects in memory, should be considered equal. That's how numbers work. That's how hashes work. Generally, primitives in Ruby behave this way. And structs behave that way, and the new data class, it sounds, also behaves that way. Whereas regular objects that you would make, they compare based off of the identity of the object, not its value. So if you create two user instances, not ActiveRecord, but you could create a user class, you create two instances in memory. They both have the same attributes. They will be considered not equal to each other because they're not the same instance in memory, and that's fine for something more complex. But when you're dealing with value objects, it's important that two objects that represent the same thing, like a particular time or a unit of measure or something like that, if they have the same internal value, they must be the same. STEPHANIE: Right. So prior to the introduction of this class, that wasn't really enforced or codified anywhere. It was something that if you knew what a value object was, you could apply that concept to your code and make sure that the code you wrote was semantically aligned with this concept. And what was kind of exciting to me about the addition of this to the core class library in Ruby is that someone could discover this without having to know what a value object is, like, more formally. They might be able to see the use of a data class and be like, oh, let me look this up in the official Ruby docs. And then they could learn like, okay, here's what that means, and here's some rules for this concept in a way that, like I mentioned earlier, felt very implicit to me prior. So that, I don't know, was a really exciting new development in my eyes. JOËL: One of the first episodes that you and I recorded together was about the value of specific vocabulary. And I think part of what the Ruby team has done here is they've taken an implicit concept and given it a name. It's extracted, and it has a name now. And if you use it now, it's because you're doing this data thing, this value object thing. And now there's a documentation page. You can Google it. You can find it rather than just be wondering like, oh, why did someone use a struct in this way and not realize there are some implicit semantics that are different? Or wondering why they overrode double equals on this custom class? STEPHANIE: Yeah, exactly. I think that the introduction of this class also provides a solution for something that you mentioned in that blog post, which was the idea of testing value objects. Because previously, when you did have to make sure that you implemented methods, those comparison methods to align with the concept of a value object, it was very easy to forget or just not know. And so you provided a potential solution of testing value objects via an RSpec shared example. And I remember thinking like, ooh, that was a really hot topic because we had also been debating about shared examples in general. 
But yeah, I was just thinking that now that it's part of the core library, I think, in some ways, that eliminates the need to test something that is using a data class anyway because we can rely a little bit more on that dependency. JOËL: Right? It's the built-in behavior now. Do you have any fun uses for value objects recently? STEPHANIE: I have not necessarily had to implement my own recently. But I do think that the next time I work with one or the next time I think that I might want to have something like a value object it will be a lot easier. And I'm just excited to play around with this and see how it will help solve any problem that might come up. So, Joël, do you have any ideas about when you might reach for a data object? JOËL: A lot of situations, I think, when you see the primitive obsession smell are a great use case for value objects, or maybe we should call them data objects now, now that this is part of Ruby's vocabulary. I think I often tend to...preemptively sounds bad, but a lot of times, I will try to be careful. Anytime I'm doing anything with raw numbers, magic strings, things like that, I'll try to encapsulate them into some sort of struct. Or even if it's like a pair of numbers that always goes together, maybe a latitude and longitude. Now, those are a pair. Do I want to just be passing around a two-element array all the time or a hash that would probably make a very nice data object? If I have a unit of measure, some number that represents not just the abstract concept of three but specifically three miles or three minutes, then I might reach for something like a data class. STEPHANIE: Yeah, I think that's also true if you're doing any kind of arithmetic or, in general, trying to compare anything about two of the same things. That might be a good indicator as well that you could use something richer, like a value object, to make some of that code more readable, and you get some of those convenient methods for doing those comparisons. JOËL: Have you ever written code where you just have like some number in the code, and there's a comment afterwards that's like minutes or miles or something like that, just giving you the unit as a comment afterwards? STEPHANIE: Oh yeah. I've definitely seen some of that code. And yeah, I mean, now that you mentioned it, that's a great use case for what we're talking about, and it's definitely a code smell. JOËL: It can often be nice as you make these more domain concepts; maybe they start as a data object, but then they might grow with their own custom methods. And maybe you extend data the same way you could extend a struct, or maybe you create a custom class to the point where the user...whoever calls that object, doesn't really need to know or care about the particular unit, just like when you have a duration value. If you have a duration object, you can do the math you want. You can do all the operations and don't have to know whether it is in milliseconds, or seconds, or minutes because it knows that internally and keeps all of the math straight as opposed to just holding on to what I've done before, which is you have some really big number somewhere. You have start, or length, equal to some big number and then a comment: milliseconds. And then, hopefully, whoever does math on that number later remembers to do the division by 1,000 or whatever they need. 
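A sketch of the unit-of-measure idea Joël lands on, written with Ruby 3.2's Data class; Duration and its methods are illustrative names, not an established library:

```ruby
# The raw number and its unit live in one place; callers do math
# through the object, so no stray "milliseconds" comment is needed.
Duration = Data.define(:milliseconds) do
  def self.from_seconds(seconds)
    new(milliseconds: (seconds * 1_000).round)
  end

  def +(other)
    Duration.new(milliseconds: milliseconds + other.milliseconds)
  end

  def in_seconds
    milliseconds / 1_000.0
  end
end

timeout = Duration.from_seconds(30)

# Value semantics: same internal value means equal objects.
timeout == Duration.new(milliseconds: 30_000) # => true

# Immutability: instances are frozen and define no writers;
# "updates" build a new copy instead.
timeout.with(milliseconds: 45_000) # => #<data Duration milliseconds=45000>
# timeout.milliseconds = 0         # => NoMethodError

(timeout + Duration.from_seconds(1)).in_seconds # => 31.0
```

Because Data instances are frozen and compare by value, the object gets the value semantics recapped earlier for free.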
STEPHANIE: I've certainly worked on code where we've tolerated those magic numbers for probably longer than we should have because maybe we did have the shared understanding that that value represents minutes or milliseconds or whatever, and that was just part of the domain knowledge. But you're right, like, when you see them without a very clear label, all of that stuff is implied and is really not very friendly for someone coming along in the future. As well as, like you mentioned earlier, if you have to do math on it later to convert it to something else, that is also a red flag that you could use some kind of abstraction or something to represent this concept at a higher level but also be extensible to different forms, so a duration to represent different amounts of time or money to represent different values and different currencies, stuff like that. JOËL: Do you have a guideline that you follow as to when something starts being worth extracting into some kind of data object? STEPHANIE: I don't know if I have particularly clear guidelines, but I do remember feeling frustrated when I've had to test really complicated hashes or just primitives that are holding a lot of different pieces of information in a way that just is very unwieldy when you do have to write a test for it. And if those things were encapsulated in methods, that would have been a lot easier. And so I think that is a bit of a signal for me. Do you have any other guidelines or gut instincts around that? JOËL: We mentioned the comment that is the unit. That's probably a...I wasn't sure if I would have to call it a code smell, but I'm going to call it a code smell that tells you maybe you should...that value wants to be something a little bit more than just a number. I've gotten suspicious of just raw integers in general, not enough to say that I'm going to make all integers data objects now, but enough to make me pause and think a lot of times: what does this number represent? Should it be a data object? I think I also tend to default to try to do something like a data object when I'm dealing with API responses. You were talking about hashes and how they can be annoying to test. But also, when you're dealing with data coming back from a third-party API, a giant nested hash is not the most convenient thing to work with, both for the implementation but then also just for the readability of your code. I often try to have almost like a translation layer where very quickly I take the payload from a third-party service and turn it into some kind of object. STEPHANIE: Yeah, I think the data class docs themselves have an example of using it for HTTP responses because I think the particular implementation doesn't even require it to have attributes. And so you can use it to just label something rather than requiring a value for it. JOËL: And that is one thing that is nice about something like a data object versus a hash: a hash could have literally anything in it. And to a certain extent, a data object is self-documenting. So if I want to know, say I've gotten a shopping cart object from a third-party API, what can I get out of the shopping cart? I can look at the data object. I can open the class and see here are the methods I can call. If it's just a hash, well, I guess I can try to either find the documentation for the API or try to make a real request and then inspect the hash at runtime. But there's not really any way to find out without actually executing the code. STEPHANIE: Yeah, that's totally fair.
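A sketch of that translation layer, assuming a hypothetical shopping-cart payload — convert the raw hash once at the boundary so the rest of the code works with a self-documenting object:

  CartItem = Data.define(:name, :quantity, :unit_price)

  def parse_cart_item(payload)
    # Fail loudly at the boundary if the API's shape changes.
    CartItem.new(
      name: payload.fetch("name"),
      quantity: payload.fetch("quantity"),
      unit_price: payload.fetch("unit_price")
    )
  end

  item = parse_cart_item({ "name" => "Notebook", "quantity" => 2, "unit_price" => 599 })
  item.quantity # => 2, and the class documents everything else you can ask for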
And what you said about self-documenting makes a lot of sense. And it's always preferable to that stray comment in the code. [laughs] JOËL: I'm really glad that you brought it up. I've not tried it myself, but I'm really excited to use the data class in future Ruby 3.2 projects. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeeeeee!!!!!!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

YAGNI
RSpec w/ Justin Searls

YAGNI

Play Episode Listen Later Nov 21, 2022 51:19


Justin Searls on Twitter
Matt Swanson on Twitter
Arrows
Test Double
Justin's "What do you like about RSpec" thread (2016)
Dan North BDD
Nate Berkopec "Reading rspec documentation like..." tweet
Corey Haines: Fast Rails Tests
Kent Beck: "I get paid for code that works, not for tests..."
Nick Quaranto's minitest CLI wrapper
Mocktail gem
test_data gem

The Bike Shed
362: Prioritizing Learning

The Bike Shed

Play Episode Listen Later Nov 15, 2022 29:40


This week, Steph and Joël discuss investment time and keeping track of things they want to learn. How do you, dear listener, keep track of things you want to learn? When investment time rolls around, what do you reach for, or how do you prioritize that list? Are there things you actively decide not to focus on when choosing where to develop deep expertise? Are there things you wish you could spend time on if you could? This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack.
Bloom's Taxonomy (https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/)
thoughtbot's interview (https://thoughtbot.com/playbook/our-company/hiring#interviewing)
3 categories of learning (https://thoughtbot.com/blog/what-technologies-should-i-learn)
Four Thousand Weeks: Time Management for Mortals (https://www.amazon.com/Four-Thousand-Weeks-Management-Mortals/dp/B08XZY5ZF7/ref=sr_1_1?gclid=CjwKCAiA9qKbBhAzEiwAS4yeDZvBGi9yS1YkifUNcf0j8jx3s-NKc1pw5itLKSjI1vOfzlYJCuRNFRoC7ioQAvD_BwE&hvadid=599682518804&hvdev=c&hvlocphy=9006718&hvnetw=g&hvqmt=e&hvrand=3741098155096216457&hvtargid=kwd-1661352592925&hydadcr=28547_10703911&keywords=the+four+thousand+weeks&qid=1667835268&sr=8-1)
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I was recently having a conversation with another colleague at thoughtbot, and they brought up Bloom's Taxonomy, which is a taxonomy of different phases of learning. It's often visualized as a pyramid with a broad base that starts with remembering facts and then expands up to understanding and then up to applying, and then analyzing, evaluating, and then finally creating. So it's a way to kind of quantify the progression of someone who is trying to master a topic. And what really struck me when I saw this diagram was I immediately thought about how the tech industry interviews, and a lot of our interviews are focused on the base of that pyramid. It's all about did you memorize certain facts, or APIs, or things like that? But a lot of the value that we create as developers...to be good at our jobs, we have to actually be active much higher up in that pyramid in the analyze, evaluate, and create layers. But unfortunately, I feel like interviews often don't go that far; they're really just focused on the base. So that was a really interesting realization. We were not talking about interviewing, but this colleague shared the diagram. I looked at it, and the first thing I thought was like, oh, this is the problem with a lot of tech interviews these days. STEPHANIE: Yeah, I think a lot about how in interviews, we want to be showing off our best selves in a sense. Like, we want our interviewers to see the version of ourselves that we bring to work, which is usually, like you were saying, at that top layer and isn't recalling particular facts about how our framework works or things we might have learned in computer science class in college. And one thing I actually really like about thoughtbot's interview...even in the job application, I think it says, "We want to see your strengths and see you at your best self."
And it asks what can we, as thoughtbot, interview you on in a way that gives you the opportunity to display those skills? And so I really like that. I think I remember when I submitted that application, I might have said something along the lines of debugging a problem because I think that's where I personally shine. I don't know if it ended up being a conscious thing. But I do remember when I was doing the pairing interview, there was an aspect of debugging, and I was like, yes, this is where I can show off what I would normally do in a real-life work situation. So that really resonates with me. JOËL: Debugging is such a core developer skill, and yet I feel it's not often something that we dig into in a process like an interview. Sometimes you have almost like a code review style where you've got, oh, there is one bug hidden in here, find it, and it's almost like a gotcha sort of thing. I don't like those. But a real situation where you could show off your problem-solving and debugging skills sounds like a really good way to play to your strengths. STEPHANIE: Yeah. Where else do you think that higher level of critical analysis and creative output shows up in your day-to-day work? JOËL: I think it has to pervade the day-to-day work. The majority of my job is not remembering what method from enumerable is used to sort an array; it's trying to find a way to translate a problem that the business has into code or a code solution that will satisfy quite a lot of different constraints. This might be something that is doable in one or two days because that's all we have to allocate to this problem. So a lot of that work could be scoping down a problem. There might be some performance-related constraints where it needs to be faster than X. There are certainly some correctness constraints as well that you're trying to work within. So all of that, I think, is much more at that analysis, evaluation, and creation layers of the pyramid. STEPHANIE: Yeah, that's a really good point. I think sometimes I've seen interviews try to replicate that or recreate it in an interview question, even though they may be genuinely based off of real-life experiences that companies might have had. But most often, it's really hard to be evaluated on that situation until you're really just doing that work. JOËL: It is really hard to translate that into an interview format. I think one aspect that I do appreciate, and maybe that's just the consultant in me but having a conversation about trade-offs in a situation where there isn't a single correct answer. And so, maybe the interviewer and the candidate have different conclusions. But as long as they can show their reasoning down that path of why they came to the conclusion that they did, I think that's the important part of that. The hard thing is if the interviewer has their preferred solution, and they're just like, "No, you didn't come to my conclusion," then that's not a good interview. But a situation where a candidate gets to demonstrate their critical thinking skills, their analysis skills, their ability to make difficult decisions to balance trade-offs, I think that's a great way to show off some of those high-level skills that honestly we use on a daily basis. STEPHANIE: Yeah, I agree 100%. JOËL: So that's what I've been kind of excited about recently, just seeing this diagram and having that moment of clarity about interviewing. What's something new in your world, Stephanie? 
STEPHANIE: That's really interesting that you brought that up because it's kind of related to what I was going to say about what I've been working on on my client project, which is the ambiguity of the rewrite. So I mentioned last week that I've been rewriting some Rails views. And we're working on a pretty old legacy application, so there are a lot of things that, as we're rewriting, we need to figure out whether or not we want to include it in the new version. So it's been a little more challenging than just copying over the functionality that you want because there are a lot of things in this legacy app that were written 10-12 years ago that we don't have any context on, especially as consultants and even the people we're working with on this team, the code might even predate them. So we do our best to ask them questions about, hey, is this still necessary? Do you think we want it in this rewrite? And they don't always know the answers. And so we have to make our best judgment and make a lot of micro-level decisions about what we think is important to bring into this rewrite without a ton of that historical context. So when you were talking about those analytical, critical thinking skills, that seemed like a very relevant experience that I would say has been utilizing those aspects of learning. JOËL: Definitely, especially for a codebase that is that old. I feel like ten years is almost like a generation in software developer terms. Ten years ago would be what? 2012. That's Rails 3 still. I forget when Rails 4 came out. But yeah, that's a long time ago when you talk about technology. And at a company, even the odds of someone sticking around for that long are very low. STEPHANIE: Absolutely. And so sometimes we just choose to leave the code as it is, and we will just copy and paste it. But other times, we might try to rewrite it in a more modern way. One thing that we did recently was migrate a hand-rolled form builder to use Simple Form. And we did our best to retain most of the original functionality. But there were aspects of it, things like browser validation and stuff like that, that had to change because we made the conscious decision to use a more modern form builder. But then there were always going to be some differences, and so we had to reconcile those with the product team, have a lot of communication around what was important to keep and what wasn't. And yeah, really, just try to get the code in a better spot if we can while also acknowledging that some things have been working for ten years, and that's okay too. JOËL: So you're talking about a lot of old code that you're working with and seeing how much things have changed over ten years. And I feel like, as software developers, we're constantly having to learn and hone our skills, but it can really be overwhelming because there's so much to learn. How do you prioritize what you want to learn next? STEPHANIE: At thoughtbot, we're lucky enough to have investment times. So typically, on Fridays, most of us will not be working on client work, but we'll be working on things to improve thoughtbot internally or improve ourselves professionally. So I'm really grateful that I have dedicated learning time, and figuring out how to spend it has been both fun and also fraught in a way because like you were saying, there are so many things I want to learn about, and we internally have so much lively discussion about really cool technical things. But I've kind of accepted that I'm not going to be able to learn it all. 
And so when Friday does roll around, I do have to figure out, okay, how do I want to spend my precious investment time today? For me, it honestly feels really dependent on how I'm doing that Friday. So I do have a bit of a backlog of talks and articles that I've collected along the way or bookmarked that I might come back to if that is the mode I'm in. I also have bigger themes, I think, around frameworks and technologies that I want to dig a little more deeply into. I've been trying to work through a TypeScript tutorial for a while now, especially because it's not something that I've gotten a chance to spend a ton of time on in client work. And so in some ways, it's like, well, if I want to work on a client project using TypeScript, then I feel like I should brush up on TypeScript first. So that's kind of in the back of my head is just a more nebulous goal. But I also think that it really changes depending on how I'm feeling throughout the year. It could be very well that the TypeScript thing never comes to fruition and maybe something else will grab my attention. JOËL: I'm sure there are lessons, though, that you would learn from TypeScript that you could then use to improve your day-to day-work on a Rails project, for example. STEPHANIE: Yeah, absolutely. I think that's the really cool thing is that everything I learn in some way can connect to other things that I do know, or experience, or come across during my everyday work. So none of it ever feels like a waste of time. I think the best feeling is when you can make that connection as you are experiencing something in the codebase that reminds you of something you read about in a blog post or something like that. JOËL: Connections are one of the most crucial parts of, I think, knowledge creation. And in a past episode on note-taking, we had a whole deep conversation about how sometimes making connections between some of your notes is almost more valuable than taking a note by itself. STEPHANIE: Joël, how do you prioritize your learning? JOËL: I have three broad categories of technical learning that I like to do. The first is anything related to my core language and framework, and as of right now, that is Ruby, Rails. And maybe a little bit more broadly, anything related to the paradigms related to that, so object-oriented design, patterns related to that, all things that will help me to write better Ruby and Rails code. Then there are evergreen skills that are always great to invest in, things like getting better at Git, learning a little bit of SQL, getting better at doing things on the command line. Those are all things that I look to level up every now and then. And then, finally, just whatever interests me right now. I find that the return on investment for the amount of time you put in versus the amount of knowledge you get out is much higher when I'm personally interested. So it might be something completely unrelated to maybe more strategic elements of tech that I'm trying to get, but if I'm interested, it's worth putting a little bit of time into that. And so, for me, several years ago, that was functional programming types. Elm, I went really deep into that. And I think that really unlocked a whole other way of thinking about software for me and helped me...like we were saying earlier, I was able to bring that back to the way I think about Rails applications, the way I think about test-driven development. And that really rounded out my thinking, I think. 
STEPHANIE: Yeah, I think focusing your energy into where you're interested in makes it easier, for sure. It makes it more fun. I think like you're saying, your learning gets accelerated. And I think it's also really cool that people have different interests that they do like to go deep on. So maybe you might be thinking that you should focus your energy on this other aspect of development that you think would be really cool or useful in your work but doesn't necessarily interest you that much. Chances are that there's someone else who loves learning and talking about it, and you can use them as a resource when you want to know more. JOËL: That is a really important aspect because learning is not necessarily a solo activity. So sometimes, maybe I'm not even just prioritizing things that I think are strategically good for me or even things I'm just interested in. It might be things that my colleagues are interested in. So we have a book club that we run at thoughtbot. We've been going through the book Ruby Science, and there have been some great discussions around that. Recently, we've also been doing watch parties for episodes of I know it is RubyTapas by Avdi Grimm, but I think it rebranded recently, and I forget the new name of it, Graceful...I think Graceful.Dev. STEPHANIE: Graceful Devs, I think, yeah. JOËL: So we've been watching some of these together as a team and then having a conversation afterwards, so that's also been great. STEPHANIE: That's really cool. Yeah, I think getting other people involved makes it a lot more fun. And you have an accountability buddy. And you can have those deep, thoughtful conversations about the things you've learned. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPHANIE: I'm curious, have you ever made a conscious effort to not focus on something super deeply? JOËL: I don't know that I've made a decision to be like, I will not spend time here. 
But I've definitely made a decision that I will invest here and maybe not care quite as much there. So I've done quite a bit of different front-end technologies, starting with jQuery and Backbone.js and moving through a lot of the frameworks. Somehow I have not yet done much React. It's sort of a big hole in that list of frameworks that I have worked with. It's just not something that I've prioritized. I've done other things. I've learned concepts that I think mirror a lot of what React does, but that's not been something that I've dug into. STEPHANIE: That's really interesting because I think a lot of people think that they need to learn React because it's the popular front-end framework of the time. And so they think that it's something that they should know, or if they do ever have to work on a project with React, that kind of contributes to that feeling. But I like what you were saying earlier about how you have experience with other front-end frameworks. And that can help inform you if you ever do have to work in it. And also, there are so many great expert React devs out there. Like, we don't have to all be that dev. JOËL: Yeah. I think there can definitely be a pressure to feel like you have to know it all. And a lot of these tech stacks are changing so quickly that it becomes overwhelming to try to just keep up with everything. STEPHANIE: For sure. I remember having to write some tests for a React app, and the things that I had learned several years ago using Enzyme or something were no longer as relevant today, and having to pick up on the new best practices for writing Jest tests and React Testing Library. It was a lot, even though I was able to identify aspects of it that lined up with what I knew. It can be overwhelming, for sure. And people spend a lot of time digging deep into this framework and, like I said, becoming those experts, and accepting that I probably won't be that person [laughs] was also a little bit liberating, I think. JOËL: It's also important, I think, to accept that these sorts of labels of I'm that person, or I'm not that person are not permanent. It's I'm not that person now because that's not where I want to prioritize my time. Maybe in two or three years, it will make sense for me to become that person. And I can become that person if I put in the time, but today is not the day for me to be that person. STEPHANIE: That's a really good way of putting that. I like that a lot. JOËL: One struggle that I have, and I've seen in a lot of people too, is that it's easy to get very scattered in your learning: you'll have a lot of different things you're trying to learn at the same time, or you feel like you want to do a little bit of this and a little bit of that. And then maybe you don't go very deep in any of them and feel like you're not being very effective with your time. Do you ever feel that, and do you have any strategies you like to use to make the most out of your learning time? STEPHANIE: I really relate to that. And I think one resource that helped me reframe that conundrum, if you will, was this book called Four Thousand Weeks: Time Management for Mortals by Oliver Burkeman. It was really interesting because it kind of turned productivity culture on its head a bit because his whole thesis is that you won't achieve it all and that by trying to hack your own productivity, what you're really preventing yourself from doing is accepting the fact that time is finite.
And that you have to make hard decisions about where to focus your time in a way that will enrich your life the most. And sacrifice the idea that you will get to do everything on your to-do list, that you will learn every framework that you want to learn. And it's still hard for me to totally accept that. But I think I'm inching towards the idea that if I do drop a ball on something that I have had bookmarked for at this point, you know, a year, I'm probably never going to get around to reading that. And that's okay because I'm still getting by with the things that I am learning and applying them in the aspects of my work that are relevant to me today. JOËL: That sounds like a really refreshing take on productivity culture, maybe with some hard truths in there as well. Is 4,000 weeks the human lifespan? STEPHANIE: [laughs] Yeah, it is. It's really funny because I think he even starts off in the book quizzing one of his friends, like, how many weeks do you think we have to live? And his friend very naively answered, "Oh, must be, you know, 500,000 or so," or something like that. But he used that as an illustration of how we inflate how much time we think that we might have in a day, a week, our lifespan. [laughs] JOËL: I'm a big history nerd in my personal time. You see this theme that comes up a lot in medieval European art and the 1400s after a lot of these big plagues have happened where they feature a lot of death or skeletons or those sorts of motifs that are much more prevalent than maybe an earlier art, and this idea that comes with a Latin phrase Memento Mori (remember death). And I think there's maybe an element of that that comes back into this book at least the way you were describing it, the idea that you only have 4,000 weeks, roughly, in your life, so make the best use of it. STEPHANIE: Yeah, absolutely. It's nothing new, for sure. I think it's just one of those things that we've been grappling with as a species for as long as we've existed. [laughs] So I don't know if anyone out there feels slightly relieved that it's okay for them not to get through their list of bookmarked articles about technical things. I hope that feels slightly better for you. JOËL: We give you permission for you, the audience, to go to your bookmarks and those articles that you've been meaning to read for two years and you haven't got to; it's okay to remove them. You will be okay. STEPHANIE: Agreed. So we've talked about how we spend our investment time. But I'm curious, do you have any strategies for people who do most of their learning in their everyday work? JOËL: You know, I think that applies to me as well. We've been heavily emphasizing investment time, but that's only one day a week. And four days a week, I am doing regular application development for clients. And so the majority of my hours in a week are going to be dedicated to that. I find that being very self-aware for the things that you do and trying to notice when I learn something new or when I interact with something new has really helped me get more out of my day-to-day work. And a way to level that, I think, is to be on the lookout for opportunities to share with others. And that can be as small as just put a today I learned message in a group chat, maybe in thoughtbot's Slack developer channel, and just say, "Hey, today I learned this interesting thing about a particular method." Or "Today I learned this weird thing about time zones." Or "Today I learned this interesting fact about testing." 
And then that might start a discussion, or it might not. But the fact that I took the time to take it out of my head and write it out, I think, makes that more concrete, and it helps me hold on to it. STEPHANIE: I've noticed you are really good about doing that, about sharing things that you encounter in your everyday work in a very low-stakes kind of way. I am not so good at doing that. I tend to be so steeped in client work, and I have to really intentionally, after a project is over, think about what I learned along the way. And oftentimes, they're not as small, incremental atomic bits of information but bigger picture things about, oh, I learned how to navigate this aspect of ambiguity. And maybe the next time, I can point to a past experience or lean on a little bit more on my gut instinct to guide me towards making the right decision. And I think that's an important aspect of learning too, even if it wasn't necessarily a technical tidbit. It is part of becoming a better developer, just as equally as gaining that more concrete technical knowledge. JOËL: Intuition, I think, is really important as developers, and honing that intuition is something that is really valuable. One way that I found helpful is dialogue, just a conversation with one other person, maybe it's asynchronous over Slack, maybe it's a call in person, and just talking through an idea that I have. A recent one and I think I mentioned this on the previous episode of The Bike Shed, was talking about RSpec matchers. And does your choice of matcher impact the sorts of design that will come out of the code that you write? Does EQ tend to push you in a direction maybe where you're less strongly encapsulating data? And so that's just a thought, and then you have a conversation about it. And then that can help sharpen your intuition so that the next time you're writing a test you're not just thoughtlessly bringing in a matcher because whatever; it's the thing to do. And initially, maybe it's not intuition; it's much more explicit. You're thinking, ooh, do I want EQ, or do I want not? But I imagine that after six months of me being hyperaware of that, I will have built up some intuitions to be like, oh, this is the place where we want a custom matcher, or here's the place where I want EQ. And my hope is that that will eventually come to the point where it's so natural. Someone would almost have to stop me and say, hey, wait, why are you choosing that? And then I have to think a little bit and be like, oh, it's because of these things. But I'll have started with a conversation, which then turned into just hyperawareness thinking about it every time I do that action which then turns into intuition. STEPHANIE: Yeah. I think you can also call that experience. I remember having a conversation with someone, and I told them that I could inject their brain with all of the knowledge and information that I had. But that isn't quite the same as having really experienced the process of gaining that knowledge through more conventional learning methods but also that day-to-day client work that you're doing. So I totally agree with you there. JOËL: You took this whole long thing I had to say and were able to condense it down to one word: experience. STEPHANIE: [laughs] JOËL: Which I think, yeah, exactly describes what I'm trying to say. And with that, shall we wrap up? STEPHANIE: Let's wrap up. JOËL: The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. 
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
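An editor's aside on the eq-versus-custom-matcher tension Joël raises in this episode — a minimal RSpec sketch, where Duration and timer are hypothetical stand-ins rather than anything from the show:

  # eq couples the assertion to the object's internal representation:
  expect(timer.elapsed).to eq(Duration.new(milliseconds: 90_000))

  # A custom matcher states intent without reaching into the internals:
  RSpec::Matchers.define :last_about do |expected_seconds|
    match { |duration| (duration.seconds - expected_seconds).abs < 1 }
  end

  expect(timer.elapsed).to last_about(90)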

The Bike Shed
361: Working Incrementally

The Bike Shed

Play Episode Listen Later Nov 8, 2022 30:28


thoughtbotter Stephanie Minn joins The Bike Shed as co-host!

The Bike Shed
355: Test Performance

The Bike Shed

Play Episode Listen Later Sep 20, 2022 42:44


Guest Geoff Harcourt, CTO of CommonLit, joins Joël to talk about a thing that comes up a lot with clients: the performance of their test suite. It's often a concern because, with test suites, until it becomes a problem, people tend to not treat them very well, and then ask for help on making them faster. Geoff shares how he handles a scenario like this at CommonLit. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack.
Geoff Harcourt (https://twitter.com/geoffharcourt)
Common Lit (https://www.commonlit.org/)
Cuprite driver (https://cuprite.rubycdp.com/)
Chrome DevTools Protocol (CDP) (https://chromedevtools.github.io/devtools-protocol/)
Factory Doctor (https://test-prof.evilmartians.io/#/profilers/factory_doctor)
Joël's RailsConf talk (https://www.youtube.com/watch?v=LOlG4kqfwcg)
Formal Methods (https://www.hillelwayne.com/post/formally-modeling-migrations/)
Rails multi-database support (https://guides.rubyonrails.org/active_record_multiple_databases.html)
Knapsack pro (https://knapsackpro.com/)
Prior episode with Eebs (https://www.bikeshed.fm/353)
Shopify article on skipping specs (https://shopify.engineering/spark-joy-by-running-fewer-tests)
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Geoff Harcourt, who is the CTO of CommonLit. GEOFF: Hi, Joël. JOËL: And together, we're here to share a little bit of what we've learned along the way. Geoff, can you briefly tell us what is CommonLit? What do you do? GEOFF: CommonLit is a 501(c)(3) non-profit that delivers a literacy curriculum in English and Spanish to millions of students around the world. Most of our tools are free. So we take a lot of pride in delivering great tools to teachers and students who need them the most. JOËL: And what does your role as CTO look like there? GEOFF: So we have a small engineering team. There are nine of us, and we run a Rails monolith. I'd say a fair amount of the time, I'm hands down in the code. But I also do the things that an engineering head has to do, so working with vendors, and figuring out infrastructure, and hiring, and things like that. JOËL: So that's quite a variety of things that you have to do. What is new in your world? What's something that you've encountered recently that's been fun or interesting? GEOFF: It's the start of the school year in America, so traffic has gone from a very tiny amount over the summer to almost the highest load that we'll encounter all year. So we're at a new hosting provider this fall. So we're watching our infrastructure and keeping an eye on it. The analogy that we've been using to describe this is like when you set up a bunch of plumbing, it looks like it all works, but until you really pump water through it, you don't see if there are any leaks. So things are in good shape right now, but it's a very exciting time of year for us. JOËL: Have you ever done some actual plumbing yourself? GEOFF: I am very, very bad at home repair. But I have fixed a toilet or two. I've installed a water filter but nothing else. What about you? JOËL: I've done a little bit of it when I was younger with my dad. Like, I actually welded copper pipes and that kind of thing. GEOFF: Oh, that's amazing. That's cool. Nice.
JOËL: So I've definitely felt that thing where you turn the water source back on, and it's like, huh, let's see, is this joint going to leak, or are we good? GEOFF: Yeah, they don't have CI for plumbing, right? JOËL: [laughs] You know, test it in production, right? GEOFF: Yeah. [laughs] So we're really watching right now traffic starting to rise as students and teachers are coming back. And we're also figuring out all kinds of things that we want to do to do better monitoring of our application, so some of this is watching metrics to see if things happen. But some of this is also doing some simulated user activity after we do deploys. So we're using some automated browsers with Cypress to log into our application and do some user flows, and then report back on the results. JOËL: So is this kind of like a feature test in CI, except that you're running it in production? GEOFF: Yeah. Smoke test is the word that we've settled on for it, but we run it against our production server every time we deploy. And it's a small suite. It's nowhere as big as our big Capybara suite that we run in CI, but we're trying to get feedback in less than six minutes. That's sort of the goal. In addition to running tests, we also take screenshots with a tool called Percy, and that's a visual regression testing tool. So we get to see the screenshots, and if they differ by more than one pixel, we get a ping that lets us know that maybe our CSS has moved around or something like that. JOËL: Has that caught some visual bugs for you? GEOFF: Definitely. The state of CSS at CommonLit was very messy when I arrived, and it's gotten better, but it still definitely needs some love. There are some false positives, but it's been really, really nice to be able to see visual changes on our production pages and then be able to approve them or know that there's something we have to go back and fix. JOËL: I'm curious, for this smoke test suite, how long does it take to run? GEOFF: We run it in parallel. It runs on Buildkite, which is the same tool that we use to orchestrate our CI, and the longest test takes about five minutes. It signs in as a teacher, creates an account. It creates a class; it invites the student to that class. It then logs out, logs in as that student creates the student account, signs in as the student, joins the class. It then assigns a lesson to the student then the student goes and takes the lesson. And then, when the student submits the lesson, then the test is over. And that confirms all of the most critical flows that we would want someone to drop what they were doing if it's broken, you know, account creation, class creation, lesson creation, and students taking a lesson. JOËL: So you're compressing the first few weeks of school into five minutes. GEOFF: Yes. And I pity the school that has thousands of fake teachers, all named Aaron McCarronson at the school. JOËL: [laughs] GEOFF: But we go through and delete that data every once in a while. But we have a marketer who just started at CommonLit maybe a few weeks ago, and she thought that someone was spamming our signup form because she said, "I see hundreds of teachers named Aaron McCarronson in our user list." JOËL: You had to admit that you were the spammer? GEOFF: Yes, I did. [laughs] We now have some controls to filter those people out of reports. But it's always funny when you look at the list, and you see all these fake people there. JOËL: Do you have any rate limiting on your site? GEOFF: Yeah, we do quite a bit of it, actually. 
Some of it we do through Cloudflare. We have tools that limit a certain flow, like people trying credential stuffing against our password forms, our user sign-in forms. But we also do some further stuff to prevent people from hitting key endpoints. We use Rack::Attack, which is a really nice framework. Have you had to do that in client work, setting that stuff up with clients? JOËL: I've used Rack::Attack before. GEOFF: Yeah, it's got a reasonably nice interface that you can work with. And I always worry about accidentally setting those things up to be too sensitive, and then you get lots of stuff back. One issue that we sometimes find is that lots of kids at the same school are sharing an IP address. So that's not the thing that we want to use for rate limiting. We want to use some other criteria for rate limiting. JOËL: Right, right. Do you ever find that you rate limit your smoke tests? Or have you had to bypass the rate limiting in the smoke tests? GEOFF: Our smoke tests bypass our rate limiting and our bot detection. So they've got some fingerprints they use to bypass that. JOËL: That must have been an interesting day at the office. GEOFF: Yes. [laughter] With all of these things, I think it's a big challenge to figure out, and it's similar when you're making tests for development, how to make tests that are high signal. So if a test is failing really frequently, even if it's testing something that's worthwhile, if people start ignoring it, then it stops having value as a piece of signal. So we've invested a ton of time in making our test suite as reliable as possible, but you sometimes do have these things that just require a change. I've become a really big fan of...there's a Ruby driver for Capybara called Cuprite, and it doesn't control Chrome with ChromeDriver or with Selenium. It controls it with the Chrome DevTools protocol, so it's like a direct connection into the browser. And we find that it's very, very fast and very, very reliable. So we saw that our Capybara specs got significantly more reliable when we started using this as our driver. JOËL: Is this because it's not actually moving the mouse around and clicking but instead issuing commands in the background? GEOFF: Yeah. My understanding of this is a little bit hazy. But I think that Selenium and ChromeDriver are communicating over a network pipe, and sometimes that network pipe is a little bit lossy. And so it results in asynchronous commands where maybe you don't get the feedback back after something happens. And CDP is what Chrome's team uses and, I think, what Puppeteer uses to control things directly. So it's great. And you can even do things with it. Like, you can simulate a different time zone for a user almost natively. You can speed up or slow down the traveling of time and the direction of time in the browser and all kinds of things like that. You can flip it into mobile mode so that the device reports that it's a touch browser, even though it's not. We have a set of mobile specs where we flip it with CDP into mobile mode, and that's been really good too. Do you find when you're doing client work that you have a demand to build mobile-specific specs for system tests? JOËL: Generally not, no. GEOFF: You've managed to escape it. JOËL: For something that's specific to mobile, maybe one or two tests that have a weird interaction that we know is different on mobile. But in general, we're not doing the whole suite under mobile and the whole suite under desktop.
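For reference, a hedged sketch of swapping in the Cuprite driver discussed here — the window size and headless flag are illustrative choices, following Cuprite's documented setup:

  require "capybara/cuprite"

  Capybara.register_driver(:cuprite) do |app|
    # Cuprite speaks CDP directly to Chrome -- no Selenium/ChromeDriver hop.
    Capybara::Cuprite::Driver.new(app, window_size: [1200, 800], headless: true)
  end

  Capybara.javascript_driver = :cuprite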
GEOFF: When you hand off a project...it's been a while since you and I have worked together. JOËL: For those who don't know, Geoff used to be with us at thoughtbot. We were colleagues. GEOFF: Yeah, for a while. I remember my very first thoughtbot Summer Summit; you gave a really cool lightning talk about Eleanor of Aquitaine. JOËL: [laughs] GEOFF: That was great. So when you're handing a project off to a client after your ending, do you find that there's a transition period where you're educating them about the norms of the test suite before you leave it in their hands? JOËL: It depends a lot on the client. With many clients, we're working alongside an existing dev team. And so it's not so much one big handoff at the end as it is just building that in the day-to-day, making sure that we are integrating with the team from the outset of the engagement. So one thing that does come up a lot with clients is the performance of their test suite. That's often a concern because the test suite until it becomes a problem, people tend to not treat it very well. And by the time that you're bringing on an external consultant to help, generally, that's one of the areas of the code that's been a little bit neglected. And so people ask for help on making their test suite faster. Is that something that you've had to deal with at CommonLit as well? GEOFF: Yeah, that's a great question. We have struggled a lot with the speed that our test suite...the time it takes for our test suite to run. We've done a few things to improve it. The first is that we have quite a bit of caching that we do in our CI suite around dependencies. So gems get cached separately from NPM packages and browser assets. So all three of those things are independently cached. And then, we run our suites in parallel. Our Jest specs get split up into eight containers. Our Ruby non-system tests...I'd like to say unit tests, but we all know that some of those are actually integration tests. JOËL: [laughs] GEOFF: But those tests run in 15 containers, and they start the moment gems are built. So they don't wait for NPM packages. They don't wait for assets. They immediately start going. And then our system specs as soon as the assets are built kick off and start running. And we actually run that in 40 parallel containers so we can get everything finished. So our CI suite can finish...if there are no dependency bumps and no asset bumps, our specs suite you can finish in just under five minutes. But if you add up all of that time, cumulatively, it's something like 75 minutes is the total execution as it goes. Have you tried FactoryDoctor before for speeding up test suites? JOËL: This is the gem from Evil Martians? GEOFF: Yeah, it's part of TestProf, which is their really, really unbelievable toolkit for improving specs, and they have a whole bunch of things. But one of them will tell you how many invocations of FactoryBot factories each factory got. So you can see a user factory was fired 13,000 times in the test suite. It can even do some tagging where it can go in and add metadata to your specs to show which ones might be candidates for optimization. JOËL: I gave a talk at RailsConf this year titled Your Tests Are Making Too Many Database Calls. GEOFF: Nice. JOËL: And one of the things I talked about was creating a lot more data via factories than you think that you are. And I should give a shout-out to FactoryProf for finding those. 
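A hedged illustration of the factory-minimalism theme in this exchange, with hypothetical model names: keep the base factory down to what a foreign key or validation demands, and push optional setup into traits that specs opt into explicitly.

  FactoryBot.define do
    factory :student do
      name { "Test Student" }
      roster # kept in the base factory only because a foreign key requires it

      trait :with_answer do
        after(:create) do |student|
          create(:answer, student: student)
        end
      end
    end
  end

  # create(:student) builds the cheapest valid record;
  # create(:student, :with_answer) pulls in the larger graph only when needed.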
GEOFF: Yeah, it's kind of a silent killer with the test suite, and you really don't think that you're doing a whole lot with it, and then you see how many associations. How do you fight that tension between creating enough data that things are realistic versus the streamlining of not creating extraneous things or having maybe mystery guests via associations and things like that? JOËL: I try to have my base factories be as minimal as possible. So if there's a line in there that I can remove, and the factory or the model still saves, then it should be removed. Some associations, you can't do that if there's a foreign key constraint, and so then I'll leave it in. But I am a very hardcore minimalist, at least with the base factory. GEOFF: I think that makes a lot of sense. We use foreign keys all over the place because we're always worried about somehow inserting student data that we can't recover with a bug. So we'd rather blow up than think we recorded it. And as a result, sometimes setting up specs for things like a student answering a multiple choice question on a quiz ends up being this sort of if you give a mouse a cookie thing where it's you need the answer options. You need the question. You need the quiz. You need the activity. You need the roster, the students to be in the roster. There has to be a teacher for the roster. It just balloons out because everything has a foreign key. JOËL: The database requires it, but the test doesn't really care. It's just like, give me a student and make it valid. GEOFF: Yes, yeah. And I find that that challenge is really hard. And sometimes, you don't see how hard it is to enforce things like database integrity until you have a lot of concurrency going on in your application. It was a very rude surprise to me to find out that browser requests if you have multiple servers going on might not necessarily be served in the order that they were made. JOËL: [laughs] So you're talking about a scenario where you're running multiple instances of your app. You make two requests from, say, two browser tabs, and somehow they get served from two different instances? GEOFF: Or not even two browser tabs. Imagine you have a situation where you're auto-saving. JOËL: Oooh, background requests. GEOFF: Yeah. So one of the coolest features we have at CommonLit is that students can annotate and highlight a text. And then, the teachers can see the annotations and highlights they've made, and it's actually part of their assignment often to highlight key evidence in a passage. And those things all fire in the background asynchronously so that it doesn't block the student from doing more stuff. But it also means that potentially if they make two changes to a highlight really quickly that they might arrive out of order. So we've had to do some things to make sure that we're receiving in the right order and that we're not blowing away data that was supposed to be there. Just think about in a Heroku environment, for example, which is where we used to be, you'd have four dynos running. If dyno one takes too long to serve the thing for dyno two, request one may finish after request two. That was a very, very rude surprise to learn that the world was not as clean and neat as I thought. JOËL: I've had to do something similar where I'm making a bunch of background requests to a server. And even with a single dyno, it is possible for your request to come back out of order just because of how TCP works. 
So if it's waiting for a packet and you have two of these requests that went out not too long before each other, there's no guarantee that all the packets for request one come back before all the packets from request two. GEOFF: Yeah, what are the strategies on the client side for dealing with that kind of out-of-order response? JOËL: Find some way to effectively version the requests that you make. Timestamp is an easy one. Whenever a response comes in, you take the one with the latest timestamp, and that wins out. GEOFF: Yeah, we've started doing some unique IDs. And part of the unique ID is the browser's timestamp. We figure that no one would try to hack themselves and intentionally screw up their own data by submitting out of order. JOËL: Right, right. GEOFF: It's funny how you have to pick something to trust. [laughs] JOËL: I'd imagine, in this case, if somebody did mess around with it, they would really only just be screwing up their own UI. It's not like that's going to then potentially crash the server because of something, and then you've got a potential vector for a denial of service. GEOFF: Yeah, yeah, that's always what we're worried about, and we have to figure out how to trust these sorts of requests as what's a valid thing and what is, as you're saying, just the user hurting themselves as opposed to hurting someone else's stuff? MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! GEOFF: You were talking about test suites. What are some things that you have found are consistently problems in real-world apps, but they're really, really hard to test in a test suite?
Complex interactions that involve some heavy frontend but then also need a lot of backend processing potentially with asynchronous workers or something like that, there are a lot of techniques that we can use to make all those play together, but that means there's a lot of complexity in that test. GEOFF: Yeah, definitely. I've taken a deep interest in what I'm sure there's a better technical term for this, but what I call network hostile environments or bandwidth hostile environments. And we see this a lot with kids. Especially during the pandemic, kids would often be trying to do their assignments from home. And maybe there are five kids in the house, and they're all trying to do their homework at the same time. And they're all sharing a home internet connection. Maybe they're in the basement because they're trying to get some peace and quiet so they can do their assignment or something like that. And maybe they're not strongly connected. And the challenge of dealing with intermittent connectivity is such an interesting problem, very frustrating but very interesting to deal with. JOËL: Have you explored at all the concept of Formal Methods to model or verify situations like that? GEOFF: No, but I'm intrigued. Tell me more. JOËL: I've not tried it myself. But I've read some articles on the topic. Hillel Wayne is a good person to follow for this. GEOFF: Oh yeah. JOËL: But it's really fascinating when you'll see, okay, here are some invariants and things. And then here are some things that you set up some basic properties for a system. And then some of these modeling languages will then poke holes and say, hey, it's possible for this 10-step sequence of events to happen that will then crash your server. Because you didn't think that it's possible for five people to be making concurrent requests, and then one of them fails and retries, whatever the steps are. So it's really good at modeling situations that, as developers, we don't always have great intuition, things like parallelism. GEOFF: Yeah, that sounds so interesting. I'm going to add that to my list of reading for the fall. Once the school year calms down, I feel like I can dig into some technical topics again. I've got this book sitting right next to my desk, Designing Data-Intensive Applications. I saw it referenced somewhere on Twitter, and I did the thing where I got really excited about the book, bought it, and then didn't have time to read it. So it's just sitting there unopened next to my desk, taunting me. JOËL: What's the 30-second spiel for what is a data-intensive app, and why should we design for it differently? GEOFF: You know, that's a great question. I'd probably find out if I'd dug further into the book. JOËL: [laughs] GEOFF: I have found at CommonLit that we...I had a couple of clients at thoughtbot that dealt with data at the scale that we deal with here. And I'm sure there are bigger teams doing, quote, "bigger data" than we're doing. But it really does seem like one of our key challenges is making sure that we just move data around fast enough that nothing becomes a bottleneck. We made a really key optimization in our application last year where we changed the way that we autosave students' answers as they go. And it resulted in a massive increase in throughput for us because we went from trying to store updated versions of the students' final answers to just storing essentially a draft and often storing that draft in local storage in the browser and then updating it on the server when we could. 
And then, as a result of this, we're making key updates to the table where we store a student's answers much less frequently. And that has a huge impact because, in addition to being one of the biggest tables at CommonLit...it's got almost a billion recorded answers that we've gotten from students over the years. But because we're not writing to it as often, it also means that reads that are made from the table, like when the teacher is getting a report for how the students are doing in a class or when a principal is looking at how a school is doing, now, those queries are seeing less contention from ongoing writes. And so we've seen a nice improvement. JOËL: One strategy I've seen for that sort of problem, especially when you have a very write-heavy table but that also has a different set of users that needs to read from it, is to set up a read replica. So you have your main that is being written to, and then the read replica is used for reports and people who need to look at the data without being in contention with the table being written. GEOFF: Yeah, Rails multi-DB support now that it's native to the framework is excellent. It's so nice to be able to just drop that in and fire it up and have it work. We used to use a solution that Instacart had built. It was great for our needs, but it wasn't native to the framework. So every single time we upgraded Rails, we had to cross our fingers and hope that it didn't, you know, whatever private APIs of ActiveRecord it was using hadn't broken. So now that that stuff, which I think was open sourced from GitHub's multi-database implementation, so now that that's all native in Rails, it's really, really nice to be able to use that. JOËL: So these kinds of database tricks can help make the application much more performant. You'd mentioned earlier that when you were trying to make your test performant that you had introduced parallelism, and I feel like that's maybe a bit of an intimidating thing for a lot of people. How would you go about converting a test suite that's just vanilla RSpec, single-threaded, and then moving it in a direction of being more parallel? GEOFF: There's a really, really nice tool called Knapsack, which has a free version. But the pro version, I feel like if you're spending any money at all on CI, it's immediately worth the cost. I think it's something like $75 a month for each suite that you run on it. And Knapsack does this dynamic allocation of tests across containers. And it interfaces with several of the popular CI providers so that it looks at environment variables and can tell how many containers you're splitting across. It'll do some things, like if some of your containers start early and some of them start late, it will distribute the work so that they all end at the same time, which is really nice. We've preferred CI providers that charge by the minute. So rather than just paying for a service that we might not be using, we've used services like Semaphore, and right now, we're on Buildkite, which charge by the minute, which means that you can decide to do as much parallelism as you want. You're just paying for the compute time as you run things. JOËL: So that would mean that two minutes of sequential build time costs just the same as splitting it up in parallel and doing two simultaneous minutes of build time. GEOFF: Yeah, that is almost true. There's a little bit of setup time when a container spins up. And that's one of the key things that we optimize. 
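To make the earlier read-replica point concrete: Rails' native multi-database support boils down to a configuration roughly like this. It's a sketch; the connection names and the reporting call are hypothetical:

```ruby
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Pairs with a database.yml that defines a `primary` entry and a
  # `primary_replica` entry marked `replica: true`.
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Route a heavy report read to the replica so it doesn't contend with writes:
ActiveRecord::Base.connected_to(role: :reading) do
  ClassReport.generate(classroom_id) # hypothetical reporting call
end
```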
I guess if we ran 200 containers, if we were like Shopify or something like that, we could technically make our CI suite finish faster, but it might cost us three times as much. Because if it takes a container 30 seconds to spin up and to get ready, that's 30 seconds of dead time when you're not testing, but you're paying for the compute. So one of the key optimizations that we make is figuring out how many containers we need to finish fast when we're not just blowing time on starting and finishing. JOËL: Right, because there is a startup cost for each container. GEOFF: Yeah, and during the work day when our engineers are working along, we spin up 200 EC2 machines or 150 EC2 machines, and they're there in the fleet, and they're ready to go to run CI jobs for us. But if you don't have enough machines, then you have jobs that sit around waiting to start, that sort of thing. So there's definitely a tension between figuring out how much parallelism you're going to do. But I feel like, to start, you could always break your test suite into four pieces or two pieces and just see if you get some benefit to running a smaller number of tests in parallel. JOËL: So, manually splitting up the test suite. GEOFF: No, no, using something like Knapsack Pro where you're feeding it the suite, and then it's dividing up the tests for you. I think manually splitting up the suite is probably not a good practice overall because I'm guessing you'll probably spend more engineering time on fiddling with which tests go where such that it wouldn't be cost-effective. JOËL: So I've spent a lot of time recently working to improve a parallel test suite. And one of the big problems that you have is trying to make sure that all of your parallel workers are being used efficiently, so you have to split the work evenly. So if you said you have 70 minutes' worth of work, if you give 50 minutes to one worker and 20 minutes to the other, that means that your total test suite is still 50 minutes, and that's not good. So ideally, you split it as evenly as possible. So I think there are three evolutionary steps on the path here. So you start off, and you're going to manually split things out. So you're going to say our biggest chunk of tests by time are the feature specs. We'll make them almost like a separate suite. Then we'll make the models and controllers and views their own thing, and that's roughly half and half, and run those. And maybe you're off by a little bit, but it's still better than putting them all in one. It becomes difficult, though, to balance all of these because one might get significantly longer than the other; then you have to manually rebalance it. It works okay if you're only splitting it among two workers. But if you're having to split it among 4, 8, 16, and more, it's not manageable to do this, at least not by hand. If you want to get fancy, you can try to automate that process and record a timing file of how long every file takes. And then when you kick off the build process, look at that timing file and say, okay, we have 70 minutes, and then we'll just split the files so that each process gets roughly 70-divided-by-the-number-of-workers minutes of work. And that's what gems like parallel_tests do. And Knapsack's Classic mode works like this as well. That's decently good. But the problem is you're working off of past information. And so if the tests have changed or just if they're highly variable, you might not get a balanced set of workers.
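The timing-file approach Joël describes is essentially greedy bin-packing. A toy illustration of the idea, not how parallel_tests or Knapsack actually implements it:

```ruby
# Greedy split: hand each file (slowest first) to the currently lightest worker.
def split_by_timings(timings, worker_count)
  workers = Array.new(worker_count) { { files: [], total: 0.0 } }

  timings.sort_by { |_file, seconds| -seconds }.each do |file, seconds|
    lightest = workers.min_by { |worker| worker[:total] }
    lightest[:files] << file
    lightest[:total] += seconds
  end

  workers
end

split_by_timings({ "feature_spec.rb" => 300.0, "model_spec.rb" => 120.0, "view_spec.rb" => 180.0 }, 2)
# => one worker gets ~300s of work (the feature spec), the other ~300s (the rest)
```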
And as you mentioned, there's a startup cost, and so not all of your workers boot up at the same time. And so you might still have a very uneven amount of work done by each worker by statically determining the work to be done via a timing file. So the third evolution here is a dynamic or a self-balancing approach where you just put all of the tests or the files in a queue and then just have every worker pull one or two tests when it's ready to work. So that way, if something takes a lot longer than expected, well, it's just not pulling more from the queue. And everybody else still pulls, and they end up all balancing each other out. And then ideally, every worker finishes work at exactly the same time. And that's how you know you got the most value you could out of your parallel processes. GEOFF: Yeah, there's something about watching all the jobs finish in almost exactly, you know, within 10 seconds of each other. It just feels very, very satisfying. I think in addition to getting this dynamic splitting where you're getting either per file or per example split across to get things finishing at the same time, we've really valued getting fast feedback. So I mentioned before that our Jest specs start the moment NPM packages get built. So as soon as there's JavaScripts that can be executed in test, those kick-off. As soon as our gems are ready, the RSpec non-system tests go off, and they start running specs immediately. So we get that really, really fast feedback. Unfortunately, the browser tests take the longest because they have to wait for the most setup. They have the most dependencies. And then they also run the slowest because they run in the browser and everything. But I think when things are really well-oiled, you watch all of those containers end at roughly the same time, and it feels very satisfying. JOËL: So, a few weeks ago, on an episode of The Bike Shed, I talked with Eebs Kobeissi about dependency graphs and how I'm super excited about it. And I think I see a dependency graph in what you're describing here in that some things only depend on the gem file, and so they can start working. But other things also depend on the NPM packages. And so your build pipeline is not one linear process or one linear process that forks into other linear processes; it's actually a dependency graph. GEOFF: That is very true. And the CI tool we used to use called Semaphore actually does a nice job of drawing the dependency graph between all of your steps. Buildkite does not have that, but we do have a bunch of steps that have to wait for other steps to finish. And we do it in our wiki. On our repo, we do have a diagram of how all of this works. We found that one of the things that was most wasteful for us in CI was rebuilding gems, reinstalling NPM packages (We use Yarn but same thing.), and then rebuilding browser assets. So at the very start of every CI run, we build hashes of a bunch of files in the repository. And then, we use those hashes to name Docker images that contain the outputs of those files so that we are able to skip huge parts of our CI suite if things have already happened. So I'll give an example if Ruby gems have not changed, which we would know by the Gemfile.lock not having changed, then we know that we can reuse a previously built gems image that has the gems that just gets melted in, same thing with yarn.lock. If yarn.lock hasn't changed, then we don't have to build NPM packages. We know that that already exists somewhere in our Docker registry. 
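That hash-and-reuse step might look roughly like this. The registry name and Dockerfile path are made up; `docker manifest inspect` is one way to check whether a tag already exists in a registry:

```ruby
#!/usr/bin/env ruby
# ci/ensure_gems_image.rb -- reuse a prebuilt gems image when Gemfile.lock is unchanged.
require "digest"

digest = Digest::SHA256.file("Gemfile.lock").hexdigest[0, 12]
image  = "registry.example.com/app/gems:#{digest}"

if system("docker", "manifest", "inspect", image, out: File::NULL, err: File::NULL)
  puts "Reusing #{image}"
else
  system("docker", "build", "-f", "Dockerfile.gems", "-t", image, ".") or abort("build failed")
  system("docker", "push", image) or abort("push failed")
end
```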
In addition to skipping steps by not redoing work, we also have started to experiment...actually, in response to a comment that Chris Toomey made in a prior Bike Shed episode, we've started to experiment with skipping irrelevant steps. So I'll give an example of this if no Ruby files have changed in our repository, we don't run our RSpec unit tests. We just know that those are valid. There's nothing that needs to be rerun. Similarly, if no JavaScript has changed, we don't run our Jest tests because we assume that everything is good. We don't lint our views with erb-lint if our view files haven't changed. We don't lint our factories if the model or the database hasn't changed. So we've got all these things to skip key types of processing. I always try to err on the side of not having a false pass. So I'm sure we could shave this even tighter and do even less work and sometimes finish the build even faster. But I don't want to ever have a thing where the build passes and we get false confidence. JOËL: Right. Right. So you're using a heuristic that eliminates the really obvious tests that don't need to be run but the ones that maybe are a little bit more borderline, you keep them in. Shaving two seconds is not worth missing a failure. GEOFF: Yeah. And I've read things about big enterprises doing very sophisticated versions of this where they're guessing at which CI specs might be most relevant and things like that. We're nowhere near that level of sophistication right now. But I do think that once you get your test suite parallelized and you're not doing wasted work in the form of rebuilding dependencies or rebuilding assets that don't need to be rebuilt, there is some maybe not low, maybe medium hanging fruit that you can use to get some extra oomph out of your test suite. JOËL: I really like that you brought up this idea of infrastructure and skipping. I think in my own way of thinking about improving test suites, there are three broad categories of approaches you can take. One variable you get to work with is that total number of time single-threaded, so you mentioned 70 minutes. You can make that 70 minutes shorter by avoiding database writes where you don't need them, all the common tricks that we would do to actually change the test themselves. Then we can change...as another variable; we get to work with parallelism, we talked about that. And then finally, there's all that other stuff that's not actually executing RSpec like you said, loading the gems, installing NPM packages, Docker images. All of those, if we can skip work running migrations, setting up a database, if there are situations where we can improve the speed there, that also improves the total time. GEOFF: Yeah, there are so many little things that you can pick at to...like, one of the slowest things for us is Elasticsearch. And so we really try to limit the number of specs that use Elasticsearch if we can. You actually have to opt-in to using Elasticsearch on a spec, or else we silently mock and disable all of the things that happen there. When you're looking at that first variable that you were talking about, just sort of the overall time, beyond using FactoryDoctor and FactoryProf, is there anything else that you've used to just identify the most egregious offenders in a test suite and then figure out if they're worth it? JOËL: One thing you can do is hook into Active Support notification to try to find database writes. 
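That hook might look something like this (a sketch; the file name and the warning threshold are arbitrary):

```ruby
# spec/support/write_counter.rb
RSpec.configure do |config|
  config.around(:each) do |example|
    writes = 0

    subscriber = ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
      writes += 1 if payload[:sql].to_s.match?(/\A\s*(INSERT|UPDATE|DELETE)\b/i)
    end

    example.run
    ActiveSupport::Notifications.unsubscribe(subscriber)

    warn "#{example.location} made #{writes} write statements" if writes > 25 # arbitrary threshold
  end
end
```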
And so you can find, oh, here's where all of the...this test is making way too many database writes for some reason, or it's making a lot, maybe I should take a look at it; it's a hotspot. GEOFF: Oh, that's really nice. There's one that I've always found is like a big offender, which is people doing negative expectations in system specs. JOËL: Oh, for their Capybara wait time. GEOFF: Yeah. So there's a really cool gem, and the name of it is eluding me right now. But there's a gem that raises a special exception if Capybara waits the full time for something to happen. So it lets you know that those things exist. And so we've done a lot of like hunting for...Knapsack will report the slowest examples in your test suite. So we've done some stuff to look for the slowest files and then look to see if there are examples of these negative expectations that are waiting 10 seconds or waiting 8 seconds before they fail. JOËL: Right. Some files are slow, but they're slow for a reason. Like, a feature spec is going to be much slower than a model test. But the model tests might be very wasteful and because you have so many of them, if you're doing the same pattern in a bunch of them or if it's a factory that's reused across a lot of them, then a small fix there can have some pretty big ripple effects. GEOFF: Yeah, I think that's true. Have you ever done any evaluation of test suite to see what files or examples you could throw away? JOËL: Not holistically. I think it's more on an ad hoc basis. You find a place, and you're like, oh, these tests we probably don't need them. We can throw them out. I have found dead tests, tests that are not executed but still committed to the repo. GEOFF: [laughs] JOËL: It's just like, hey, I'm going to get a lot of red in my diff today. GEOFF: That always feels good to have that diff-y check-in, and it's 250 lines or 1,000 lines of red and 1 line of green. JOËL: So that's been a pretty good overview of a lot of different areas related to performance and infrastructure around tests. Thank you so much, Geoff, for joining us today on The Bike Shed to talk about your experience at CommonLit doing this. Do you have any final words for our listeners? GEOFF: Yeah. CommonLit is hiring a senior full-stack engineer, so if you'd like to work on Rails and TypeScript in a place with a great test suite and a great team. I've been here for five years, and it's a really, really excellent place to work. And also, it's been really a pleasure to catch up with you again, Joël. JOËL: And, Geoff, where can people find you online? GEOFF: I'm Geoff with a G, G-E-O-F-F Harcourt, @geoffharcourt. And that's my name on Twitter, and it's my name on GitHub, so you can find me there. JOËL: And we'll make sure to include a link to your Twitter profile in the show notes. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
350: 21 Bell Salute

The Bike Shed

Play Episode Listen Later Aug 16, 2022 52:09


It's Steph and Chris' last show. Steph found a game, and if you've been following the journey, all of the Test::Unit test files are now live in RSpec. JWTs really grind Chris' gears. They wrap up with things they've learned, takeaways they've had, and their proudest podcasting moments. They also thank all the folks who've helped make The Bike Shed happen. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack. Microservices (https://www.youtube.com/watch?v=y8OnoxKotPQ) Transcript: CHRIS: One more round of golden roads, our golden. So here we go. STEPH: Oh, one more round of golden roads. Okay, maybe that's going to get to me today. [laughs] CHRIS: [singing] Golden roads take me home to the place. STEPH: [singing] I belong. CHRIS: Yeah, there you go. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together we're here to share a bit of what we've learned along the way, at least one more time. So with that [chuckles] as an intro, Steph, what would you say is new in your world? STEPH: Hey, Chris. Well, today is the big day. It is the day that you and I are recording our final Bike Shed episode, which we have all the feels about, and we will definitely dive into. But to ignore some of that for now, I have another small fun update I can provide about a new game that I found. So one of the things that's new in my world is I started playing a new board game with Tim; it's called Ticket to Ride. Have you heard of that? CHRIS: I have. I don't know if I've played it. I feel like it's a particularly popular one now. But I don't know if I've ever had the pleasure. STEPH: It's a very cute game, so we have the smaller version of it. For anyone that's not familiar, it's essentially a map. And then there's a bunch of spots where you can build trains and connect them, and then you get tickets. So your goal is that you're going to connect one location to another location. And then you get points and yada yada, but it's so much fun and especially the two-player version. It's like this perfect 20, maybe 30-minute game. I'll be honest; I'm not really a board game person. I always enjoy it. Once I get into it, then I'm like, this is great. I don't know why I was resistant to this. But every time someone's like, "Do you want to play a board game?" I'm like, "Not really." [laughs] I first have to get into it. But I have really enjoyed Ticket to Ride. That's been a really fun game to play. And it's been a nice way to, like, even during the day, we'll break for lunch and squeeze in a game. CHRIS: Well, I love good two-player games. They're hard to find. But when you find a good one, and it's got that easy pickup and play...I believe I'm going to now purchase this. And thank you for the tip. STEPH: Yeah, this is definitely one of those where it's easy to pick up, and then you can get the expanded board. So there's a two-player version, but then yeah, you can get one that's a map of the U.S. or a map of Europe. And I think it accommodates up to five players as the maximum, so not a huge group but definitely more than two. On a slightly more technical note, I have something that I'm very excited to share.
It is a journey that you have been on with me, that everybody listening has been on this journey with me. And I'm very excited. I see you nodding your head, so I'm guessing that you're going to know where I'm headed with this. But I'm very excited to announce that all of the Test::Unit test files now live in RSpec. So that is a big win. I'm very, very excited for that to be a previous state of life and not an ongoing state of life. Because I have certainly developed too much niche knowledge around migrating these tests, and that became apparent to me when I was pairing with another developer that works with the client because they had offered...they had some time. They're like, "Hey, do you want help migrating a test file?" And I was like, "Sure." I was like, "But this is wonky enough, like, we should pair and work on this together because I just know some ins and outs. And I don't want you to have to learn a lot of the hard lessons that I've learned." And the test that we happened to pick up was very gnarly. It had a lot of mystery guests. And we spent, I think it was a good two hours. And we only migrated one of the tests, so not even a full file but one of the tests. And at the end of it, I was like, I know way too much about some of the oddities and quirkiness of this. And we got through it, but we decided that wasn't a good use of their time for them to go at this alone. So that's why I'm extra excited and relieved because I didn't want this task to carry on to someone else. So, hooray, we did it. CHRIS: Hooray. Just in time. You're Indiana Jones grabbing your hat right as you roll out and off to [laughs] be away from the project for a bit. So you stuck the landing. Well done, Steph. STEPH: Thank you. Thank you. So that's some great news. And then also, everything else in life is pretty much focused around getting ready for maternity leave. That's about to happen soon, and I am so ready. I have thoroughly enjoyed a lot of the things that I'm doing, [laughs] but goodness, being pregnant is hard. And I am very much ready for that leave. So also, a lot of the things that I'm doing right now are very focused on making sure everything's transitioned and communicated and that I just feel really good about that day of departure. That covers all the newness in my world other than the big thing that we're just not talking about yet. How about you? What's new in your world? CHRIS: Well, continuing to skirt the bigger topic that we will certainly get to in the episode, what is new in my world? I'm actually quite excited workwise right now. We have a much larger body of work that finally we got the clarity. All the pieces fell into place, and now we're sort of everybody rowing in the same direction. There's interesting, I think, really impactful code that we're writing for Sagewell right now. So that's really fantastic. We've got the whole team back together on the engineering side. And so we're, I think, in the strongest and most interesting point that I have experienced thus far. So that's all really fantastic. On a slight technical deep dive, you know what really grinds my gears? It's JWTs. JSON Web Tokens and I have never gotten along. It's never been a match made in heaven. And we have a webhook that comes from Plaid. Plaid is a vendor for connecting bank accounts and whatnot. And they have webhooks like many people do. So they can inform us when things change, lovely feature of how we build web apps these days. 
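As a brief aside on the migration Steph just wrapped up, the shape of a Test::Unit-to-RSpec conversion is roughly the following. This is a toy example with a hypothetical Order model, not one of the actual files that was migrated:

```ruby
# Before: Test::Unit
class OrderTest < Test::Unit::TestCase
  def test_total_includes_tax
    order = Order.new(subtotal: 100, tax_rate: 0.1)
    assert_equal 110, order.total
  end
end

# After: the same behavior, expressed in RSpec
RSpec.describe Order do
  it "includes tax in the total" do
    order = Order.new(subtotal: 100, tax_rate: 0.1)
    expect(order.total).to eq(110)
  end
end
```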
But often, there's a signature that says, "This is definitively from us, and you can trust us." And usually, it's some calculated signature, HMAC, or something like that. For some reason, Plaid's uses JWTs, and more than that, they use JWKs. So there's JWT which is the signature. That JWT itself is signed with a JWK. You have to fetch the JWK from their server based on the key ID in the header of the JWT. But how do you know if you can trust the JWT before you've gotten the JWK? All of this broke in a recent upgrade. We went from Heroku-20 to Heroku-22 to the new platform with Heroku, which bumped us to OpenSSL 3.0, and it turns out JWT doesn't work with it. And so that's sad. It's a no. It's going to be a no. It turns out the way that OpenSSL 3.0 works is incompatible with some of the code paths in JWT. And so I was like, wait, we just can't do this? And it's low-level cryptographic primitive stuff that I'm not comfortable messing around with. I'm not going to hop in there and roll up my sleeves. And even just getting to the point that I understood what was broken about this took like an hour and a half just to sort of like, wait, which is okay...so the JWT signs and encodes. And this will be a theme that we come back to later, but I think web development should be simpler. I think we should strive for simplicity. And this is a perfect example where I'm guessing Plaid uses JWTs and that approach to communicating security things often, but I've not seen it used much for signing webhooks. And, oof, it led to a complicated day. And it's unfixable now as far as I can tell. There is a commit on the JWT Ruby repo as of five days ago, but it doesn't build in our system. And it's not released. And it's just a mess. So yeah, engineering is complicated. I'm both wildly excited about what we're doing at Sagewell, and then today was this local minimum of like, oh, JWTs again. Again, we find ourselves battling. And you won today, but hopefully not for too long. STEPH: Oof, how did this manifest that you first noticed? So is it because a webhook suddenly stopped working, and that was like the error that rose up, and that's what helped you dive into it? CHRIS: Yeah, we have a little bit of code in the controller for where Plaid events come in. We calculate and verify the signature of the webhook to make sure that it's valid, and we reject it otherwise. And we alert ourselves via Sentry, and then we also have a Datadog scan that can show what's the status code of the response. Because these are incoming HTTP payloads or requests, and so we can see there were 200 up until this magical day when suddenly everything changed. And that was when we switched Heroku stacks. And then we can see it also in Sentry. So we're able to look at it, and we're like, why are none of the Plaid webhooks able to verify the signature anymore? That seems weird. And so then Datadog confirmed that it consistently was broken from this point in time. And then we were able to track that back. It was also pretty easy to guess because the error was "pkeys are immutable in OpenSSL 3.0," and that was the data. And I was like, oh, cool, that sounds fun. Let me go figure out what that means. STEPH: [laughs] Well, it's a nice use of Datadog. I remember in the past you were talking about adding it. And I was excited because I've never been at that point where a team has just introduced it; either a team doesn't have it, and they wish they had more insights, or they have it and don't use it. And nobody ever checks the board. 
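For reference, the JWT-plus-JWKS verification dance Chris described looks roughly like this with the ruby-jwt gem's jwks: loader option. The endpoint, algorithm, and error class here are assumptions for illustration, not Plaid's actual API:

```ruby
require "json"
require "jwt"
require "net/http"

class WebhookVerificationError < StandardError; end # hypothetical error class

def verify_webhook!(raw_jwt)
  jwk_loader = lambda do |options|
    # Real code would cache this and refetch only when options[:kid_not_found] is set.
    JSON.parse(Net::HTTP.get(URI("https://example.com/.well-known/jwks.json")), symbolize_names: true)
  end

  payload, _header = JWT.decode(raw_jwt, nil, true, algorithms: ["ES256"], jwks: jwk_loader)
  payload
rescue JWT::DecodeError => e
  raise WebhookVerificationError, e.message
end
```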
So that's a nice anecdote for Datadog helping you out. Yeah, I'm not envious of your situation, friend. CHRIS: I do love the cup half full take [laughs] that you have on the overall situation, but that's nice how Datadog worked out for you. And you know what? It was. Thank you, Steph, for once again being that voice of positivity. STEPH: I appreciate that you enjoy it because there are times that when someone points it out to me that I do that, I have to be like, "I'm sorry, I'm not trying to be toxic positivity over here. [chuckles] That's just how my brain works." CHRIS: Oh, you are definitively not toxic positivity. That's a different thing. Because you ended with but also, I feel bad for you, and I'm glad that I'm not in your shoes. So you are the right level of positivity. I don't think I could have talked to you for three and a half years as co-host on a podcast if I didn't appreciate the level of positivity or the general approach that you bring to thinking about stuff. STEPH: Okay. Well, to borrow a phrase from Matt Sumner, who has been a guest on the show, cool, cool, cool, cool. I'm glad my positivity has been well calibrated. And I was about to say I'm interested to hear how this turns out for the team. [laughs] But we're in an awkward spot where I mean, you and I, we can still totally chat. But listeners won't get to hear the rest of that particular saga. I mean, you can share. I mean, you do you. I'm setting all sorts of boundaries for you right now. Okay. And now I'm just rambling, and I'm getting weird with it. Because the truth is that, you know, we won't be back. And this is our final episode together. So I think let's just go ahead and rip off the Band-Aid. Let's dive into it. Let's talk about it. Given that it's our last episode that we are recording, we thought of a couple of things that we'd like to talk about. You brought up a great idea that I'm excited to dive into. Do you want to lead us in? CHRIS: Sure. Well, if we go back all the way to Episode 172, that is the first episode that you came on as a guest. I actually continue to really love the title of that episode, which is What I Believe About Software. And it both captured that conversation really well, but also, more generally, it's actually become the tagline of the show when we do our little introduction. What do we believe about building great software? Et cetera. And I think that's been the throughline of the conversations that we've had is what remains true. What are the themes? Not necessarily the specific technologies, although we certainly talk about that. But what do we believe about building great software? And so today, I thought it would be fun for us to talk about what do we still believe about building great software? It's roughly three and a half years or so that we've been doing this. What's still true? STEPH: Oh, well, I have the first unequivocal one, the thing that I still believe about building great software, and that's you should hire thoughtbot. That's definitely the way to go. We'll help you get it done, not that I'm biased in any way. CHRIS: No. I'd say collectively between us; there's zero bias with regard to thoughtbot or any other web development shop out there. But thoughtbot is the best. STEPH: All right, perfect. So we've got the first one, the clutch one of hire thoughtbot. And then I also really like this topic. And I still think back to that first episode that I recorded with you and how much fun that was and how that really got me to start thinking about this. 
Because it was something that, at the time, I didn't really reflect on a lot in terms of what does it take to build great software? I was often just doing the day-to-day actions but then not really going high-level think about it. So I'm excited this is one of the topics that we're revisiting. So for the next one, this one is, I don't know, maybe it's a little cutesy, but I was trying to think of an alliteration that I enjoyed. And so this one is be an assumption assassin. So what assumptions are you making? And then how can you validate or disprove them? And that is something that I find myself doing constantly. And it always yields better work, better questions, better software, better code, better code reviews. And that's my first one is be an assumption assassin and identify what assumptions you have. And I had a really good example come up today while I was having a conversation with Joël about something that I was looking to merge. But I was a little hesitant about it because there are some oddities that I won't dig in too deeply. But essentially, there's a test that I migrated that highlights an existing concern in the code. And I was like, should I go ahead and merge this test that documents it, or should I wait to fix that concern and address it? And he brought up a good point. And he's like, "Well, we're assuming it's a bug and an issue, but it may not actually be depending on how the software is being used." And so then he was encouraging me to reevaluate that assumption that I had where I'm like, oh, this is definitely a problem to, like, I don't know, is it a problem? Let's ask somebody. CHRIS: First off, I love that as a theme, as one of the things that you still believe about software. Second, I believe you correctly said that you were looking for an alliteration, but my brain heard acronym. STEPH: [laughs] CHRIS: And so then I was like, B-A-A-A. Is it BAAA? What are you going for there? Oh, you just wanted a bunch of As. Okay, I got it now. Secondly or thirdly, I think I'm on my third now. Apparently, within Sagewell team culture, one of the things that I'm most known for is... there are two phrases: one is just to name it, and the other is to be clear. And these are the two things that I do apparently constantly so much that it's become a meme within the team. It's just like, okay, everybody's been talking. But I just want to make sure we're on the same page here. So just to be clear, or just to name it, here's what I'm seeing. But I agree; I think taking those things...what are the implicit bits? What are the assumptions? And making them more explicit. Our job as developers is just to yell at computers all the time and make them try and do human stuff. And there's so much room for lossy conversions at every point in that conversation chain. And so yeah, being very clear, getting rid of assumptions, love it. It's all great stuff. Actually, in a very related note, the first on my list is that code is for humans to read. This is one of the things that I believe most deeply and most impacts the way that I write software. Any given piece of functionality that we want to author in our code feels like 10, 20, 50, frankly, almost infinite different versions of the code that would produce nearly identical functionality. So at the end of the day, the actual symbols and strings of text that we bring together to write the code is all about other humans, other people on your team, you five months from now, you a week from now, frankly, or me. I'm going to say me, me a week from now. 
I want to do future me and everyone else on the team a solid and spend that extra 10% of okay, I have something that works now, but let me try and push it around and try and massage it into a shape that is a little more representative of how we're actually thinking about the code, how we talk about it as an organization. Is that the word that we use to describe that domain concept? Maybe we could change that just a little bit. Can I push more of this into the private API? What actually needs to be known here? And I think that's where I'm happiest is in those moments because that's where all of the parts of the job come together, the bit where I trick a computer into doing what I want and simultaneously making it so that that code is revisitable, clear, expressive, all of those things. So yeah, code is for humans. And that's true across every language, and framework, and domain that I have worked in. And I've only believed it more and more so over time. So yeah, that's mine. STEPH: Yeah, I love that one. That's one of the things that comes to mind when people talk about disliking code reviews. And I can imagine there are a number of reasons that people may have had a poor experience with a code review process. But at the end of the day, if you're not getting that feedback or validation from fellow humans, then how do you know that you've been successful, that you've written something that other people can follow up on? Which goes back to the assumptions in terms of like, you're assuming that you have written something that your future self or that other people are going to be able to read and maintain down the road. So yeah, I love that one. One of the other things that I still hold really true to building great software is prioritize early and often. So always be checking in to understand with your users, with your tech concerns, with data that you may have, new insights, and then just confirm that yes, you and the team are constantly working on the thing that has been prioritized and that is the most important. And also, be ready to let go. That can be really hard. I have definitely had those moments in my career where I've spent two weeks working really hard on something. And then we've realized that the thing that we were pursuing isn't that valuable, or it's something that users don't need or actually want. And so it was better to let go of it than to pursue it and ship it anyways. So that's one of my other mantras that I have adopted now is prioritize, prioritize, prioritize. CHRIS: Unsurprisingly, I agree wholeheartedly with all of that. We're still searching for that thing, that core thing that we disagree on other than Pop-Tarts and IPAs. But I don't know that today is the episode that we're actually going to find that. But yeah, prioritizing is such a critical activity. And it is this interesting collaboration point. It gets different groups together. It's this trade-off. It's this balance. And it's a way to focus on and make explicit the choices that we're making. And we're always making choices. We're always making trade-offs. And so being more explicit, being more connected and collaborative around those I believe in so, so, so much. So love that that was something on your list. Let's see, next up on my list is reduce complexity, just sort of as an adage, just always be reducing complexity. 
It is amazing to me in my time, particularly as a consultant, but even now, this is something that I hold very true is just it's so easy to grow a system in anticipation of future complexity or imagine that the performance concerns that we're going to run into will be so large that we must switch from Postgres and a nice, simple atomic database into a sharded, clustered Kafka queue adventure. And there are absolutely cases that make sense for that sort of thing. But at a minimum, I beg of you, anyone starting a new system, don't start with microservices. Don't start with an event queue-based system. These are wildly complex versions of what often can be done with so much simpler of an application. And this scales through to everything. What's the complexity of an API? Do we need caching in that API layer? Or can we just be a little bit inefficient for a little while and avoid the complexity and the overhead of caching? Turns out caching is a tricky thing to get right, just as an aside. And so the idea of like, oh, let's just sprinkle in a little bit of caching. It'll be easy, and then we'll get better performance, like, yeah, but did you get it right? Or did you introduce a subtle bug into your program that's going to be really hard to debug later? Because do you cache in development? Well, maybe, I'm not sure, could be. So over time, this is something that I've sort of always felt, but I've only ratcheted it up. It's only something that I've come to believe in more and to hold more firmly to. I think earlier in my career, it was something that I felt, but I would more easily be swayed by aspirational ideas of the staggering amounts of traffic that we would be getting soon or the nine different ways that the data model will expand. And so, we should code the current version in anticipation of that. And I have become somewhat the old man on his lawn yelling at the clouds like, "Nah, we don't need it yet. We can grow to that." And there's a certain category of things that are useful to try and get out in front of and don't introduce additional complexity, but they're a tiny, tiny list. And so, for most things, my stance is what's the simplest thing that we can get away with right now, that still provides a meaningful experience to our users, that doesn't compromise on security or robustness or correctness but just solves the problem we have right now? And over and over and over again, that has served me incredibly well. So yeah, keep that complexity at bay. STEPH: That is one that I've definitely struggled with. And frankly, it works in my favor, that idea of keeping things simple. Because I'm terrible when it comes to predicting the future or trying to build things in a way that I just don't have enough information to really drive the architecture or the application that I'm building. So anytime I'm trying to then stretch and reach for the future in those ways unless I really have a concrete understanding of I am building for these particular scenarios, it's really hard to do. So I very much like keeping it simple and not optimizing before you need to. And it reminds me of I think it's Mark Twain, who has a quote, "Worrying is like paying a debt that you don't owe." And that's something that comes to mind for me when also writing code and building features and software is that I tend to be someone who will worry about stuff. And I'm like, oh, is this going to be easy to extend? 
Is it going to be what it needs to be six months from now if we need to add more features to this and build on top of it? And I have to remind myself it's like, well, let's just wait. Let's wait till we get there and we know more. One of my other ideas that couples nicely with the one that you just shared in regards to keeping things simple and then waiting for those needs to arise is that mistakes are going to happen. They are a part of the process. As we are learning and growing and we're stretching our skills and trying things out, things are going to go wrong. We're going to introduce bugs. And to take those opportunities, that's when we start to use that feedback to then improve things like observability, like capturing logs, and how we handle error reporting or having a plan for emergencies. So maybe that's the part of worrying that can pay off is thinking through, all right, if something does break, or if something gets shipped that shouldn't, then what is our plan in how we handle that? How do we roll back? Or how do we get things back to a stable build? CHRIS: It's funny. I was actually visiting with a friend this past weekend, and we were chatting more generally about life things but the idea of worrying and anticipation and trying to prepare for every bad outcome. And there's the adage of an ounce of prevention is worth a pound of cure. But increasingly, both in life, depending on the context, and in code, I've found that I've shifted to the opposite of it's impossible to stop everything. There are going to be bugs that are going to get out there. There are going to be places where we code things incorrectly. And I would rather...I still want to try as hard as I can to get things right, to be clear. I'm not giving up on trying. But I'm all the more focused on how do we know and how do we recover when those things happen? So it's interesting that you just described exactly that, which, again, is a very human life conversation, and yet it applies to the code. STEPH: I love that rephrasing of it. Instead of the mistakes are going to happen, it's, like, how do we know, and how do we recover? I think that's perfect. I've also found that by answering the how do we know and how do we recover, that really helps you build trust with clients as well. Because again, things are going to happen, things are going to break. And the more prepared you are for that and then the better plan that you have, and then they can watch how you execute that plan, and it's going to establish a lot of deep trust with other engineers and also the team that you're working with, that you have been thoughtful and that you have ideas on how are we going to address this? Instead of waiting for that moment to happen. That's going to happen too. You're going to make decisions in the heat of the moment. But I have found that to be a really useful way to establish yourself with a team in terms of I care about this team and these processes and this application. So how do we handle the bad times, not just the good times? I do want to circle back because you alluded to the fact that you and I, we've tried to find things that we disagree on. And so far, Pop-Tarts and beer have been the two things that we disagree on. But I do have a question for you that maybe I will disagree with you on. But I need to know some more about it first. You have alluded to there's the Brussels snack, (Oh, I'm going to get this wrong.) Brussels sprout snack hour or working lunch, something combination of those words. 
[laughs] And it's the working lunch that has stuck out to me, and I've wanted to ask you about it. So here I am. I'm asking you about it. What's a working lunch? What's the Brussels snack happy hour, snackariffic working lunch look like? CHRIS: This is fantastic. I love that you waited until the last episode that this was rolling around in the back of your head. And you're like, are you making the team work through lunch? And now, on this final episode, we get to address the controversy that has been brewing in the back of your head. Spoiler alert, no, this is just ridiculous nomenclature. These are two meetings that we have that are more like, let's get the dev team together and talk about stuff that's in our platform sort of developer experience. Or stuff in observability often is talked about in this context because it doesn't quite impact users, but it's how we think about the work. And so there are two different meetings that alternate every other week. So every Friday afternoon, we do this, but it's one of two meetings depending on the day. So there's a crispy Brussels snack hour that was the first one that was named, which was named purely for nonsense reasons because we don't have anything else that's named nonsensically in our organization. And so I was like, oh when we name this meeting, we should make it nonsense because we don't have any other...We don't have, you know, an SOA microservices fleet with Barbie doll and Galactus and all of the other wonderful names. Those are references to the greatest video ever about microservices; if you've not seen it, that will be in the show notes. It's required reading. But anyway, we don't have that. And so we thought, let's be funny with the name of this. So the crispy Brussels snack hour is one, and the crispy Brussels we wanted something that was...the first one is a planning meeting. The second is like, let's actually sort of ensemble program. Let's get the four of us together, and we'll work on some of the stuff that we're talking about here but as a group. And so I wanted the idea of we're working, and so I was like, oh, this will be the crispy Brussels work lunch. But it's purely a name. It's the same time slot. It's 3:00 o'clock on a Friday afternoon. [laughs] So it is not at all us working through lunch. I don't think we should work through lunch. I'm concerned that you thought that for a while, and you were just like, I'm a little worried, but I'm not going to bring it up. But I'm glad we got to cover this before we wrapped up this whole Bike Shed co-hosting adventure together. STEPH: I feel relieved and also a little robbed of an opportunity for us to have something that we disagree on because I thought this might be a thing. [laughs] CHRIS: We can continue searching for that thing. But maybe it's okay that we agreed on most stuff for the run [laughs] of this fun, little show that we did together. STEPH: Yeah, that's gone on quite a time. We've got like three years together that we have managed to really only find two, I mean, very important of course, two things. But yeah, it's been pretty limited to those two areas. And each time that you'd mentioned the work lunch, I was like, huh, I need to ask about that because I have feelings about it. But then, you always would dive into very interesting stories of things that came out of it, and I quickly forgot about it. So this feels good. This feels like very good important closure. I'm glad that this finally surfaced. 
But circling back, since I took us on a detour for a little bit, what are some other things that you still hold deeply about building great software? CHRIS: I've really got one last thing on the list. It's interesting, there's not a ton technically in this list, which I think represents broadly how I feel about software, and I think how you feel about software. It's like, it's actually mostly about how the people interact at the end of the day. And you can program in any language or framework, and you can get the job done. We certainly have our preferences and things that we enjoy. But the last one really rounds us out, which is think about the users. I always want to be anchoring the conversations that we're having, the approach that we're taking to building the software in what do the users think? Who are our users? What do we know about them? What do they care about? How are they using this technology? How is it impacting their lives? We've talked a number of times about potentially actually watching the sales demo as an engineering team, trying to understand what's the messaging that we're putting out into the world for this piece of software that we're building? Or write along with customer support and understand what are the pain points that people are hitting? And really, like, real humans, what are they experiencing? Potentially with a name attached. And that just changes the way that you think about the software. There's also even the lower-level version of it. As we're building classes or modules, what are the public facets of that, and what are the private API? What's the stuff that we're hiding away? And what's the shape that we are exposing to the outside world for varying definitions of outside? And how can we just bring in a little bit of empathy to try and think about, again, in the case of like the API for a class, it's probably you on the other side of it, but it's future you in a slightly different mindset with a little bit less information and context on the current problem that you're working on. And so, how can we make things easier for ourselves in the code, for our users at the end of the day? How can we deliver real value that is not mired in the minutiae of technical complexity and whatnot but really is trying to help people live better lives? That's a little too fancy as I say it out loud. But it is kind of the core of what I believe, so I'm not going to take it back. STEPH: I love how you've expanded users where more traditionally, it's people that are then using the software. But then you've expanded it to include developers because that is something that is often on my mind and something that I just agree with wholeheartedly in terms of when you're writing software; as you mentioned before, software is for people. And so we want to include others. And it does improve people's lives. People show up to work every day, and if you've been thoughtful if your past you has been thoughtful, it's either going to give you your future self a better day, or it's going to give other people a better day. So I think that's a very fair statement, improving lives by being thoughtful in regards to focusing on the users, people consuming software, and working in the codebase. CHRIS: I know we've talked about this before, but I was having a conversation with one of the developers on the team at Sagewell just last week, and they were mentioning how they really loved working on admin features. And I was like, oh, that's interesting. Let's talk more about that. 
And it was really it's that same thing that I think you and I have discussed of like there's that immediacy. There's that connection. These are actually colleagues, but you can build software to make their day better. You can understand in detail what the pain points are. What's the workflow that as you watch it, you're like, oh, I could put a button up in the corner of the screen that would automate almost all of this and your day would be that much faster? Oh, let me do that. That's exciting. And so I love that as another variation of it, like, yeah, there's for other developers. There's also for the admin team or other users in the organization of the software. There are so many different versions of users, but I think I think we build a better thing if we think about them more. STEPH: I have definitely worked with teams where I can tell that certain people are demoralized, and it comes down to they feel frustrated and often disconnected from the people that they are building for. And so then you really feel isolated. I'm pushing code around, but I don't really see the benefit or the purpose of it. And I think that's very hard for developers who typically want to build something that's going to be useful and not feel like it's just going to be thrown away. So connecting your team to those users, I certainly understand. Getting to build something for your colleagues and then they get to say how much they like it is an incredible, rewarding experience. You also touched on something that I really appreciate, where you highlighted that a lot of the technical decisions that we make are important, but they're not at the center of the things that we believe when it comes to building great software. And that's something that I will often reflect on. Like, as we were thinking through these particular ideas that we still hold true today, how mine are more people and process-focused and rarely deep in the technical weeds. And there are times that I think, well, shouldn't there be something that's more technical, something that's very concrete? Yes, you should build your code this way or build your application or use a specific technology. But after all the projects and teams that I've been a part of, that's just usually not the most important part. And so I appreciate that you highlighted that because sometimes I have to remind myself that, yes, those things can be challenging, but it's often with people and process. That's where the heart of great software lies. CHRIS: That's a fantastic phrase, I think, that really encapsulates all of the conversations that we're having here. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. 
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: Actually shifting gears a little bit, so we've just talked about what we still believe about building great software. I'm intrigued. We've been chatting for a number of years here on this microphone, these microphones. We have separate ones because we're in different states. But I'm interested; what have we changed our minds about? What have you changed your mind about, Steph? I got a couple of ideas, but I'm intrigued to hear yours. STEPH: Nothing. I've never been wrong. I've stuck to everything that I've ever thought. CHRIS: That must be boring. STEPH: [laughs] Yeah, that's totally not true there. There are definitely things that I've changed my mind about. One of the things that I've changed my mind about is that people who know the most will ask the fewest questions. That's something that I used to consider the trademark of someone who is a more experienced senior developer in terms of you really know what you're doing. And so you typically don't ask for help or need help very often. And so, I'm going way back in terms of things that I have changed my mind about. But I have definitely changed my mind where people who know the most are actually the ones that do a really great job of constantly asking questions and asking for feedback. And I think that is still a misconception that people still carry forward. The idea that if you're asking a lot of questions or asking for help that you are not as skilled in your work, and I view it as quite the opposite, that you are very good at what you do and that you know precisely the value of your time. And then also reaching out to others for help, and then also just getting validation on things that you may have concerns around. So that's one I've changed my mind on is that I think the more experienced you are, the more questions you tend to ask. CHRIS: Oh, I love that one. It's a behavior that I know...I think we've talked about this before. But as consultants, we try and model it just the like; it's totally fine to ask questions. And because we often come in with less context, it makes sense for us to be asking questions, but I will definitely intentionally lean into it in those contexts to be like, everybody keeps throwing around this acronym. I don't actually know what that is. Let me raise my hand. And my favorite moment is when people disagree on what the acronym or what the particular word or what the particular project is. Like, I ask the question, and people are like, "Oh, it's this," and someone across the room is like, "Wait, that's what it means? I thought it was this totally other thing." I'm like, cool, glad that we sorted that out. Glad that we got that one up in the air. But I actually remember many, many, many years ago, at this point, there was a video series of...PeepCode was the company, and there was the Play by Play series. 
And so there were particular prominent developers, particularly in the Ruby community, and they would come and sort of be interviewed and pair program. And it was amazing getting to watch these big names that you had heard of. Yehuda Katz is the one that stands out in my mind. He was one of the authors of merb, which was a framework that was merged with Rails, I want to say around the 3.0 time. And just an absolute, very big name in this world and someone that I looked up to and respected. And watching this video, they had to Google for particular API signatures and Rails methods. They were like, "Oh, how does that work? Is it link_to and then you pass the name?" I forget what it was specifically. But it was just this very human, normalizing moment of this person, who has demonstrably done incredible work in our community and produced very complex software, still needing to Google for the order of arguments to a particular method within Rails. I was like, oh, okay, that's good to know. And with complete humility in the moment, I was just like, yeah, this is normal. Like, it's impossible to hold all of that in your head. And seeing that early on shook me off the idea that the thing to do is just memorize everything. It's like no, no, get good at asking the questions. Get good at debugging. Get good at, yeah, asking questions. It's a core skill rather than a thing that you grow out of. But early on, I definitely was like, I'm not allowed to ask questions, that'll be scary. STEPH: I love that example. Because counterintuitively, to me, it demonstrates confidence when someone can say, "Oh, I don't remember how this works," or "Let me go look it up." And so I just very much appreciate when I see someone demonstrating that level of confidence of let's keep going. Let's keep making progress. I'm going to ask for help because that is totally fine, and we are in a safe space. Or I'm going to create a safe space for us to do that. You shared the version of this where you ask about an acronym and then people disagree; one of my favorite versions is to ask about a particular area of the codebase and be like, what would you say this code is doing here? What do you think users do here? Like, what is the purpose? What's the point of this? [chuckles] And then having people be able to say, "Oh, yeah, this definitively does this thing." Or people are like, "You know, I'm not sure. I don't even know if that code is getting run." That's one of my favorite outcomes of asking questions. How about you? What's something you've changed your mind about? CHRIS: I made a list of a couple of things; like, remote is on there. I didn't know if I'd like remote. I wanted to try it for a while. Tried it; turns out I like it a lot. It's complex. You got to manage it, whatever. But I think everybody's talked about that a bunch. I think probably the most interesting one is deadlines. Initially, in my career, I didn't really feel anything about them. And then I experienced the badness of deadlines. Deadlines are bad. Deadlines are things that come down from on high, and then you fail to hit them, and then you're sad. And maybe along the way, you're very stressed and work long hours to try and get there. But they're perhaps arbitrary. And what do they even mean? And also, we have this fixed scope, and they're just bad. And then there was a period of time where, for me, it was like, deadlines are bad. The only thing that we do is we show up, and we make the software as quickly as we can.
But in reality, there are times that we need that constraint. And in fact, I have found a ton of value in deadlines when used intentionally. So we can draw a line in the sand, and we can say, at this point in time, we will have a version of the software. We have a marketing campaign that we need to align with this. So we've got to have something at that point. And critically, if you're going to have a deadline, you've now fixed a point in time. You need to flex other things. And critically, I think the thing to flex is the scope. So we need to have team management. We have user accounts right now, but now we need to organize them into teams. That is like a category of functionality. It's not a singular feature. And so yeah, we can ship teams in the next quarter. What exactly that means is up in the air. And as long as we're able to have conversations, essentially on a day-to-day or at least weekly cadence, as to what will make it in by that deadline and what won't, and we're able to have sometimes the hardest conversations but the very necessary conversations about the trade-offs that we have to make as we're building that software, then I find deadlines are absolutely fantastic tools for focusing and for actually reducing scope, but in a really useful way. And getting something out there in the hands of users so that you start to get real feedback, so that you start to learn: is this useful? What are the ways that people are using this? What should we lean into and do more of? What maybe should we roll back, actually? So yeah, deadlines. First, I didn't know them, then I feared them. Now I love them, but only under the right circumstances. It's a double-edged sword, definitely. STEPH: I, too, have felt the terribleness of deadlines and railed against them pretty hard because I had gone through a negative experience with them, but I have also shifted my feelings about them to where they can be incredibly useful. So I really like that that's one of the things you've changed your mind about. It also reminds me of one of the other things...I'm going to circle back for a moment to one of the things that I believe about creating great software, which is to not wait for perfection, and deadlines are a really good tool that helps you not wait for perfection. Because I have also seen teams really struggle or sometimes fail because they waited until there was something perfect to present, and then you realize that you've built the wrong thing. So I do want to transition and talk a bit about the show because it's our last episode, and we should talk about it, and the fun adventures that we've had and some of the things that we've learned or things that we're feeling in the moment. So, it's been a wonderful three years for me, and it's been four years for you, as a host on the show. How are you feeling? CHRIS: I'm feeling a bunch of different things sort of all at once. I am definitely going to miss this immensely. Particularly, I loved when I started, and I got to interview a bunch of thoughtboters and other people from the community. But frankly, three-plus years of getting to chat with you has been just such a delight. There's been an ease to it. We kind of just show up and talk about what we're doing. And yet there are these themes that have run through it. And it has definitely helped me hone and shape my thinking and my ability to communicate about what I'm thinking. I've learned that you have a literal superpower to remember the last thing that you were talking about.
Listeners, you may not know this, but we are not quite the put-together folks that we may sound like in these recordings. We have a wonderful editor, Mandy Moore, who makes us sound so much better than we are. But we'll often pause and stop and then discuss what we want to talk about next. And Steph always knows the exact phrase that she or I left off on. And it has been so valuable to the team. But really, it's been just such a pleasure getting to have these conversations. It's also been something that has just gently been in the back of my mind at all times. And so, I'm observing the work in any given week as I'm doing it. It's almost like meditation in a certain way, where, as I'm working on something, I'm like, oh, this is actually really cool. I want to take a note about this and talk about it on The Bike Shed with Steph. And having this outlet, having this platform to be able to have those conversations and knowing that there are people out there is fantastic, although it's very weird because really, every one of these recordings is just you and I on a video call. And so there is an audience, I'm pretty sure. I think people listen to the show; I don't know, occasionally they write in, so it seems like they do. But at the end of the day, this really just feels like a conversation with a friend, and that has been so valuable to have. And yeah, I'm definitely going to miss that. It's been a wonderful run, you know; four years is a long time. It's about as long as I've done most things in my career. And so I'm very happy with what we have done here. And there's a trite saying that isn't...yeah, whatever; I'll just say it, which is, "Don't be sad that it's over. Be glad that it happened." And I guess I'm still going to be sad that it's over. But I am so glad that I got the opportunity to do this, that you joined in this adventure, and that we got to chat each week. It's been really delightful. STEPH: I really liked how you referred to this as being a meditative state. And that is something that I have certainly picked up from you and thoroughly enjoyed: that I have this space where I get to show up and bring these ideas and topics and then get to talk them out with you. And that has been such a nice way to either end the week or start a week. I mean, it doesn't matter. Anytime that we record, it's this very nice moment of the week where we get to come together and talk through some of the difficulties and share our stories. And that's been one of my favorite moments, because you and I get to show up and share everything that's going on. But then when someone writes into the show, or if they send a tweet or something and they share their story or their version of something that happened, or if they said that we made them laugh, that was one of my favorite accomplishments: the idea that something that we have done was silly enough or fun enough that it has brought them joy and made them laugh. So I, too, am very, very much going to miss this. It has been a wonderful adventure. And I thank you for encouraging me to come on this adventure because I was quite nervous in the beginning. And this has definitely been an aspect of my life that started out as something that was very challenging and stretching my limits, and now it has become this very fun aspect and something that I get to show up and do and then get to share with everyone. And I do feel very proud of it, very much in part thanks to Thom Obarski, who was our initial producer and helped us have that safe space to chat about things.
And now Mandy, who keeps the show running smoothly and helps us sound our best week to week. So it's been a wonderful adventure. This is going to be hard to let go. And I think it's going to hit me most later on. Like, this was one of those things where, as we're talking about it, it's, like, I'll see you next week. This will be fine. But I think it's going to hit me when there's something that I want to talk about where I'm like, oh, this would be great to talk about, and I'll add it to The Bike Shed Trello board. And I'll be like, oh yeah, that's not a thing anymore, at least not quite in the same way that it was. CHRIS: So what I'm taking away from this is that you're immediately going to delete my phone number the minute we hang up this call and stop recording. [laughs] STEPH: Oh yeah. I preemptively deleted it. So that's already done. Friendship is over at this point. CHRIS: That's smart. Yeah, because you might forget otherwise in the heat of the moment as we're wrapping this whole thing up. STEPH: [laughs] CHRIS: But actually, on that note, in a slightly more serious vein, again, there's this weird aspect where the audience is out there, but we're very sort of disconnected, particularly at the moment in time when we're recording. But it has been so wonderful getting various notes from listeners, listener questions, but also just commentary and the occasional thanks, and folks saying, oh, you pointed me in the right direction, or you helped me think through a complicated piece of work or process a problem that we were having. And so that has been just so, so rewarding. And one of the facets of this that has been so interesting to me is being able to connect to people, basically put out there what we believe about software, and, for the folks that resonate with it, be able to have that connection instantly. And meeting people, and they're like, "Oh, I've listened to The Bike Shed. I like all these things." I'm like, oh, cool, we get to skip way further into the conversation because I've already said a bunch, and you say you like that thing. So, cool, we're halfway through our introductory chat. And I know that we agree about a bunch of things, and that's really wonderful. And frankly, I'm going to miss that immensely. So for anyone out there who's found something valuable in this, who's enjoyed listening week to week, or perhaps even back to Upcase for things, I would love to hear from you. I'd love to connect to folks. Send me an email or a tweet. I'm on all the places. I'm Chris Toomey in various spots, or ctoomey.com on the internet. Chris Toomey on GitHub. I'm findable, I think. Chris Toomey developer will probably get you there. But I would really love to hear from folks, to connect to folks; you know, someday down the road, I think I'll be hiring again. And that'll be fun. I would love to work with some of the folks that have listened to this show, or meet you at a conference, or if I happen to be traveling to a city or you're traveling to Boston. Really, for me, so much of what this show is about is connecting with people around how we think about building great software. And so, I would love to continue that forward into the future. So yeah, say hi, if you're interested. STEPH: I agree. That's been one of the most fun aspects of being a co-host of the show. And I'll be honest, you are welcome to contact me, but I am going to be off-grid for probably six months. [laughs] So just know that there will be a bit of a delay before you hear back from me. But I would definitely love to hear from you.
I also want to say a very heartfelt thanks to a couple of people, just folks that have made this journey incredible and have made it so much fun. One, in particular, is everyone at thoughtbot for their continuous stream of knowledge. I mean, frankly, my software opinions wouldn't be half as interesting if it wasn't for everyone at thoughtbot constantly sharing their knowledge and being a source of inspiration. So I deeply appreciate everyone that has contributed to topics and ideas and just constantly churned out blog posts, because those are phenomenal. And I also want to give a shout-out to my husband, Tim, because he has listened to The Bike Shed for many years and even helped out with a number of show notes when that was something that you and I used to do, before Mandy made our life so much easier and took that over for us. And he has intervened a number of times when Utah, mid-recording, would decide it's time to play. So I want to give a very special thank you to him because he has been a very big supporter of the show and frankly helped me manage through a lot of the recordings when I had an 80-pound dog that was demanding my attention. CHRIS: Continuing on the note of thanks, similarly, I'm so grateful to thoughtbot as an organization for everything that it has represented in my career. It's a decade-plus that I have been following and then listening to the podcasts, and then joining the organization, and then getting so many wonderful opportunities to learn about this thing called web development. And then, even after I left the organization, I was able to stay on here on The Bike Shed and hang out and still chat with you, Steph, which has been really wonderful. So thank you, thoughtbot, so much. Thank you to Joël Quenneville, who will be the continuing host of the show. This show is not going anywhere. And, Steph, you and I aren't really going anywhere, but we won't be around anymore. But we are leaving it in the very, very capable hands of Joël, and I'm super excited to hear the direction that he takes it, and Joël's incredibly thoughtful and nuanced approach to thinking about programming and communicating. So I think that will be really wonderful. And lastly, I definitely want to thank Derek Prior and Sage Griffin, the two original hosts of this show, who really produced something wonderful, and for many years; I think it was about four years that they hosted together. I was an avid listener, despite actually working at the company the whole time, and really loved the thing that they produced and was so grateful that they entrusted me with continuing it forward. And hopefully, myself and then with the help of you along the way, we've...I think we've done an okay job, but now it is time to pass the torch or the green lantern. That's the adage I've been going with. Gotta pass the lantern, pass the mantle on to the next one. So, Joël, it's going to be in your hands now. STEPH: Yeah, I'm so looking forward to future episodes with Joël Quenneville. They are going to be fabulous. So I've been thinking, in terms of this being our finale episode and then a fun ending for it: there's a thing called the 21-gun salute, which is a military honor that's performed by firing cannons or artillery. Not to be confused with the three-volley salute, which I definitely confused it with earlier, and which is reserved for and used at funerals, which this is not. So using the 21-gun salute, I was like, hmm, it is The Bike Shed, and we have this cute ring ring that goes.
So I think for our finale, we should have a 21-bell salute as we exit the shed and ride off into the sunset. CHRIS: I love it. I couldn't imagine a more perfect send-off. So with that, what do you think? Should we wrap up? STEPH: Yes, but I have one more silly thing to add. I've thought of a new software idiom that I'm excited about. And so, this may be my final send-off into glory that I'd like to share with you. And I think that we should make like a shard and split. CHRIS: [laughs] I so appreciate that in this moment, this final moment that we have together, you choose to go with a punny joke. It is so on brand for the show. It is absolutely perfect. And I think with that note, shall we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
347: Tracking Velocity

The Bike Shed

Play Episode Listen Later Jul 26, 2022 38:50


Chris talks about a small toy app he maintains on the side and working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know if you're working quickly enough. They answer a listener's question about where to get started testing a legacy app. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website), frictionless error monitoring and performance insight for your app stack. jnicklas/capybara_table: (https://github.com/jnicklas/capybara_table) Capybara selectors and matchers for working with HTML tables. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Just gotta hold on. Fly this thing straight to the crash site. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. CHRIS: And I'm Steph Viccari. STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs] CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action, is a weird way to describe it, but there we are. STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened. CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari. STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of doing two voices [laughs] this whole time and been tricking everybody. CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time. STEPH: It's been a very impressive talent, how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world? CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those; for me, they served different purposes, productivity things, or whatever. But at some point, I was like, this is too much work, so I consolidated them all. There was a handful of features that I liked, so I smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain. It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it.
And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into. First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while? Turns out no, because I couldn't even get the app to boot locally; something about some gems or, I think, Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass. So node-sass needed Python 2, which I believe is end-of-life'd, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something. The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough. STEPH: I feel like this is one of those things we've talked about in the past where you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes. But then there's this thing where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take. CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built so that there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate. And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works.
And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something. But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to, or is, you know, the flip side of, how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen. There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have, asked by one other person on Stack Overflow, with no answers, four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults print some funny characters to your terminal. And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common. But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within Capybara, particularly within feature specs. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it. And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it. And so it's another one of those cases where testing can be a really useful constraint for the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found. STEPH: That's awesome. I've definitely seen other thoughtboters, when working in codebases, add really nice helper methods around that for, like, checking does this data exist in the table? So I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future. CHRIS: Yeah, really, just such a delightful thing.
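A minimal sketch of the kind of assertion being described, going by the matchers in the capybara_table README (the route, spec setup, and table contents here are hypothetical placeholders):

    # spec/features/staff_table_spec.rb -- a hypothetical feature spec
    require "rails_helper"

    RSpec.describe "staff table", type: :feature do
      it "lists each staff member" do
        visit staff_index_path # hypothetical route helper

        # have_table_row takes a hash of column header => cell text and
        # matches against the table's semantic structure (th/td cells);
        # columns left out of the hash are simply ignored. On a miss,
        # it prints an ASCII rendering of the table it did find.
        expect(page).to have_table_row("Name" => "Jane Doe", "Role" => "Admin")
      end
    end

Leaving out whichever columns you don't care about is what lets the assertion read the way a user would read the table.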
And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you? [laughter] STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, and a little panicked, mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, with what everyone thinks you should or shouldn't have and things that you need to do. So I've been ramping up heavily in that area. And then also planning for when I'm gone and what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for. On the weird side, I ran into something because I'm still in test migration world. That is one of those, like, this is my mountain. This is my Everest. I am determined to get all of these tests migrated. Thank you to everyone who has listened to me, especially you, talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done. CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience. STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into...well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered. So, migrating a controller test over from Test::Unit to RSpec, there are a number of controller tests that issue requests or call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine. But based on the way a lot of the information was getting set around logging in a user and then performing an action, and then trying to log in a different user and perform another action, that was causing mayhem. Because the second user was never getting logged in, because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec, because that way, we're going through the full Rails routing, we're going through more of the sessions that get set, and then we can emulate that full request and response cycle. And that was something that I just hadn't, I guess, I hadn't done before.
I've never written a controller spec where I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit tests. And they're not going to emulate or give us the full lifecycle that a request spec does. And it's something that I've always known, but I've never actually felt that pain point to then push me over to, like, hey, move this to a request spec. So that was kind of a nice reminder to go through, to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to request spec land. CHRIS: I don't know what the current status is, but am I remembering correctly that controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there are feature specs. I feel like I'm conflating...there's like controller, request, and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things now. And the true controller spec, unit-level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up? STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer of testing, whereas request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow. I think it was more around Rails 5 that they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case. CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions. There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, that, like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff where you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you about when you did something wrong. And that's where they're going to sneak in. STEPH: I agree.
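A rough sketch of the multi-user flow described above, written as a request spec; the routes, factories, and authorization behavior are hypothetical stand-ins, not taken from the app in question:

    # spec/requests/admin_dashboard_spec.rb -- hypothetical routes and factories
    require "rails_helper"

    RSpec.describe "admin dashboard", type: :request do
      it "authorizes each signed-in user independently" do
        user  = create(:user)          # assumes FactoryBot factories exist
        admin = create(:user, :admin)

        # Each call runs through the full middleware stack, routing, and
        # session handling, like a real request/response cycle would.
        post session_path, params: { email: user.email, password: "secret" }
        get admin_dashboard_path
        expect(response).to have_http_status(:forbidden)

        delete session_path            # actually log the first user out
        post session_path, params: { email: admin.email, password: "secret" }
        get admin_dashboard_path
        expect(response).to have_http_status(:ok)
      end
    end

Because the session persists between requests within the example, the second sign-in only behaves correctly if the first session is actually torn down, which is exactly the mayhem described above when this lived in a controller spec.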
And yeah, by going with the request specs, you're really leaning into more of an integration test, since you are testing more of that request/response lifecycle, and you're not as likely to get caught out by the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of, I know I use request specs, and I know there's a reason that I favor them, but this is why we lean into request specs. And here's a really good use case where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how you track velocity and know if you're working quickly enough. So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me, while also balancing that and making sure that I'm working at a sustainable pace? And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this: are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity? CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth, about, quote, unquote, "velocity." I think the question of individual velocity is really interesting.
There's the case of an individual who joins a team and who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that. There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the, like, I don't know, a pull request a day sort of thing; it feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individually deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets. And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual. I have, I think, some more pointed thoughts around how, as a team, we would think about and communicate about velocity. But I'm interested in what came to mind for you when you thought about it, particularly for the individual side, or for the team if you want to go in that direction. STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from; they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well. For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase? Because that will change the team's expectations: yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improve the code as we go versus just trying to get new features in. I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But it's something that I keep in mind: if some other people are working a lot of evenings or just working extra hours, then if they do have a higher velocity, I try not to include that in my calculation as heavily. I also really liked how you highlighted that for certain individuals, their velocity is often about unblocking others. So it's less about the specific code or features or tickets that they're producing, and more about how many people they can help. And then they're increasing the velocity of those individuals. And then there are the other metrics that unfortunately can be gamified but are still something to look at, like, how many hours are you spending on a particular feature or the tickets? But I like that phrasing that you used earlier of what's your progress?
So if someone comes to daily sync and they mention that they're working on the same thing and we're on, like, day three or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off. And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing. But yeah, I think that's why this question fascinates me so much: I don't think there's one answer that fits for everybody. There's not a way to tell one person, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like. I remember there was one team that I joined where, initially, I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster? And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m., and then at 5:00 o'clock, it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours. Versus a previous team that I had been on, which had more of a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire; not everything has to be rushed. I think the biggest thing that I'd look at, if velocity is being called into question, so if someone is concerned that someone's not producing enough or that the team is not producing enough, the first place I'm going to look is at our priorities, to see whether we're prioritizing correctly. Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to? I feel like that's the common disconnect between how much work we're getting done versus what's actually causing people, product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough.
And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is, but very consistently doing that. And that really helps with our ability to estimate overall bodies of work, with the ability for others to know what to expect, and for us to be able to sort of uphold expectations. Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose. Those are the sorts of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff where you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations. But ideally, if you're focusing on I want to be delivering at that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way: hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations. And then if we do have a team that is pretty consistently delivering whatever overall velocity, but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like a 6, 5, 6, 5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that. If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do. So it's really the slow is smooth, smooth is fast adage that I've talked about in the past, that really captured my mind a while back and just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question. STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of, like, what is my output in the strict terms of tickets and PRs and things like that. But I do think more about my progress and how I can constantly show progress, not just to the world but to myself.
So if a ticket was scoped too big at first but I've definitely made some really solid progress, maybe I'm able to ship something or at least identify some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily. And I feel like that also gives us more opportunities to then iterate on what's the goal. Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update of, this is the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing. CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so that's another critical aspect. You can be that person on the team who actually doesn't ship much code at all; just make sure that we don't ship the wrong code, and you will be a critical member of that team. But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph: "I want to start by saying thanks for putting out great content week after week." We are very happy to do so. "So a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all. My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where you would start with this. STEPH: Legacy code? I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with, and I'm going to try not to steal any of those points because they're wonderful. But to add to that list that is soon to come: often, with applications like these, you need some testing in place because, as this person mentioned, that's how they work. And at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is what's your rollback strategy?
So if you don't have any tests in place and you send something out into the world, then what's your plan to be able to roll back to a safe point? Or perhaps it's using feature flags for anything new that you're adding, so that way you can quickly turn something on and off. But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start. And then, as for adding the tests, typically, you start with testing as you go. So I would add tests for the code that I'm adding or working on, because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything, feature-level, integration-level specs, because I'm at the point that I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making. And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows, or the most common user flows, by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager or just someone that you're taking the app over from that could also give you some help in letting you know what the most crucial features are that users are relying on day to day, and then prioritizing writing tests for those particular flows. So then, at this point, you've got a rollback strategy, and then you've also highlighted what your most crucial user flows are, and then you've added some really high-level, probably slow tests. Something that I've also done in the past, and seen others do at thoughtbot when working on a legacy project, or just working on a project that wasn't even legacy but didn't have any test coverage because the team that had built it before hadn't added test coverage, is we would often duplicate a lot of the tests as well. So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went. So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you. CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in.
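A sketch of the kind of high-level, admittedly slow spec being described here; the flow, routes, and copy are hypothetical placeholders for whatever the app's most crucial path actually is:

    # spec/features/checkout_spec.rb -- hypothetical crucial user flow
    require "rails_helper"

    RSpec.describe "checkout", type: :feature do
      it "lets a customer complete an order" do
        # Drives the whole stack end to end (routing, callbacks,
        # before_actions, views), documenting the flow that must not
        # break while refactoring and upgrades happen underneath.
        visit root_path
        click_on "Shop"
        click_on "Add to cart"
        click_on "Checkout"
        fill_in "Card number", with: "4242 4242 4242 4242"
        click_on "Place order"

        expect(page).to have_content("Thanks for your order")
      end
    end

A handful of these, one per crucial flow, is usually enough of a safety net to start shipping again; they can be tightened up or swapped for cheaper tests later.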
Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks. And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected. Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that. Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things. And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it. So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind. One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true? Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited. And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. 
Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation. The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day. So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done. And so it gives sort of more than a nice to have, more than this makes me feel comfortable. It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think that's an easy business case to sell is, I guess, the way that I would frame it. So those were some of my thoughts. STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project. If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything? CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph? STEPH: I think those are fabulous thoughts. I think you covered it all. CHRIS: Sounds good. Well, in that case, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. 
STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
346: Occasional Biscuits

The Bike Shed

Play Episode Listen Later Jul 19, 2022 37:13


Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon. Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "how do you get acquainted with a new code base?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Greenland (https://www.imdb.com/title/tt7737786/) Geostorm (https://www.imdb.com/title/tt1981128/) San Andreas (https://www.imdb.com/title/tt2126355/) Armageddon (https://www.imdb.com/title/tt0120591/) This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I've been watching more movies lately. So evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster like end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles] I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well.
And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby. CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling. STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride. CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact? STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right? CHRIS: They sure do. STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go? CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say. It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not? STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones. CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great. STEPH: Oh yeah, that's a good combo. CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that. STEPH: I think they addressed that in the movie, though. 
They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts. CHRIS: Right. That is what they say in the movie. STEPH: [laughs] Okay. CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. [laughs] It's like; actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to space button, then you go to space. STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. [laughs] So that might have also played a role. CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out. STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up. CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the armadillo; I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically. And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just cease up. It just doesn't work. I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like; I don't know what's going on. But I think the gears on the drilling machine just fundamentally at a very simple mechanical level cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time. STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that this is the thing that grinds your gears are the gears on the armadillo. CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me. In a similar way, so I had a moment while traveling; actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. 
And I was like, yeah, sometimes people ask me how to pronounce my name. And I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think... I was just so anchored to the one truth that I know in the world that my name is Toomey. And that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong. I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah. STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'. CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two. STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced. I can't remember if we've talked about this on air, but early on, I pronounced my last name differently for like one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why; it just seems to have a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people. And I think Chad Pytel had just walked in at that moment when I was saying that to you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, 'Viccari', 'Vicceri', I will accept either. On a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, well, they're like, I feel like I'm competing, or I'm working against RuboCop. RuboCop wants me to shorten my test example lines, but yet, I'm not sure what else to do about it. And someone's like, "Well, you could extract more into before blocks and to lets and to helpers or things like that to then shorten the test." They're like, "But that does also work against readability of the test if you do that." So then there was a nice, short conversation around, well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability. And so then it was a discussion of okay, well, how much flexibility do we add to it? And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be.
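(For reference, the change being discussed here usually amounts to a one-cop tweak in .rubocop.yml. The cop name below is rubocop-rspec's standard one; the surrounding config is assumed:)

    # .rubocop.yml
    # Stop flagging long examples; we'd rather optimize for readable,
    # self-contained test setup than for artificially short specs.
    RSpec/ExampleLength:
      Enabled: false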
And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things because this codebase has far more dry tests than they do duplication concerns. So I'd rather lean into the duplication at this point." And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that removed the check for that particular metric: how long are the examples? And it was lovely. It was just like a nice, quick win and a wonderful discussion that someone had brought up. CHRIS: Ooh, I like that. That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it's also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and just deeply, deeply entrenched opinions is the thing that I find interesting. And I think I'm increasingly in the space of: those sort of thou-shalt-not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase. Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And they are more in the category of what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-line length to a 120-line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that. But like TypeScript, I care deeply, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14 because TypeScript tends to tell me more truths, I find, even though I have to jump through some hoops to be like, TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it. I also really liked the way you described RuboCop's feedback as fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that tells me no, I'm going to be like, stop fussing, nope, nope. [chuckles] STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic, fussing, and then go get a biscuit, and it'll just be a delightful day. CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially? STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing. CHRIS: Sold. STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs] CHRIS: And occasional biscuits, I hope. STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world? CHRIS: Let's see. In my world, it's a short week so far. So recording on Wednesday, Monday was a holiday. And I was out all last week, and I very much enjoyed my vacation. It was lovely.
Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now trying to just get back into the swing of things. Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm but in a good way is how I would describe it. We have a major facet of the Sagewell platform that we are in the planning modes for right now. But we need to get through a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work. We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time: we have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we're an integration of a lot of services under the hood. And we have a pattern for how we interact with and process them: we actually persist the webhook data when it comes in, and then we have a background job that processes it. And we watch that pattern to make sure we're not losing anything and have the ability to verify against our local version and the remote version, a bunch of different things. Because it turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that. I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times, I want to say. And in squinting at it from a distance, we're like, it is indeed identically the same pattern in all eight cases, or with the tiniest little variation in one of them. And so we've now accepted like, okay, that's true. So the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but use the generic webhook processing pattern that we have in the app. So that's nice. I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see. Check and make sure they are, in fact, the same and not just incidental duplication. STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in where you still got to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing. Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion.
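As an aside for readers, the persist-then-process pattern Chris describes, plus the signature verification he advocates for a little further on, might look roughly like this in a Rails app. Every class, header, and credential name here is invented for the sketch, not Sagewell's actual code:

    # app/controllers/webhooks_controller.rb (all names here are hypothetical)
    class WebhooksController < ApplicationController
      skip_before_action :verify_authenticity_token

      def create
        unless valid_signature?
          head :unauthorized
          return
        end

        # Persist the raw payload first so nothing is lost if processing fails
        event = WebhookEvent.create!(source: params[:source], payload: request.raw_post)
        ProcessWebhookJob.perform_later(event.id)
        head :ok
      end

      private

      # HMAC-SHA256 over the raw body; the header name and signing scheme
      # vary by vendor, and the credential name is made up for this sketch
      def valid_signature?
        expected = OpenSSL::HMAC.hexdigest(
          "SHA256",
          Rails.application.credentials.webhook_signing_secret,
          request.raw_post
        )
        # Constant-time comparison avoids leaking information via timing
        ActiveSupport::SecurityUtils.secure_compare(
          request.headers["X-Signature"].to_s,
          expected
        )
      end
    end

    # app/jobs/process_webhook_job.rb
    class ProcessWebhookJob < ApplicationJob
      def perform(event_id)
        event = WebhookEvent.find(event_id)
        # Dispatch to a per-source processor; retries come from the job backend
        WebhookProcessor.for(event.source).call(event)
      end
    end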
But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract. That's probably going to cause more pain than it's worth at this point. CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that when we looked at it, we were like, oh, that's interesting, because of how we implemented the webhook pattern, this is happening. And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything. We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us. As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them. And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain. STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up." CHRIS: Yep, indeed. It's definitely like a correction early on in my career and overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish it were a more commoditized offering of platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. 
And there's polling, and there are lots of different variations. But the one thing that I'm surprised by is webhook signing...I don't feel like people take it seriously enough. It is a case where it's not a huge security vulnerability in your app. But I was reading something by someone who is a security analyst at one point. And they were describing sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal like OWASP Top 10, Cross-Site Request Forgery, and SQL injection, and all that kind of stuff. But one of the other ones he highlighted is so often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. It's an extra thing where you do the checksum math and whatnot and take the signature header. I've seen some where they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please. So it's either have an API key so that we have some way to verify that you are who you say you are or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it differently, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that. So this is just a notice out there to anyone that's listening. If you've got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application. STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that. CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system. In order to take advantage of it, you need to know what endpoint to hit, what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph? STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. They asked a really thoughtful question that was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, that is one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase. So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application. I really want to know what are users experience as they're going through this app? That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? Like to look through the app and see how are things organized? Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? If not, is there something that then I can run locally that will then show me that test coverage? I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. 
What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app. CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then log in to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? But understanding why is this thing, not in code, but in actual practical, observable, interactable software? Beyond that, your point about the routes, absolutely, that's one of my go-to's, although the routes...there often is so much in the routes file, and it's like some of those may actually be unused. So a corollary to the routes, where available: if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't. And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look: is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. And so I like to think about it from that idea of entry points. If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic...Whenever, the cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which, as an aside, I really love that that's committed and expressive in the code. Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application? I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button. But like, give me something weird, and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works. The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app.
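For reference, the Whenever gem Chris mentions keeps that cron schedule in a committed Ruby file, roughly like this; the task names below are made up:

    # config/schedule.rb: Whenever's DSL, where rake and runner are built-in job types
    every 1.day, at: "4:30 am" do
      rake "reports:generate_daily"
    end

    every :hour do
      runner "SubscriptionRenewalJob.perform_later"
    end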
And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency. Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box. But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this. STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one. I also liked how you highlighted that you wish you'd do something fancy around doing a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's all those things. I think it would be fun to make it easy. So I do that with new applications. But I agree; I typically more just dive in like, hey, give me a ticket. Let me go from there. I might do some simple command-line checking. So, for example, if I want to look through app models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity. CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way. STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. 
Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature or before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application? That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me. One other place that I do browse is I go to the gem file. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me. And also, you called it earlier pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase. CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration. I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is. Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo. 
Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff. STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on. But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, of like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? How many people need to review my code? There are all those process-y questions that I think would ideally be included on the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs] CHRIS: We've been burned before. STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question. CHRIS: What would Michael Bay do? STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
344: Spinner Armageddon

The Bike Shed

Play Episode Listen Later Jun 28, 2022 38:50


Steph has an update and a question wrapped into one about the work that is being done to migrate the Test::Unit test over to RSpec. Chris got to do something exciting this week using dry-monads. Success or failure? This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Bartender (https://www.macbartender.com/) dry-rb - dry-monads v1.0 - Pattern matching (https://dry-rb.org/gems/dry-monads/1.0/pattern-matching/) alfred-workflows (https://github.com/tupleapp/alfred-workflows/blob/master/scripts/online_users.rb) Raycast (https://www.raycast.com/) ruby-science (https://github.com/thoughtbot/ruby-science) Inertia.js (https://inertiajs.com/) Remix (https://remix.run/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. STEPH: What type of bird is the strongest bird? CHRIS: I don't know. STEPH: A crane. [laughter] STEPH: You're welcome. And on that note, shall we wrap up? CHRIS: Let's wrap up. [laughter] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris, I saw a good movie I'd like to tell you about. It was just over the weekend. It's called The Duke, and it's based on a real story. I should ask, have you seen it? Have you heard of this movie called The Duke? CHRIS: I don't think so. STEPH: Okay, cool. It's a true story, and it's based on an individual named Kempton Bunton who then stole a particular portrait, a Goya portrait; if you know your artist, I do not. 
But he stole a Goya portrait and then essentially held it at ransom because he was a big advocate that the BBC News channel should be free for people that are living on a pension or that are war veterans because then they're not able to afford that fee. But then, if you take the BBC channel away from them, it disconnects them from society. And it's a very good movie. I highly recommend it. So I really enjoyed watching that over the weekend. CHRIS: All right. Excellent recommendation. We will, of course, add that to the show notes mostly so that I can find it again later. STEPH: On a more technical note, I have a small update, or it's more of a question. It's an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec. This has been quite a journey that Joël and I have been on for a while now. And we're making progress, but we're realizing that we're spending like 95% of our time in the test setup and porting that over, specifically because we're mapping fixture data over to FactoryBot, and we're just realizing that's really painful. It's taking up a lot of time to do that. And initially, when I realized we were just doing that, we hadn't even really talked about it, but we were moving it over to FactoryBot. I was like, oh, cool. We'll get to delete all these fixtures because there are around 208 files of them. And so that felt like a really good additional accomplishment to migrating the tests over. But now that we realize how much time we're spending migrating the data over for that test setup, we've reevaluated, and I shared with Joël in the Slack channel. I was like, crap. I was like, I have a bad idea, and I can't not say it now because it's crossed my mind. And my bad idea was what if we stopped porting over fixtures to FactoryBot and then we just added the fixtures to a directory that RSpec would look in, so then we can rely on those fixtures? And then that way, we're literally then ideally just copying over from Test::Unit over to RSpec. But it does mean a couple of things. Well, one, it means that we're now running those fixtures at the beginning of the RSpec tests. We're introducing another pattern of where these tests are already using FactoryBot, but now they have fixtures at the top, and then we won't get to delete the fixtures. So we had a conversation around how to manage and mitigate some of those concerns. And we're still in that exploratory phase. We're going to test it out and see if this really speeds us up, referencing the fixtures. The question that's wrapped up in this is that there's something different between how fixtures generate data and how factories generate data. So I've run into this a couple of times now where I moved data over to just call a factory. But then I was hitting these callbacks or after-save hooks or weird things that were then preventing me from creating the record, even though fixtures were creating them just fine. And then Joël pointed out today that he was running into something similar where there were private methods that were getting called. And there was all sorts of additional code that was getting run with factories versus fixtures. And I don't have an answer. Like, I haven't looked into this. And it's frankly intentional because I was trying hard to not dive into understanding the mechanics. We really want to get through this. But now I'm starting to ponder a little more as to what is different with fixtures and factories? And I liked that factories run these callbacks; that feels correct. But I'm surprised that fixtures don't, or at least that's the experience that I'm having. So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know.
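For anyone curious, pointing RSpec at a directory of ported fixture files is a small amount of configuration. A sketch along these lines, with the path and options assumed rather than taken from the project in question:

    # spec/rails_helper.rb (path and options assumed for illustration)
    RSpec.configure do |config|
      # Tell rspec-rails where the ported fixture files live
      config.fixture_path = "#{::Rails.root}/spec/fixtures"

      # Individual spec files can opt in with: fixtures :users, :posts
      # or everything can be loaded globally:
      config.global_fixtures = :all
    end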
But I'm surprised that fixtures doesn't, or at least that's the experience that I'm having. So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know. CHRIS: Oh, that is interesting. I have not really worked with fixtures much at all. I've lived a factory life myself, and thus that's where almost all of my experience is. I'm not super surprised if this ends up being the case, like, the idea that fixtures are just some data that gets shoveled into the database directly as opposed to FactoryBot going through the model layer. And so it's sort of like that difference. But I don't know that for certain. That sounds like what this is and makes sense conceptually. But I think this is what you were saying: that also kind of pushes me more in the direction of factories because it's like, oh, they're now representative. They're using our model layer, where we're defining certain truths. And I don't love callbacks as a mechanism. But if your app has them, then getting data that is representative is useful in tests. Like one of the things I add whenever I'm working with FactoryBot is the FactoryBot lint rake task RSpec thing that basically just says, "Are your factories valid?" which I think is a great baseline to have. Because you may add a migration that adds a default constraint or something like that to the database, and suddenly all your factories are invalid, but you don't know it. Like subtly, you change it, and it doesn't actually break a test, but then it's harder later. So that idea of just having more correctness baked in is always nice, especially when it can be automated like that, so definitely a fan of that. But yeah, interested if you do figure out the distinction. I do like your take, though, of like, but also, maybe I just won't figure this out. Maybe this isn't worth figuring out. Although you were in the interesting spot of, you could just port the fixtures over and then be done and call the larger body of work done. But it's done in sort of a half-complete way, so it's an interesting trade-off space. I'm also interested to hear where you end up on that. STEPH: Yeah, it's a tough trade-off. It's one that we don't feel great about. But then it's also recognizing what's the true value of what we're trying to deliver? And it also comes down to the idea of churn versus complexity. And I feel like we are porting over existing complexity and even adding a smidge, not actual complexity but adding a smidge of indirection, in that when someone sees this file, they're going to see a mixed use of fixtures and factories, and that doesn't feel good. And so we've already talked about adding a giant comment above fixtures that just is very honest and says, "Hey, these were ported over. Please don't mimic this. But these are some legacy tests that we have brought over. And we haven't migrated the fixtures over to use factories." And then, in regards to the churn versus complexity, this code isn't likely to get touched. These tests, we really just need them to keep running and keep validating scenarios. But it's not likely that someone's going to come in here and really need to manage these anytime soon. At least, this is what I'm telling myself to make me feel better about it.
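For the curious, the factory linting Chris mentions is a real factory_bot feature. A minimal sketch, close to the rake task suggested in the factory_bot README; the task wiring here is illustrative, not from the app being discussed:

    # lib/tasks/factory_bot.rake
    namespace :factory_bot do
      desc "Verify that every FactoryBot factory builds a valid record"
      task lint: :environment do
        if Rails.env.test?
          ActiveRecord::Base.transaction do
            # Creates a record from every factory (and, with traits: true,
            # every trait) and raises if any resulting record is invalid.
            FactoryBot.lint traits: true
            raise ActiveRecord::Rollback # leave no lint data behind
          end
        else
          # Re-invoke the task in the test environment.
          system("bundle exec rake factory_bot:lint RAILS_ENV=test") || exit!(1)
        end
      end
    end

Hooked into CI, a migration that quietly invalidates a factory fails the build immediately instead of surfacing later as a confusing, unrelated test failure.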
So there's also that idea of yes, we are porting this over. This is also how they already exist. So if someone did need to manage these tests, then going to Test::Unit, they would have the same experience that they're going to have in RSpec. So that's really the crux of it: we're not improving that experience. We're just moving it over and then trying to communicate that, yes, we have muddied the waters a little bit by introducing this other pattern. So we're going to find a way to communicate why we've introduced this other pattern, but that way, we can stay focused on actually porting things over to RSpec. As for the factories versus fixtures, I feel like you're onto something in terms of it's just skipping that model layer. And that's why a lot of that functionality isn't getting run. And I do appreciate the accuracy of factories. I'd much rather know: is my data representative of real data that can get created in the world? And right now, it feels like some of the fixtures aren't. Like, how they're getting created seemed to bypass really important checks and validations, and that is wrong. That's not what we want to have in our tests: data that the rest of the application can't truly create. But that's another problem for another day. So that's an update on a trade-off that we have made in regards to the testing journey that we are on. What's going on in your world? CHRIS: Well, we got to do something exciting this week. I was working on some code. This is using dry-monads, the dry-rb space. So we have these result objects that we use pretty pervasively throughout the app, and often, we're in a controller. We run one of these command objects. So it's create user, and create user actually encompasses a ton of logic in our app, and that object returns a result. So it's either a success or a failure. And if it's a success, it'll be a success with that new user wrapped up inside of it, or if it's a failure, it's a specific error message. Actually, different structured error messages in different ways, some that would be pushed to the form, some that would be a flash message. There are actually fun, different things that we do there. But in the controller, when we interact with those result objects, typically what we'll do is we'll say result equals create user dot run, (result = CreateUser.run), and then pass it whatever data it needs. And then on the next line, we'll say result dot either, (result.either), which is a method on these result objects. It's on both the success and failure, so you can treat them the same. And then you pass what ends up being a lambda or a stabby proc, or I forget what they are. But one of those sort of inline function type things in Ruby that always feel kind of weird. But you pass one of those, and you actually pass two of them, one for the success case and one for the failure case. And so in the success case, we redirect back with a notice of congratulations, your user was created. Or, in the failure case, we potentially do a flash message of an alert, or we send the errors down, or whatever it ends up being. But it allows us to handle both of those cases. But it's always been syntactically terrible, is how I would describe it. It's, yeah, I'm just going to leave it at that. We are now living in a wonderful, new world. This has been something that I've wanted to try for a while. But I finally realized we're actually on Ruby 2.7, and thus, we have access to pattern matching in Ruby.
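A rough sketch of both the either style just described and the pattern-matching rewrite Chris goes on to describe, assuming a hypothetical CreateUser command whose failure payloads are invented for illustration, and assuming the controller includes Dry::Monads[:result] so that Success and Failure resolve:

    # The lambda-based style: one handler for Success, one for Failure.
    result = CreateUser.run(user_params)

    result.either(
      ->(user)    { redirect_to user_path(user), notice: "Welcome!" },
      ->(failure) { redirect_to new_user_path, alert: failure.to_s }
    )

    # The Ruby 2.7+ pattern-matching style from the dry-monads docs:
    # one Success branch, each distinct failure destructured by shape.
    case CreateUser.run(user_params)
    in Success(user)
      redirect_to user_path(user), notice: "Welcome!"
    in Failure[:invalid_params, errors]
      render :new, locals: { errors: errors }
    in Failure[:email_taken, message]
      redirect_to new_user_path, alert: message
    end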
So I get to take it for a spin for the first time, realizing that we were already on the correct version. And in particular, dry-monads has a page in their docs specific to how we can take advantage of pattern matching with the result objects that they provide us. There's nothing specific in the library as far as I understand it. This is just them showing a bunch of examples of how one might want to do it if they're working with these result objects. But it's really great because it gives the ability to interact with, you know, success is typically going to be a singular case. There's one success branch to this whole logic, but there are like seven different ways it can fail. And that's the whole idea as to why we use these command objects and the whole Railway Oriented Programming and that whole thing which I have...what is this word? [laughs] I feel like I should know it. It's a positive rant. I have raved; that is how our users kindly pointed that out to us. I have raved about the Railway Oriented Programming that allows us to do. But it's that idea that they're actually, you know, there's one happy path, and there are seven distinct failure modes, seven unhappy paths. And now, using pattern matching, we actually get a really expressive, readable, useful way to destructure each of those distinct failures to work with the particular bits of data that we need. So it was a very happy day, and I got to explore it. This is, again, a feature of Ruby, not a feature of dry-monads. But dry-monads just happens to embrace it and work really well with it. So that was awesome. STEPH: That is awesome. I've seen one or two; I don't know, I've seen a couple of tweets where people are like, yeah, Ruby pattern matching. I haven't found a way to use it. So I'm excited that you just shared a way that you found to use it. I'm also worried what it says about our developer culture that we know the word rant so well, but rave, we always have to reach back into our memory to be like, what's that positive word or something that we like? [laughs] CHRIS: And especially here on The Bike Shed, where we try to gravitate towards the positive. But yeah, it's an interesting point that you make. STEPH: We're a bunch of ranters. It's what we do, pranting ranters. I don't know why we're pranting. [laughs] CHRIS: Because it's that exciting. That's what it is. Actually, there was an interesting thing as we were playing around with the pattern matching code, just poking around in the console session with it, and it prints out a deprecation warning. It's like, warning: this is an experimental feature. Do not use it, be careful. But in the back of my head, I was like, I actually know how this whole thing plays out, Ruby 2.7, and I assure you, it's going to be fine. I have been to the future, at least I'm pretty sure. I think the version that is in Ruby 2.7 did end up getting adopted basically as it stands. And so, I think there is also a setting to turn off that deprecation warning. I haven't done it yet, but I mostly just enjoyed the conversation that I had with this deprecation message of like, listen, I've been to the future, and it's great. Well, it's complicated, but specific to this pattern matching [laughs] in Ruby 3+ versions, it went awesome. And I'm really excited about that future that we now live in. STEPH: I wish we had that for so many more things in our life [laughs] of like, here's a warning, and it's like, no, no, I've seen the future. It's all right. 
Or you're totally right; I should avoid and back out of this now. CHRIS: If only we could know how things would play out, you know. But yeah, so pattern matching, very cool. I'll include a link in the show notes to the particular page in the dry-monads docs. But there are also other cool things on the internet. In an unrelated but also cool thing that I found this week, we use Tuple a lot within our organization for pair programming. For anyone who's not familiar with it, it's a really wonderful piece of technology that allows you to pair program pretty seamlessly, better video quality, all of those nice things that we want. But I found there was just the tiniest bit of friction in starting a Tuple call. I know I want to pair with this person. And I have to go up and click on the little menu bar, and then I have to find their name, then I have to click a button. That's just too much. That's not how...I want to live my life at the keyboard. I have a thing called Bartender, which is a little menu bar manager utility app that will collapse down and hide the icons. But it's also got a nice little hotkey-accessible pop-up window that allows me to filter down and open one of the menu bar pop-out menus. But unfortunately, when that happens, the Tuple window isn't interactive at that point. I can't use the arrow keys to go up and down. And so I was like, oh, man, I wonder if there's like an Alfred workflow for this. And it turns out there is indeed, actually managed by the kind folks at Tuple themselves. So I was able to find that, install it; it's great. I have it now. I can use that. So that was a nice little upgrade to my workflow. I can just type like TC space and then start typing out the person's name, and then hit enter, and it will start a call immediately. And it doesn't actually make me more productive, but it makes me happier. And some days, that's what matters. STEPH: That's always so impressive to me when that happens where you're like, oh, I need a thing. And then you went through the saga that you just went through. And then the people who manage the application have already gotten there ahead of you, and they're like, don't worry, we've created this for you. That's one of those just beautiful moments of like, wow, y'all have really thought this through on a bunch of different levels and got there before me. CHRIS: It's somewhat unsurprising in this case because it's a very developer-centric organization, and Ben's background being a thoughtbot developer and Alfred user, I'm almost certain. Although I've seen folks talking about Raycast, which is the new hotness in the quick launcher world. I started eons ago in Quicksilver, and then I moved to Alfred, I don't know, ten years ago. I don't know what time it is anymore. But I've been in Alfred land for a while, but Raycast seems very cool. Just as an aside, I have not allowed myself... [laughs] this is another one of those, like, I do not have permission to go explore this new tool yet because I don't think it will actually make me more productive, although it could make me happier. So... STEPH: I haven't heard of that one, Raycast. I'm literally adding it to the show notes right now as a way so you can find The Duke later, and I can find Raycast later [chuckles] and take a look at it and check it out. Although I really haven't embraced the whole Alfred workflow. I've seen people really enjoy it and just rave about it and how wonderful it is. But I haven't really leaned into that part of the world; I don't know why.
I haven't set any hard and fast rules for myself where I can't play around with these technologies, but I haven't taken the time to do it either. CHRIS: You've also not found yourself writing thousands of lines of Vimscript because you thought that was a good idea. So you don't need as many guardrails, it would seem. That's my guess. STEPH: This is true. CHRIS: Whereas I need to be intentional [laughs] with how I structure my interaction with my dev tools. STEPH: Instead, I'm just porting over fixtures from one place to another. [laughs] That's the weird space that I'm living in instead. [laughs] CHRIS: But you're getting paid for that. No one paid me for the Vimscript I wrote. [laughter] STEPH: That's fair. Speaking of process-y things, there's something that's been on my mind that Valeria, another thoughtboter, suggested around how we structure our meetings and the default timing that we have for meetings. So Thursdays are my team-focused day. And it's the day where I have a lot of one-on-ones. And I realized that I've scheduled them back to back, which is problematic because then I have zero break in between them. I'm less concerned about that, because I can go for an hour or something and not have a break. And I'm not worried about that part. But it does mean that if one of those discussions happens to go over just even for like two or three minutes, then it means that someone else is waiting for me in those two to three minutes. And that feels unacceptable to me. So Valeria brought up a really good idea. I think it's only with the Google Meet paid version; I could be wrong there. But I think with the paid version, you can set a new default for how long a meeting is going to last. So instead of having it default to 30 minutes, have it default to 25 minutes. So then, that way, you do have that five-minute buffer. So if you do go over just like two or three minutes with someone, you've still got like two minutes to then hop to the next call, and nobody's waiting for you. Or if you want those five minutes to then grab some water or something like that. So we haven't implemented it just yet because then there's discussion around is this a new practice that we want everybody to move to? Because I mean, if just one person does it, it doesn't work. You really need everybody to buy into the concept of we're now defaulting to 25 versus 30-minute meetings. So I'll have to let you know how that goes. But I'm intrigued to try it out because I think that would be very helpful for me. Although there's a part of me that then feels bad because it's like, well, if I have 30 minutes to chat with somebody, but now I'm reducing it to 25 minutes each time, I don't love that I'm taking time away from our discussion. But that still feels like a better outcome than making somebody wait for three to five minutes if something else goes over. So have you ever run into something like that? How do you manage back-to-back meetings? Do you intentionally schedule a break in between or? CHRIS: I do try to give myself some buffer time. I stack meetings but not so much so that they're just back to back. So I'll stack them like Wednesdays are a meeting-heavy day for me. That's intentional just to be like, all right, I know that my day is going to get chopped up. So let's just really lean into that, chop the heck out of Wednesday afternoons, and then the rest of the week can hopefully have slightly longer deep work-type sessions.
And, yeah, in general, I try and have like a little gap in between them. But often what I'll do for that is I'll stagger the start of the next meeting: rather than on the hour or the half-hour, I start it on the 15th minute. And so then I now have these little 15-minute gaps in my workflow, which is enough time to do one or two small things or to go get a drink or whatever it is or if things do run over. Like, again, I feel what you're saying of like, I don't necessarily want to constrain a meeting. Or I also don't necessarily want to go into the habit of often over-running. I think it's good to be intentional. Start meetings on time, end meetings on time. If there's a great conversation that's happening, maybe there's another follow-up meeting that should happen or something like that. But for as nonsensical of a human as I believe myself to be, I am rather rigid about meetings. I try very hard to be on time. I try very hard to wrap them up on time to make sure I go to the next one. And so with that, the 15-minute staggering is what I've found works for me. STEPH: Yeah, that makes sense. One-on-ones feel special to me because I wholeheartedly agree with being very diligent about like, hey, this is our meeting time. Let's do a time check. Someone says that at the end, and then that way, everybody can move on. But with one-on-ones, there's more open discussion space, and I hate cutting people off, especially because it might not be until the last 15 minutes that you really got into the meat of the conversation. Or you really got somewhere that's a little bit more personal or things that you want to talk about. So if someone's like, "Yeah, let me tell you about my life goals," and you're like, "Oh, no, wait, sorry. We're out of time." That feels terrible and tragic to do. So I struggle with that part of it. CHRIS: I will say actually, on that note, I'm now thinking through, but I believe this to be true. Everyone that reports to me I have a 45-minute one-on-one with, and then my CEO I set up the one-on-one. So I also made that one a 45-minute one-on-one. And that has worked out really well. Typically, I try and structure it and reiterate this from time to time of, like, hey, this is your space, not mine. So let's have whatever conversation fits in here. And it's fine if we don't need to use the whole time, but I want to make sure that we have it and that we protect it. Because I often find much like retro, I don't know; I think everything's fine. And then suddenly the conversation starts, and you're like, you know what? Actually, I'm really concerned now that you mentioned it. And you need that sort of empty space that then the reality sort of pops up into. And so with one-on-ones, I try and make sure that there is that space, but I'm fine with being like, we can cut this short. We can move on from one-on-one topics to more of status updates; let's talk about the work. But I want to make sure that we lead with is there anything deeper, any concerns, anything you want to talk through? And sort of having the space and time for that.
So if people want to have back-to-back meetings, they still have a little bit of time in between. But for one-on-ones specifically, upping it to 45 minutes feels nice because then you've got that 15-minute buffer likely. I mean, maybe you schedule a meeting, but, I don't know, that's funky. But likely, you've got a 15-minute buffer until your next one. And then that's also an area that I feel comfortable in sharing with folks and saying, "Hey, I've booked this whole 45 minutes. But if we don't need the whole time, that's fine." I'm comfortable saying, "Hey, we can end early, and you can get more of your time back to focus on some other areas." It's more the cutting someone off when they're talking because I have to hop to the next thing. I absolutely hate that feeling. So thanks, I think I'll give that a go. I think I'll try actually bumping it up to 45 minutes, presuming that other people like that strategy too, since they're opting in [laughs] to the 45-minute structure. But that sounds like a nice solution. CHRIS: Well yeah, happy to share it. Actually, one interesting thing that I'm realizing, having been a manager at thoughtbot and then now being a manager within Sagewell, the nature of the interactions is very different. With thoughtbot, I was often on other projects. I was not working with my team day to day in any real capacity. So it was once every two weeks, I would have this moment to reconnect with them. And there was some amount of just catching up. Ideally, not like status update, low-level sort of thing, but sort of just like hey, what have you been working on? What have you been struggling with? What have you been enjoying? I needed bigger space for that, I would say, so it's not surprising to me that you're bumping into 30 minutes not being quite long enough. Whereas regularly, in the one-on-ones that I have now, we end up cutting them short or shifting out of true one-on-one mode into more general conversation and chatting about Raycast or other tools or whatever it is because we are working together daily. And we're pairing very regularly, and we're all on the same project and all sorts of in sync and know what's going on. And we're having retro together. We have plenty of places to have the conversation. So the one-on-one again, still, I keep the same cadence and the same time structure just because I want to make sure we have the space for any day that we really need that. But in general, we don't. Whereas when I was at thoughtbot, it was all the more necessary. And I think for folks listening, I could imagine if you're in a team lead position and if you're working very closely with folks, then you may be on the one side of things versus if you're a little bit more at a distance from the work that they're doing day to day. That's probably an interesting question to ask, and think about how you want to structure it.
It's specifically reading the book Ruby Science, which is a book that was written and published by thoughtbot. And it requires zero homework, which is my favorite type of book club. Because I have found I always want to be part of book clubs. I'm always interested in them, but then I'm not great at budgeting the time to make sure I read everything I'm supposed to read. And so then it comes time for folks to get together. And I'm like, well, I didn't do my homework, so I can't join it. But for this one, it's being led by Joël, and the goal is that you don't have to do the homework. And they're just really short sections. So whoever's in charge of leading that particular session of the book club they're going to provide an overview of what's covered in whatever the reading material that we're supposed to read, whatever topic we're covering that day. They're going to provide an overview of it, an example of it, so then we can all talk about it together. So if you read it, that's wonderful. You're a bit ahead and could even join the meeting like five minutes late. Or, if you haven't read it, then you could join and then get that update. So I'm very excited about it. And this was one of those books that I'd forgotten that thoughtbot had written, and it's one that I've never read. And it's public for anybody that's interested in it. So to cover a little bit of details about it, so it talks about code smells, ways to refactor code, and then also common patterns that you can use to solve some issues. So there's a lot of really just great content that's in it. And I'll be sure to include a link in the show notes for anyone else that's interested. CHRIS: And again, to reiterate, this book is free at this point. Previously, in the past, it was available for purchase. But at one point a number of years ago, thoughtbot set all of the books free. And so now that along with a handful of other books like...what's Edward's DNS book? Domain Name Sanity, I believe, is Edward's book name that Edward Loveall wrote when he was not a thoughtboter, [laughs] and then later joined as a thoughtboter, and then we made the book free. But on the specific topic of Ruby Science, that is a book that I will never forget. And the reason I will never forget it is that book was written by the one and only CTO Joe Ferris, who is an incredibly talented developer. And when I was interviewing with thoughtbot, I got down to the final day, which is a pairing session. You do a morning pairing session with one thoughtbot developer, and you do an afternoon pairing session with another thoughtbot developer. So in the morning, I was working with someone on actually a patch to Rails which was pretty cool. I'd never really done that, so that was exciting. And that went fine with the exception that I kept turning on Caps Lock on their keyboard because I was used to Caps Lock being CTRL, and then Vim was going real weird for me. But otherwise, that went really well. But then, in the afternoon, I was paired with the one and only CTO Joe Ferris, who was writing the book Ruby Science at that time. And the nature of the book is like, here's a code sample, and then here's that code sample improved, just a lot of sort of side-by-side comparisons of code. And I forget the exact way that this went, but I just remember being terrified because Joe would put some code up on the screen and be like, "What do you think?" And I was like, oh, is this the good code or the bad code? I feel like I should know. I do not know. I'm not sure. 
It worked out fine, I guess. I made it through. But I just remember being so terrified at that point. I was just like, oh no, this is how it ends for me. It's been a good run. STEPH: [laughs] CHRIS: I made it this far. I would have loved to work for this nice thoughtbot company, but here we are. But yeah, I made it through. [laughs] STEPH: There are so many layers to that too where it's like, well, if I say it's terrible, are you going to be offended? Like, how's this going to go for me if I speak my truths? Or what am I going to miss? Yeah, that seems very interesting (I kind of like it) but also a terrifying pairing session. CHRIS: I think it went well because I think the code...I'd been following thoughtbot's work, and I knew who Joe was and had heard him on podcasts and things. And I kind of knew roughly where things were, and I was like, that code looks messy. And so I think I mostly got it right, but just the openness of the question of like, what do you think? I was like, oh God. [laughs] So yeah, that book will always be in my memories, is how I would describe it. STEPH: Well, I'm glad it worked out so we could be here today recording a podcast together. [laughs] CHRIS: Recording a podcast together. Now that I say all that, though, it's been a long time since I've read the book. So maybe I'll take a revisit. And definitely interested to hear more about your book club and how that goes. But shifting ever so slightly (I don't have a lot to say on this topic), there's a new framework technology thing out there that has caught my attention. And this hasn't happened for a while, so it's kind of novel for me. So I tend to try and keep my eye on where is the sort of trend of web development going? And I found Inertia a while ago, and I've been very, very happy with that as sort of this is the default answer as to how I build websites. To be clear, Inertia is still the answer as to how I build websites. I love Inertia. I love what it represents. But I'm seeing some stuff that's really interesting that is different. Specifically, Remix.run is the thing that I'm seeing. I mentioned it, I think, in the last episode, talking about some stuff that they were doing with data loading and async versus synchronous, and do you wait on it or not? They had built some really nice levers and trade-offs into the framework. And there's a really great talk that Ryan Florence, one of the creators of Remix.run, gave about that and showed what they were building. I've been exploring it a little bit more in-depth now. And there is some really, really interesting stuff in Remix. In particular, it's a meta-framework, I think, is the nonsense phrase that we use to describe it. But it's built on top of React. That won't be true forever. I think, actually, they would say it's more built on top of React Router. But it is very similar to Next.js for folks that have seen that. But it's got a little bit more thought around data loading. How do we change data? How do we revalidate data after? There's a ton of stuff where, having worked in many React client-side API-heavy apps, there's so much pain: cache invalidation. How do you think about the cache? When do you fetch from the network? How do you avoid showing 19 different loading spinners on the page? And Remix as a framework has some really, I think, robust and well-thought-out answers to a lot of that. So I am super-duper intrigued by what they're doing over there. There's a particular video that I think shows off what Remix represents really well.
It's Ryan Florence, that same individual, the creator of Remix, building just a newsletter signup page. But he goes through like, let's start from the bare bones, simplest thing. It's just an input, and a form submits to the server. That's it. And so we're starting from web 2.0, long, long ago, sort of ideas, and then he gradually enhances it with animations and transitions and error states. And even at the end, goes through an accessibility audit using the screen reader to say, "Look, Remix helps you get really close because you're just using web fundamentals." But then goes a couple of steps further and actually makes it work really, really well for a screen reader. And, yeah, overall, I'm just super impressed by the project, really, really intrigued by the work that they're doing. And frankly, I see a couple of different projects that are sort of in this space. So yeah, again, very early but excited. STEPH: On their website...I'm checking it out as you're walking me through it, and on their website, they have "Say goodbye to Spinnageddon." And that's very cute. [laughs] CHRIS: There's some fundamental stuff that I think we've just kind of as a web community, we made some trade-offs that I personally really don't like. And that idea of just spinners everywhere just sending down a ball of application logic and a giant JavaScript file turning it on on someone's computer. And then immediately, it has to fetch back to the server. There are just trade-offs there that are not great. I love that Remix is sort of flipping that around. I will say, just to sort of couch the excitement that I'm expressing right now, that Remix exists in a certain place. It helps with building complex UIs. But it doesn't have anything in the data layer. So you have to bring your own data layer and figure out what that means. We have ActiveRecord within Rails, and it's deeply integrated. And so you would need to bring a Prisma or some other database connection or whatever it is. And it also doesn't have more sort of full-featured framework things. Like with Rails, it's very easy to get started with a background job system. Remix has no answer to that because they're like, no, no, this is what we're doing over here. But similarly, security is probably the one that concerns me the most. There's an open conversation in their discussion portal about CSRF protection and a back and forth of whether or not Remix should have that out of the box or not. And there are trade-offs because there are different adapters that you can use for auth. And each would require their own CSRF mitigation. But to me, that is the sort of thing that I would want a framework to have. Or I'd be interested in a framework that continues to build on top of Remix that adds in background jobs and databases and all that kind of stuff as a complete solution, something more akin to a Rails or a Laravel where it's like, here we go. This is everything. But again, having some of these more advanced concepts and patterns to build really, really delightful UIs without having to change out the fundamental way that you're building things. STEPH: Interesting. Yeah, I think you've answered a couple of questions that I had about it. I am curious as to how it fits into your current tech stack. So you've mentioned that you're excited and that it's helpful. But given that you already have Rails, and Inertia, and Svelte, does it plug and play with the other libraries or the other frameworks that you have? 
Are you going to have to replace something to then take advantage of Remix? What does that roadmap look like? CHRIS: Oh yeah, I don't expect to be using Remix anytime soon. I'm just keeping an eye on it. I think it would be a pretty fundamental shift because it ends up being the server layer. So it would replace Rails. It would replace the Inertia within the stack that I'm using. This is why as I started, I was like, Inertia is still my answer. Because Inertia integrates really well with Rails and allows me to do the sort of it's not progressive enhancement, but it's like, I want fancy UI, and I don't want to give up on Rails. And so, Inertia is a great answer for that. Remix does not quite fit in the same way. Remix will own all of the request-response lifecycle. And so, if I were to use it, I would need to build out the rest of that myself. So I would need to figure out the data layer. I would need to figure out other things. I wouldn't be using Rails. I'm sure there's a way to shoehorn the technologies together, but I think it sort of architecturally would be misaligned. And so my sense is that folks out there are building...they're sort of piecing together parts of the stack to fill out the rest. And Remix is a really fantastic controller and view from their down experience and routing layer. So it's routing, controller, view I would say Remix has a really great answer to, but it doesn't have as much of the other stuff. Whereas in my case, Inertia and Rails come together and give me a great answer to the whole story. STEPH: Got it. Okay, that's super helpful. CHRIS: But yeah, again, I'm in very much the exploratory phase. I'm super intrigued by a lot of what I've seen of it and also just sort of the mindset, the ethos of the project as it were. That sounds fancy as I say it, but it's what I mean. I think they want to build from web fundamentals and then enhance the experience on top of that, and I think that's a really great way to go. It means that links will work. It means that routing and URLs will work by default. It means that you won't have loading spinner Armageddon, and these are core fundamentals that I believe make for good websites and web applications. So super interested to see where they go with it. But again, for me, I'm still very much in the Rails Inertia camp. Certainly, I mean, I've built Sagewell on top of it, so I'm going to be hanging out with it for a while, but also, it would still be my answer if I were starting something new right now. I'm just really intrigued by there's a new example out there in the world, this Remix thing that's pushing the envelope in a way that I think is really great. But with that, my now…what was that? My second or my third rave? Also called the positive rant, as we call it. But yeah, I think on that note, what do you think? Should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!!! 
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
341: Fundamentals and Weird Stuff

The Bike Shed

Play Episode Listen Later Jun 7, 2022 35:27


Steph and Chris are recording together! Like, in the same room, physically together. Chris talks about slowly evolving the architecture in an app they're working on and settling on directory structure. Steph's still working on migrating unit tests over to RSpec. They answer a listener question: "As senior-level developers, how do you set goals to ensure that you keep growing?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Faking External Services In Tests With Adapters (https://thoughtbot.com/blog/faking-external-services-in-tests-with-adapters) Testing Third-Party Interactions (https://thoughtbot.com/blog/testing-third-party-interactions) Jen Dary - On Future Goals (https://www.beplucky.com/on-future-goals/) Charity Majors - The Engineer Manager Pendulum (https://charity.wtf/2017/05/11/the-engineer-manager-pendulum/) Charity Majors Bike Shed Episode (https://www.bikeshed.fm/302) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What is new in my world? Actually, this episode feels different. There's something different about it. I can't quite put my finger on it. I think it may be that we're actually physically in the same room recording for the first time in two years and a little bit more, which is wild. STEPH: I can't believe it's been that long. I feel like it wasn't that long ago that we were in The Bike Shed...oh, I said The Bike Shed studio. I'm being very biased. Our recording studio [laughs] is the more proper description for it. Yeah, two and a half years. And we tried to make this happen a couple of months ago when I was visiting Boston, and then it just didn't work out. But today, we made it. CHRIS: Today, we made it. Here we are. So hopefully, the audio sounds great, and we get all the more richness in conversation because of the physical in-person manner. We're trying it out. It'll be fun. But let's see, on to the normal tech talk and nonsense, what's new in my world? So we've been slowly evolving the architecture in the app that we're working on. And we settled on something that I kind of like, so I wanted to talk about it, directory structure, probably the most interesting topic in the world. I think there's some good stuff in here. So we have the normal stuff. There are app models, app controllers, all those; those make sense. We have app jobs, which right now, I would say, is in a state of flux. We're in the sad place where some things are ActiveJob jobs, and some things are Sidekiq workers. We have made the decision to consolidate everything onto Sidekiq workers, which is just strictly more powerful, so that's the direction we're going to go. But for right now, I'm not super happy with the state of app jobs, but whatever, we have that. But on to the things that I like: so we have app commands; I've talked about app commands before. Those are command objects. They use dry-rb do notation, and they allow us to sequence a bunch of things that all may fail, and we can process them all in a much more reasonable way. It's been really interesting exploring that, building on it, introducing it to new developers who haven't worked in that mode before.
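A minimal sketch of a do-notation command of the kind Chris describes; the CreateUser name and its steps are hypothetical, but the yield-based short-circuiting is dry-monads' documented behavior:

    require "dry/monads"

    class CreateUser
      include Dry::Monads[:result, :do]

      def call(params)
        attrs = yield validate(params) # yield unwraps a Success...
        user  = yield persist(attrs)   # ...or short-circuits on a Failure
        Success(user)
      end

      private

      def validate(params)
        params[:email] ? Success(params) : Failure([:invalid_params, "email is required"])
      end

      def persist(attrs)
        user = User.new(attrs)
        user.save ? Success(user) : Failure([:invalid_user, user.errors])
      end
    end

Each step returns a result, and the first Failure becomes the return value of the whole call, which is what lets a caller handle one happy path and several distinct failure modes.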
And everyone who's come into the project has picked it up very quickly, enjoyed it, and found it to be a nice expressive mode. So app commands, very happy with that. App queries is another one that we have. We've talked about this before, query objects. I know we're a big fan. [laughs] I got a golf clap across the room here, which I could see live in person. It was amazing. I could feel the wind wafting across the room from the golf clap. [chuckles] But yeah, query objects, they're fantastic. They take a relation, they return a relation, but they allow us to build more complex queries outside of our models. The new one, here we go. So this stuff would all normally fall into app services, and services don't mean anything. So we do not have an app services directory in our application. But the new one that we have is app clients. So these are all of our HTTP clients wrapping external third parties that we're interacting with. But with each of them, we've taken a particular structure, a particular approach. So for each of them, we're using the adapter pattern. There's a blog post on the Giant Robots blog that I can point to that sort of speaks to the adapter pattern that we're using here. But basically, in production mode, there is an HTTP backend that actually makes the real requests and does all that stuff. And in test mode, there is a test backend for each of these clients that allows us to build up a pretty representative fake, and so we're faking it up before the HTTP layer. But we found that that's a good trade-off for us. And then we can say, like, if this fake backend gets a request to /users, then we can respond in whatever way that we want. And overall, we found that pattern to be really fantastic. We've been very happy with it. So it's one more thing. All of them were just gathering in app models. And so it was only very recently that we said no, no, these deserve their own name. They are a pattern. We've repeated this pattern a bunch. We like this pattern. We want to even embrace this pattern more, so long live app clients. STEPH: I love it. I love app clients. It's been a while since I've been on a project that had that directory. But there was a greenfield project that I was working on. I think it might have been I was working with Boston.rb and working on giving them a new site or something like that and introduced app clients. And what you just said is perfect in terms of you've identified a pattern, and then you captured that and gave it its own directory to say, "Hey, this is our pattern. We've established it, and we really like it." That sounds awesome. It's also really nice as someone who's new to a codebase; if I jump in and if I look at app clients, I can immediately see what are the third parties that we're working with? And that feels really nice. So yeah, that sounds great. I'm into it. CHRIS: Yeah, I think it really was the question of like, is this a pattern we want to embrace and highlight within the codebase, or is this sort of a duplication, but irrelevant, like not really that important? And we decided no, this is a thing that matters. We currently have 17 of these clients, so 17 different third-party external things that we're integrating with. So for someone who doesn't really like service-oriented architecture, I do seem to have found myself in a place. But here we are, you know, we do what we have to with what we're given. But yes, 17 and growing, our app clients. STEPH: That is a lot. [laughs]
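A sketch of the adapter shape being described here, with invented names; the real clients wrap specific third parties, but the swap-the-backend idea looks roughly like this:

    require "net/http"
    require "uri"

    class DirectoryClient
      class << self
        attr_writer :backend # lets tests swap in a fake

        def backend
          @backend ||= HttpBackend.new
        end

        def user(id)
          backend.get("/users/#{id}")
        end
      end

      # Production backend: makes the real HTTP request.
      class HttpBackend
        def get(path)
          Net::HTTP.get(URI("https://directory.example.com#{path}"))
        end
      end

      # Test backend: answers from canned stubs, so specs never touch HTTP.
      class TestBackend
        def initialize
          @stubs = {}
        end

        def stub(path, response)
          @stubs[path] = response
        end

        def get(path)
          @stubs.fetch(path) { raise "no stub registered for #{path}" }
        end
      end
    end

A spec can then assign DirectoryClient.backend = DirectoryClient::TestBackend.new and register whatever response it needs, which is the "if this fake backend gets a request to /users" move Chris mentions: the fake sits just above the HTTP layer, so everything else in the client runs for real.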
I'm curious because you highlighted that app services, that's not really a thing. Like, it doesn't mean anything. It doesn't have the same meaning as the app queries directory or app commands or app clients where it's like, this is a pattern we've identified, and named, and want to propagate. For app services, I agree; it's that junk drawer. But I guess in some ways...well, I'm going to say something, and then I'm going to decide how I feel about it. That feels useful because then, if you have something but you haven't established a pattern for it, you need a place for it to go. It still needs to live somewhere. And you don't necessarily want to put it in app models. So I'm curious, where do you put stuff that doesn't have an established pattern yet? CHRIS: It's a good question. I think it's probably app models is our current answer. Like, these are things that model stuff. And I'm a big believer in the idea that it doesn't need to be an application record-backed object to go in app models. But slowly, we've been taking stuff out. I think it'd be very common for what we talk about as query objects to just be methods in the respective application record. So the user record, as a great example, has all of these methods for doing any sort of query that you might want to do. And I'm a fan of extracting that out into this very specific place called app queries. Commands are now another thing that I think very typically would fall into the app services place. Jobs, naming, that is something different. Clients. We've got serializers as another one that we have at the top level, so those are four. We use Blueprinter within the app. And again, it's sort of weird. We don't really have an API. We're using Inertia. So we are still serializing to JSON across the boundary. And we found it was useful to encapsulate that. And so we have serializers as a directory, but they just do that. We do have policies. We're using Pundit for authorization, so that's another one that we have. But yeah, I think the junk drawerness probably most goes to app models. But at this point, more and more, I feel like we have a place to put things. It's relatively clear: should this be in a controller, or should this be in a query object, or should this be in a command? I think I'm finding a place of happiness that, frankly, I've been searching for, for a long time. You could say my whole life I've been searching for this contented state of I think I know where stuff goes in the app, mostly, most of the time. I'm just going to say this, and now that you've asked the excellent question of like, yeah, but no, where are you hiding some stuff? I'm going to open up models. Next week I'm going to be like, oh, I forgot about all of that nonsense. But the things that we have defined I'm very happy with. STEPH: That feels really fair for app models. Because like you said, I agree that it doesn't need to be ActiveRecord-backed to go in app models. And so, if it needs to live somewhere, do you add a junk drawer, or do you just create app models and reuse that? And I think it makes a lot of sense to repurpose app models or to let things slide in there until you can extract them and let them live there until there's a pattern that you see. CHRIS: We do. There's one more that I find hilarious, which is app lib, which my understanding...I remember at one point having one of those afternoons where I'm just like, I thought stuff works, but stuff doesn't seem to work. I thought lib was a directory in Rails apps.
And it was like, oh no, now we autoload only under the app. So you should put lib under app. And I was just like, okay, whatever. So we have app lib with very little in it. [laughs] But that isn't so much a junk drawer as it is stuff that's like, this doesn't feel specific to us. This goes somewhere else. This could be extracted from the app. But I just find it funny that we have an app lib. It just seems wrong. STEPH: That feels like one of those directories that I've just accepted. Like, it's everywhere. It's like in all the apps that I work in. And so I've become very accustomed to it, and I haven't given it the same thoughtfulness that I think you have. I'm just like, yeah, it's another place to look. It's another place to go find some stuff. And then if I'm adding to it, yeah, I don't think I've been as thoughtful about it. But that makes sense that it's kind of silly that we have it, and that becomes like the junk drawer. If you're not careful with it, that's where you stick things. CHRIS: I appreciate your describing my point of view as thoughtfulness. I feel like I may actually be burdened with historical knowledge here because I worked on Rails apps long, long ago when lib didn't go in app, and now it does. And I'm like, wait a minute, but like, no, no, it's fine. These are the libraries within your app. I can tell that story. So, again, thank you for saying that I was being thoughtful. I think I was just being persnickety, and get off my lawn is probably where I was at. STEPH: Oh, full persnicketiness. Ooh, that's tough to say. [laughs] CHRIS: But yeah, I just wanted to share that little summary, particularly app clients is an interesting one. And again, I'll share the adapter pattern blog post because I think it's worked really well for us. And it's allowed us to slowly build up a more robust test suite. And so now our feature specs do a very good job of simulating the reality of the world while also dealing with the fact that we have these 17 external situations that we have to interact with. And so, how do you balance that VCR versus other things? We've talked about this a bunch of times on different episodes. But app clients has worked great with the adapter pattern, so once more, rounding out our organizational approach. But yeah, that's what's up in my world. What's up in your world? STEPH: So I have a small update to give. But before I do, you just made me think of something in regards to that article that talks about the adapter pattern. And there's also another article by Joël Quenneville on testing third-party interactions. And he made me reflect on a time when I was giving the RSpec course, and we were talking about different ways to test third-party interactions. And there are a couple of different ways that are mentioned in this article. There are stub methods on adapter, stub HTTP request, stub request to fake adapter, and stub HTTP request to fake service. All that sounds like a lot. But if you read through the article, then it gives an example of each one. But I've found that if you're in a space where you still don't feel great about testing third-party interactions and you're not sure which approach to take, working with one API and applying all four different strategies really helps cement how to work through that process and the different benefits of each approach and the trade-offs. And we did that during the RSpec course, and I found it really helpful just from the teacher perspective to go through each one.
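As a taste of the first of those four strategies, stubbing methods on the adapter, a hypothetical sketch; the PaymentGateway adapter and CheckoutService are invented for illustration:

    RSpec.describe CheckoutService do
      it "charges the customer through the gateway adapter" do
        # Stub the adapter's method, so no HTTP request is ever made.
        allow(PaymentGateway).to receive(:charge).and_return(true)

        CheckoutService.new(total_cents: 1_000).complete

        expect(PaymentGateway).to have_received(:charge).with(amount_cents: 1_000)
      end
    end

The other three strategies trade that simplicity for more realism, stubbing progressively closer to the HTTP boundary; the article walks through each one against the same API.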
And there were some great questions and discussions that came out of it. So I wanted to put that plug out there in the world that if you're struggling testing third-party interactions, we'll include a link to this article. But I think that's a really solid way to build a great foundation of, like, I know how to test a third-party app. Let me choose which strategy that I want to use. Circling back to what's going on in my world, I am still working on migrating unit tests over to RSpec. It's a thing. It's part of the work that I do. [laughs] I can't say it's particularly enjoyable, but it will have a positive payoff. And along that journey, I've learned some things or rediscovered some things. One of them is read expectations very carefully. So when I was migrating a test over to RSpec, I read it as where we expected a record to exist. The test was actually testing that a record did not exist. And so I probably spent an hour understanding, going through the code being like, why isn't this record getting created like I expect it to? And I finally went back, and I took a break, and I went back. And I was like, oh, crap, I read the expectation wrong. So read expectations very carefully. The other one...this one's not learned, but it is reinforced. Mystery Guests are awful. So as I've been porting over the behavior over to RSpec, the other tests are using fixtures, and I'm moving that over to use factorybot instead. And at first, I was trying to be minimal with the data that I was bringing over. That failed pretty spectacularly. So I have learned now that I have to go and copy everything that's in the fixtures, and then I move it over to factorybot. And it's painful, but at least then I'm doing that thing that we talked about before where I'm trying to load as little context as possible for each test. But then once I do have a green test, I'm going back through it, and I'm like, okay, we probably don't care about when you were created. We probably don't care when you're updated because every field is set for every single record. So I am going back and then playing a game of if I remove this line, does the test still pass? And if I remove that line, does the test still pass? And so far, that has been painful, but it does have the benefit of then I'm removing some of the setup. So Mystery Guests are very painful. I've also discovered that custom error messages can be tricky because I brought over some tests. And some of these, I'm realizing, are more user error than anything else. Anywho, I added one of the custom error messages that you can add, and I added it over to RSpec. But I had written an incorrect, invalid statement in RSpec where I was looking...I was expecting for a record to exist. But I was using the find by instead of where. So you can call exist on the ActiveRecord relation but not on the actual record that gets returned. But I had the custom error message that was popping up and saying, "Oh, your record wasn't found," and I just kept getting that. So I was then diving through the code to understand again why my record wasn't found. And once I removed that custom error message, I realized that it was actually because of how I'd written the RSpec assertion that was wrong because then RSpec gave me a wonderful message that was like, hey, you're trying to call exist on this record, and you can't do that. Instead, you need to call it on a relation. 
So I've also learned don't bring over custom error messages until you have a green test, and even then, consider if it's helpful because, frankly, the custom error message wasn't that helpful. It was very similar to what RSpec was going to tell us in general. So there was really no need to add that custom step to it. The final bit that I've learned, or the hurdle that I've been facing, is that migrating test descriptions is hard unless they map over. So RSpec has the context and then a description for it that goes with the test. Test::Unit just has method names instead. So it may be something like test redemption codes, and then it runs through the code. Well, as I'm trying to migrate that knowledge over to RSpec, it's not clear to me what we're testing. Okay, we're testing redemption codes. What about them? Should they pass? Should they fail? Should they change? What are they redeeming? There's very little context. So a lot of my tests are copying that method name, so I know which tests I'm focused on, and I'm bringing over. And then in the description, it's like, Steph needs help adding a test description, and then I'm pushing that up and then going to the team for help. So they can help me look through to understand, like, what is it that this test is doing? What's important about this domain? What sort of terminology should I include? And that has been working, but I didn't see that coming as part of this whole migrating stuff over. I really thought this might be a little bit more of a copypasta job. And I have learned some trickery is afoot. And it's been more complicated than I thought it was going to be. CHRIS: Well, at a minimum, I can say thank you for sharing all of your hard-learned lessons throughout this process. This does sound arduous, but hopefully, at the end of it, there will be a lot of value and a cleaned-up test suite and all of those sorts of things. But yeah, it's been an adventure you've been on. So on behalf of the people who you are sharing all of these things with, thank you. STEPH: Well, thank you. Yeah, I'm hoping this is very niche knowledge, that there aren't many people in the world that are doing this exact work, and that this just happens to be what this team needs. So yeah, it's been an adventure. I've certainly learned some things from it, and I still have more to go. So not there yet, but I'm also excited for when we can actually then delete this portion of the build process. And then also, I think, get rid of fixtures because I didn't think about that from the beginning either. But now that I'm realizing that's how those tests are working, I suspect we'll be able to delete those. And that'll be really nice because now we also have another single source of truth in FactoryBot as to how valid records are being built. Mid-Roll Ad: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. Pivoting just a bit, there's a listener question that I'm really excited for us to dive into. And this listener question comes from Joël Quenneville. Hey, Joël. All right, so Joël writes in, "As senior-level developers, how do you set goals to ensure that you keep growing? How do you know what are high-value areas for you to improve? How do you stay sharp? Do you just keep adding new languages to your tool belt? Do you pull back and try to dig into more theoretical concepts? Do you feel like you have enough tech skills and pivot to other things like communication or management skills? At the start of a dev career, there's an overwhelming list of things that it feels like you need to know all at once. Eventually, there comes a point where you no longer feel like you're drowning under the list of things that you need to learn. You're at least moderately competent in all the core concepts. So what's next?" This is a big, fun, scary question. I really like this question. Thank you, Joël, for sending it in because I think there's so much here that can be discussed. I can kick us off with a few thoughts. I want to first highlight one of the things that resonates with me from this question: how Joël describes going through and reaching senior status, and how it can really feel like working through a backlog of features. So as a developer, I want to understand this particular framework, or as a developer, I want to be able to write clear and fast tests, or as a developer, I want to contribute to an open-source project. But now that that backlog is empty, you're wondering what's next on your roadmap, which is where I think that sort of big, fun scariness comes into play. So the first idea is to take a moment and embrace that success. You have probably worked really hard to get where you're at in your career. And there's nothing wrong with taking a pause and enjoying the view and just being appreciative of the fact that you are able to get your work done quickly or that you feel very confident in the work that you're doing. Growth is often very important to our careers, but I also think it's important to recognize when you've achieved certain growth and then, if you want to, just enjoy that and pause. And you're not constantly pushing yourself to the next level. I think that is a totally reasonable and healthy thing to do.
The second thing that comes to mind is that you're on a Choose Your Own Adventure mode now, so you get to...I would encourage folks, once you've reached this stage, to reflect on where you're at and consider what is your dream? What are your aspirations? Maybe they're related to tech; maybe they're not. And consider where is it that you want to go next? And then, what are the concrete steps that will help you achieve those goals? So there's a really great blog post by Jen Dary, who's a career coach and owner of Plucky Manager Training, that describes this process, and I'll be sure to include a link to it in the show notes. She has a couple of great questions that will then help you identify, like, what are my goals? Some of those questions are, "If I could do anything and money wasn't an object, I'd spend my time doing dot dot dot." And that doesn't necessarily mean sitting on a beach with your toes in the sand all day. I mean, it could, but then that probably just means you need a vacation. So take the vacation. And then, once you start to get bored, where does your mind start to wander? What are the things that you want to do? Where are you interested in spending your time? And then, once you have an idea of how you'd like to spend your time, you can consider what actions you could take next that will point you in that direction. There's also the benefit that by this point, you probably have an idea of the type of things that you like to do and where you like to spend your time. And so you can figure out which areas of expertise you want to invest in. So do you like more greenfield projects? Do you like architecture discussions? Do you like giving talks? Do you like teaching? Or maybe you're interested in management. I think there's also a more concrete approach that you can take. You can just talk to the managers on your team and say, "Hey, what big, hard problems are you looking to solve?" And then, you can get some inspiration from them and see if their problems align with your interests. Maybe it's not even your own team, but you can talk to other companies and see what problems other industries are trying to solve. That might be an area that then spurs some curiosity or some interest. And then, where do you feel underutilized? So with your current day-to-day, are there areas where you feel that you wish you had more responsibilities or more opportunities, but you feel like you don't have access to those opportunities? Maybe that's an area to explore as well. This feels like a wonderful coaching question in terms of you have done it; congratulations. You've reached a really great spot in your career. And so now you're figuring out that big next step. This is going to be highly customized to each person, figuring out what it is that's going to help you feel fulfilled over the next five years, ten years, however long you want to project out. Those are some of my thoughts. How about you? What do you think? CHRIS: Well, first, those are some great thoughts. I appreciate that I get to follow them now. It's going to be a hard act to follow. But yeah, I think Joël has asked a fantastic question. And coming from Joël, I know how intentional and thoughtful a learner, and sharer, and teacher, and all of these things he is. So it's all the more sort of framed in that for me knowing Joël personally.
I think to start, the kernel of the question is as senior developers, that's the like...or senior level developers is the way Joël phrased it, but it treats it as sort of this discrete moment in time, which I think there's maybe even something to unpack. And I think we've probably talked about this in previous episodes, but like, what does that even mean? And I think part of the story here is going from reactive where it's like, I don't know how anything works. I know a little bit. I can code some. And every day, I'm presented with new problems that I just don't understand. And I'm trying to build up that base of knowledge. Slowly, you know, you start, and it's like 95% of the time you feel like that. And slowly, the dial switches over, and maybe it's only 25% of the time you feel like that. Somewhere along that spectrum is the line of senior developer. I don't actually know where it is, but it's somewhere in there. And so I think it's that space where you can move from reactive learning things as necessitated by the work that's coming at you to I want to proactively choose the things that I want to be learning to try and expand the stuff that I know, and the ways that I can think about the work without being in direct response to a piece of work coming at me. So with that in mind, what do you do with this proactive space? And I think the way Joël frames the question, again, to what I know of Joël, he's such an intentional person. And I wouldn't be surprised if Joël is very purposeful and thinks about this and has approached it as a specific thing that he's doing. I have certainly been in more of “I'll figure it out when I get there.” I'll explore. Or actually, probably the most pointed thing that I did was I joined a consultancy. And that was a very purposeful choice early on in my career because I'm like, I think I know a little bit. I don't think I know a lot. I would like to know a lot. That seems fun. So what do I think is the best way to do that? My guess, and it turned out to be very much true, is if I join a consultancy, I'm going to see a bunch of different projects, different types of technologies, organizations, communication structure, stuff that works, stuff that doesn't work. And to be honest, I actually thought I would try out the consultancy thing for a little while, like a year or two, and then go on to my next adventure. Spoiler alert: I stayed for seven years. It was one of the best periods of my professional life. And I found it to be a much deeper well than I expected it to be. But for anyone that's looking for, like, how can I structure my career in a way that will just automatically provide the sort of novelty and space to grow? I would highly recommend a consultancy like thoughtbot. I wonder if they're hiring. STEPH: Well, yes, we are hiring. That was a perfect plug that I wasn't expecting for that to come. But yes, thoughtbot is totally hiring. We'll include a link in the show notes to all the jobs. [laughs] CHRIS: Sounds fantastic. But very sincerely, that was the best choice that I could have made and was a way to flip the situation around such that I don't have to be thinking about what I want to be learning. The learning will come to me. But even within that, I still tried to be intentional from time to time. And I would say, again, I don't have a holistic theory of how to improve. I just have some stuff that's kind of worked out well. 
One thing is focusing on fundamentals wherever I can, or a different way to put it is giving myself permission to spend a little bit more time whenever my work brushes up against what I would consider fundamentals. So things that are in that space are like SQL. Every time I'm working on something, I'm like, ah, I could use like a CROSS JOIN here, but I don't know what that is. Maybe I'll spend an extra 30 minutes Googling around and trying to figure out what a CROSS JOIN is. Is that a thing? Is a CROSS JOIN a real thing? I may be making it up. [laughs] A window function, I know that those are real. Maybe I'll learn what a window function is. I think a CROSS JOIN is a real thing. A LEFT OUTER JOIN, that's a cool thing. And so, each time I've had that, expanding my SQL knowledge has been something I've continually felt was a good investment. Or fundamentals of HTTP, that's another one that really has served me well. With Ruby being the primary language that I program in, deeply understanding the language and the fundamentals and the semantics of it, that's another place that has been a good investment. But by contrast, I would say I probably haven't gone as deep on the frameworks that I work with. So Rails is maybe a little bit different just because, like many people, I came to Ruby through Rails, and I've learned a lot of Rails. But like in JavaScript, I've worked with many different JavaScript frameworks. And I have been a little more intentional with how much time I invest into furthering my skills in them because I've seen them change and evolve enough times. And if anything, I'm trying to look for ones that are like, what if it's less about the framework and more about JavaScript and web fundamentals underneath? Thus, I've found myself in Svelte land. But I think it's that choice of trying to anchor to fundamentals wherever possible. And then I would say the other thing that's been really beneficial for me is what can I do that's wildly outside the stuff that I already know? And so probably the most pointed example I have of this is learning Elm. So I previously spent most of my career working in Ruby and JavaScript, so primarily object-oriented languages without a strong type system. And then, I was able to go over and experience this whole different paradigm: way of working, way of structuring programs, feedback loop. There was so much about it that was really, really interesting. And even though I don't get to work in Elm, frankly, as much as I would want at this point or really at all, it informs everything that I do moving forward. And I think that falls out of the fact that it was so different than what I was doing. So if I were to do that again, probably the next type of language I would learn is Lisp because those are like, well, that's a whole other category of thing that I've heard about. People say some fun stuff about them, but I don't really know. So it's fundamentals and weird stuff, is how I would describe it. And by weird, I mean outside of the core base of knowledge that I have. STEPH: I love that categorization, and I'll stick with it, fundamentals and weird stuff, to stretch and grow and find some other areas. I also really like your framing, the reactive versus proactive. I think that's a really nice way to put it because so much of your career is you are just learning what your company needs you to learn, or you're learning what you need to keep progressing and to feel more competent with the types of features or the work that you're handling.
And I think that's why Jen Dary's blog post resonates with me so much because it's probably...up until now, a lot of someone's career, maybe not Joël's particularly, but I know probably for my career, a lot of it has been reactive in terms of what are the things that I need to learn? And so then once you reach that point of like, okay, I feel competent and reasonably good at all the things that I needed to know, where do I want to go next? And rather than focus on necessarily the plans that are laid out in front of me, I can then go wide and think about what are some of the bigger things that I want to tackle? What are the things that are meaningful to me? Because then I can now push forward to this bigger goal versus achieving a particular salary band or title or things like that. But I can focus on something else that I really want to contribute to. And there do seem to be two common paths. So once you reach that level, either you typically go into management, or you become that more like principal and then onward and upward, whatever is after principal. I don't even know what's after that, [laughs] but the titles that come after principal. So there's management, and then I've seen the other very strong contributors, so Aaron Patterson comes to mind. And I feel like those people then typically will migrate to places where they get to contribute to a language or to a framework. And I think it comes down to the idea of impact because both of those provide a greater impact. So if you go into management, you can influence and affect a team of individuals, and you can increase the value created by that team. Then you've likely exceeded the value that you would have created as your own individual contributor. Or, if you contribute to a language or a framework, then your technical decisions impact a larger community. So I think that would be another good thing to reflect on is what type of impact am I looking for? What type of communities do I want to have a positive impact on? And that may spur some inspiration around where you want to go next, the things that you want to focus on. CHRIS: Yeah, I think one of the things we're picking up in that that Joël mentioned in his question is the idea of there is the individual contributor path. But then there's also the management path, which is the typical sort of that's the progression. And I think, for one, naming that the individual contributor path and the idea of going to principal dev and those sorts of outcomes is a fantastic path in and of itself. I think often it's like, well, you know, you go along for a certain amount of time, and then you become a manager. It's like, those are actually distinctly different things. And people have different levels of interest and aptitude in them, and I think recognizing that is critically important. And so, not expecting that management just comes after individual contributor is an important thing. The other thing I'll say is Charity Majors, who we had on the podcast a bit back, has a wonderful blog post about the pendulum swing called The Engineer/Manager Pendulum. And so in it, she talks about folks that have taken an exploration over into manager land and then come back to the individual contributor path or vice versa sort of being able to move between them, treating them as two potentially parallel career tracks but ones that we can move between. 
And her assertion is that often folks that are particularly strong have spent some time in both camps because then you gain this empathy, this understanding of what's the whole picture? How are we doing all of this? How do we think about communication, et cetera? So, again, to name it, like, I think it's totally fine to stay on one of those tracks to really know which of those tracks speaks to you or to even move between them a little bit and to explore it and to find out what's true. So we'll include a link to that in the show notes. And we'll also include a link to the previous episode a while back when we had Charity on. But yeah, those I think are some critical thoughts as well because those are different areas that we can grow as developers and expand on our impact within the team. And so, we want to make sure we have those options on the table as well. STEPH: Absolutely. I love where teams will support individuals to feel comfortable shifting between experiences like that because it does make you a more well-rounded contributor to that product team, not just as an engineer, but then you will also understand what everybody else is working on and be able to have more meaningful conversations with them about the company goals and then the type of work that's being done. So yeah, I love it. If you're in a place that you can maybe fail a little bit, hopefully not in a too painful way, but you can take a risk and say, "Hey, I want to try something and see if I like it," then I think that's wonderful. And take the risk and see how it goes. And just know that you have an exit strategy should you decide that you don't like that work or that type of work isn't for you. But at least now you know a little bit more about yourself. Overall, I want to respond directly to something that Joël highlighted around how do you know what are high-value areas for you to improve? And I think there are two definitions there because you can either let the people around you and your team define that high value for you, and maybe that really resonates with you, and it's something that you enjoy. And so you can go to your manager and say, "Hey, what are some high-value areas where I can make an impact for the team?" Or it could be very personal. And what are the high-value areas for you? Maybe there's a particular industry that you want to work on. Maybe you want to hit the public speaking circuit. And so you define more intrinsically what are those high-value areas for you? And I think that's a good place to start collecting feedback and start looking at what's high value for you personally and then what's high value to the company and see if there's any overlap there. With that, I think we've covered a good variety of things to explore and then highlighted some of the different ways that you and I have also considered this question. I think it's a fabulous question. Also, I think it's one, even if you're not at that senior level, to ask this question. Like, go ahead and start asking it early and often and revisit your answers and see how they change. I think that would be a really powerful habit to establish early in your career and then could help guide you along, and then you can reflect on some of your earlier choices. So, Joël, thank you so much for that question. STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. 
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
339: What About Pictures?

The Bike Shed

Play Episode Listen Later May 24, 2022 45:03


Steph has a baby update and thoughts on movies, plus a question for Chris related to migrating Test::Unit tests to RSpec. Chris watched a video from Google I/O where Chrome devs talked about a new feature called Page Transitions. He's also been working with a tool called Customer.io, an omnichannel communication whiz-bang adventure!

Page Transitions Overview (https://youtu.be/JCJUPJ_zDQ4)
Using yield_self for composable ActiveRecord relations (https://thoughtbot.com/blog/using-yieldself-for-composable-activerecord-relations)
A Case for Query Objects in Rails (https://thoughtbot.com/blog/a-case-for-query-objects-in-rails)
Customer.io (https://customer.io/)
Turning the database inside-out with Apache Samza | Confluent (https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/)
Datomic (https://www.datomic.com/)
About CRDTs • Conflict-free Replicated Data Types (https://crdt.tech/)
Apache Kafka (https://kafka.apache.org/)
Resilient Management | A book for new managers in tech (https://resilient-management.com/)
Mixpanel: Product Analytics for Mobile, Web, & More (https://mixpanel.com/)
Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!

Transcript:

CHRIS: Golden roads are golden. Okay, everybody's got golden roads. You have golden roads, yes? That is what we're -- STEPH: Oh, I have golden roads, yes. [laughter] CHRIS: You might should inform that you've got golden roads, yeah. STEPH: Yeah, I don't know if I say might should as much but might could. I have been called out for that one a lot; I might could do that. CHRIS: [laughs] STEPH: That one just feels more natural to me than normal. Anytime someone calls it out, I'm like, yeah, what about it? [laughter] CHRIS: Do you want to fight? STEPH: Yeah, are we going to fight? CHRIS: I might could fight you. STEPH: I might could. I might should. [laughter] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. I have a couple of fun updates. I have a baby Viccari update, so little baby weighs about two pounds now, two pounds. I'm 25 weeks along. So not that I actually know the exact weight, I'm using all those apps that estimate based on how far along you are, so around two pounds, which is novel. Oh, and then the other thing I'm excited to tell you about...I'm not sure how I should feel that I just got more excited about this other thing. I'm very excited about baby Viccari. But the other thing is there's a new Jurassic Park movie coming out, and I'm very excited. I think it's June 10th is when it comes out. And given how much we have sung that theme song to each other and make references to what a clever girl, I needed to share that with you. Maybe you already know, maybe you're already in the loop, but if you don't, it's coming. CHRIS: Yeah, the internet likes to yell things like that. Have you watched all of the most recent ones? There are like two, and I think this will be the third in the revisiting or whatever, the Jurassic World version or something like that. But have you watched the others? STEPH: I haven't seen all of them. So I've, of course, seen the first one. I saw the one that Chris Pratt was in, and now he's in the latest one. But I think I've missed...maybe there's like two in the middle there. I have not watched those.
CHRIS: There are three in the original trilogy, and then there are three now in the new trilogy, which now it's ending, and they got everybody. STEPH: Oh, I'm behind. CHRIS: They got people from the first one, and they got the people from the second trilogy. They just got everybody, and that's exciting. You know, it's that thing where they tap into nostalgia, and they take advantage of us via it. But I'm fine. I'm here for it. STEPH: I'm here for it, especially for Jurassic Park. But then there's also a new Top Gun movie coming out, which, I'll be honest, I'm totally going to watch. But I really didn't remember the first one. I don't know that I've really ever watched the first Top Gun. So Tim, my partner, and I watched that recently, and it's such a bad movie. I'm going to say it; [laughs] it's a bad movie. CHRIS: I mean, I don't want to disagree, but the volleyball scene, come on, come on, the volleyball scene. [laughter] STEPH: I mean, I totally had a good time watching the movie. But the one part that I finally kept complaining about is because every time they showed the lead female character, I can't think of her character name or the actress's name, but they kept playing that song, Take My Breath Away. And I was like, can we just get past the song? [laughs] Because if you go back and watch that movie, I swear they play it like six different times. It was a lot. It was too much. So I moved it into the bad movie category, but bad movie totally worth watching, whatever category that is. CHRIS: Now I kind of want to revisit it. I feel like the drinking game writes itself. But at a minimum, anytime Take My Breath Away plays, yeah. Well, all right, good to know. [laughs] STEPH: Well, if you do that, let me know how many shots or beers you drink because I think it will be a fair amount. I think it will be more than five. CHRIS: Yeah, it involves a delicate calibration to get that right. I don't think it's the sort of thing you just freehand. It writes itself but also, you want someone who's tried it before you so that you're not like, oh no, it's every time they show a jet. That was too many. You can't drink that much while watching this movie. STEPH: Yeah, that would be death by Top Gun. CHRIS: But not the normal way, the different, indirect death by Top Gun. STEPH: I don't know what the normal way is. [laughs] CHRIS: Like getting shot down by a Top Gun pilot. [laughter] STEPH: Yeah, that makes sense. [laughs] CHRIS: You know, the dogfighting in the plane. STEPH: The actual, yeah, going to war away. Just sitting on your couch and you drink too much poison away, yeah, that one. All right, that got weird. Moving on, [laughs] there's a new Jurassic Park movie. We're going to land on that note. And in the more technical world, I've got a couple of things on my mind. One of them is I have a question for you. I'm very excited to run this by you because I could use a friend in helping me decide what to do. So I am still on that journey where I am migrating Test::Unit tests over to RSpec. And as I'm going through, it's going pretty well, but it's a little complicated because some of the Test::Unit tests have different setup than, say, the RSpec ones do. They might run different scripts beforehand where they're loading data. That's perhaps a different topic, but that's happening. And so that has changed a few things.
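(For readers following along in text, here's a rough sketch of the kind of port being discussed; the class, fixture, and attribute names are hypothetical, not from the actual suite:

  # Test::Unit style: a test method name, often backed by fixtures.
  class OrderTest < Test::Unit::TestCase
    def test_redemption_codes
      order = orders(:with_code)
      assert_equal "WELCOME10", order.redemption_code
    end
  end

  # The RSpec port: asserts become expects, fixtures become factory_bot,
  # and the description has to say what the behavior actually is.
  RSpec.describe Order do
    describe "#redemption_code" do
      it "returns the code the order was placed with" do
        order = create(:order, redemption_code: "WELCOME10")
        expect(order.redemption_code).to eq("WELCOME10")
      end
    end
  end

The mechanical part is the assert-to-expect swap; the hard part, as Steph describes, is everything around it.)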
But then overall, I've just been really just porting everything over, like, hey, if it exists in the Test::Unit, let's just bring it to RSpec, and then I'm going to change these asserts to expects and really not make any changes from there. But as I'm doing that, I'm seeing areas that I want to improve and things that I want to clear up, even if it's just extracting a variable name. Or, as I'm moving some of these over in Test::Unit, it's not clear to me exactly what the test is about. Like, it looks more like a method name in the way that the test is being described, but the actual behavior isn't clear to me as if I were writing this in RSpec, I think it would have more of a clear description. Maybe that's not specific to the actual testing framework. That might just be how these tests are set up. But I'm at that point where I'm questioning should I keep going in terms of where I am just copying everything over from Test::Unit and then moving it over to RSpec? Because ultimately, that is the goal, to migrate over. Or should I also include some time to then go back and clean up and try to add some clarity, maybe extract some variable names, see if I can reduce some lets, maybe even reduce some of the test helpers that I'm bringing over? How much cleanup should be involved, zero, lots? I don't know. I don't know what that...[laughs] I'm sure there's a middle ground in there somewhere. But I'm having trouble discerning for myself what's the right amount because this feels like one of those areas where if I don't do any cleanup, I'm not coming back to it, like, that's just the truth. So it's either now, or I have no idea when and maybe never. CHRIS: I'll be honest, the first thing that came to mind in this most recent time that you mentioned this is, did we consider just deleting these tests entirely? Is that on...like, there are very few of them, right? Like, are they even providing enough value? So that was question one, which let me pause to see what your thoughts there were. [chuckles] STEPH: I don't know if we specifically talked about that on the mic, but yes, that has been considered. And the team that owns those tests has said, "No, please don't delete them. We do get value from them." So we can port them over to RSpec, but we don't have time to port them over to RSpec. So we just need to keep letting them go on. But yet, not porting them conflicts with my goal of then trying to speed up CI. And so it'd be nice to collapse these Test::Unit tests over to RSpec because then that would bring our CI build down by several meaningful minutes. And also, it would reduce some of the complexity in the CI setup. CHRIS: Gotcha. Okay, so now, having set that aside, I always ask the first question of like, can you just put Derek Prior's phone number on the webpage and call it an app? Is that the MVP of this app? No? Okay, all right, we have to build more. But yeah, I think to answer it and in a general way of trying to answer a broader set of questions here... I think this falls into a category of like if you find yourself having to move around some code, if that code is just comfortably running and the main thing you need to do is just to get it ported over to RSpec, I would probably do as little other work as possible. 
With the one consideration that if you find yourself needing to deeply load up the context of these tests like actually understand them in order to do the porting, then I would probably take advantage of that context because it's hard to get your head into a given piece of code, test or otherwise. And so if you're in there and you're like, well, now that I'm here, I can definitely see that we could rearrange some stuff and just definitively make it better, if you get to that place, I would consider it. But if this ends up being mostly a pretty rote transformation like you said, asserts become expects, and lets get switched around, you know, that sort of stuff, if it's a very mechanical process of getting done, I would probably say very minimal. But again, if there is that, like, you know what? I had to understand the test in order to port them anyway, so while I'm here, let me take advantage of that, that's probably the thing that I would consider. But if not that, then I would say even though it's messy and whatnot and your inclination would be to clean it, I would say leave it roughly as is. That's my guess or how I would approach it. STEPH: Yeah, I love that. I love how you pointed out, like, did you build up the context? Because you're right, in a lot of these test cases, I'm not, or I'm trying really hard to not build up context. I'm trying very hard to just move them over and, if I have to, mainly to find test descriptions. That's the main area I'm struggling to...how can I more explicitly state what this test does so the next person reading this will have more comprehension than I do? But otherwise, I'm trying hard to not have any real context around it. And that's such a good point because that's often...when someone else is in the middle of something, and they're deciding whether to include that cleanup or refactor or improvement, one of my suggestions is like, hey, we've got the context now. Let's go with it. But if you've built up very little context, then that's not a really good catalyst or reason to then dig in deeper and apply that cleanup. That's super helpful. Thank you. That will help reinforce what I'm going to do, which is exactly: let's migrate to RSpec and not worry about cleanup, which feels terrible; I'm just going to say that into the world. But it also feels like the right thing to do. CHRIS: Well, I'm happy to have helped. And I share the like, and it feels terrible. I want to do the right thing, but sometimes you got to pick a battle sort of thing. STEPH: Cool. Well, that's a huge help to me. What's going on in your world? CHRIS: What's going on in my world? I watched a great video the other day from the Google I/O. I think it's an event; I'm not actually sure, conference or something like that. But it was some Google Chrome developers talking about a new feature that's coming to the platform called Page Transitions. And I've kept an eye on this for a while, but it seems like it's more real. Like, I think they put out an RFC or an initial sort of set of ideas a while back. And the web community was like, "Oh, that's not going to work out so well." So they went back to the drawing board, revisited. They've actually implemented in Chrome Canary a version of the API. And then, in the video that I watched, which we'll include a show notes link to, they demoed the functionality of the Page Transitions API and showed what you can do. And it's super cool.
It allows for the sort of animations that you see in a lot of native mobile apps where you're looking at a ListView, you click on one of the items, and it grows to fill the whole screen. And now you're on the detail screen for that item that you were looking at. But there was this very continuous animated transition that allows you to keep context in your head and all of those sorts of nice things. And this just really helps to bridge that gap between, like, the web often lags behind the native mobile platforms in terms of the experiences that we can build. So it was really interesting to see what they've been able to pull off. The demo is a pretty short video, but it shows a couple of different variations of what you can build with it. And I was like, yeah, these look like cool native app transitions, really nifty. One thing that's very interesting is the actual implementation of this. So it's like you have one version of the page, and then you transition to a new version of the page, and in doing so, you want to animate between them. And the way that they do it is they have the first version of the page. They take a screenshot of it like the browser engine takes a screenshot of it. And then they put that picture on top of the actual browser page. Then they do the same thing with the next version of the page that they're going to transition to. And then they crossfade, like, change the opacity and size and whatnot between the two different images, and then you're there. And in the back of my mind, I'm like, I'm sorry, what now? You did which? I'm like, is this the genius solution that actually makes this work and is performant? And I wonder if there are trade-offs. Like, do you lose interactivity between those because you've got some images that are just on the screen? And what is that like? But as they were going through it, I was just like, wait, I'm sorry, you did what? This is either the best idea I've ever heard, or I'm not so sure about this. STEPH: That's fascinating. You had me with the first part in terms of they take a screenshot of the page that you're leaving. I'm like, yeah, that's a great idea. And then talking about taking a picture of the other page because then you have to load it but not show it to the user that it's loaded. And then take a picture of it, and then show them the picture of the loaded page. But then actually, like you said, then crossfade and then bring in the real functionality. I am...what am I? [laughter] CHRIS: What am I actually? STEPH: [laughs] What am I? I'm shocked. I'm surprised that that is so performant. Because yeah, I also wouldn't have thought of that, or I would have immediately have thought like, there's no way that's going to be performant enough. But that's fascinating. CHRIS: For me, performance seems more manageable, but it's the like, what are you trading off for that? Because that sounds like a hack. That sounds like the sort of thing I would recommend if we need to get an MVP out next week. And I'm like, what if we just tried this? Listen, it's got some trade-offs. So I'm really interested to see are those trade-offs present? Because it's the browser engine. It's, you know, the low-level platform that's actually managing this. And there are some nice hooks that allow you to control it. And at a CSS level, you can manage it and use keyframe animations to control the transition more directly. There's a JavaScript API to instrument the sequencing of things. And so it's giving you the right primitives and the right hooks. 
And the fact that the implementation happens to use pictures or screenshots, to use a slightly different word, it's like, okey dokey, that's what we're doing. Sounds fun. So I'm super interested because the functionality is deeply, deeply interesting to me. Svelte actually has a version of this, the crossfade utility, but you have to still really think about how do you sequence between the two pages and how do you do the connective tissue there? And then Svelte will manage it for you if you do all the right stuff. But the wiring up is somewhat complicated. So having this in the browser engine is really interesting to me. But yeah, pictures. STEPH: This is one of those ideas where I can't decide if this was someone who is very new to the team and new to the idea and was like, "Have we considered screenshots? Have we considered pictures?" Or if this is like the uber senior person on the team that was like, "Yeah, this will totally work with screenshots." I can't decide where in that range this idea falls, which I think makes me love it even more. Because it's very straightforward of like, hey, what if we just tried this? And it's working, so cool, cool, cool. CHRIS: There's a fantastic meme that's been making the rounds where it's a bell curve, and it's like, early in your career, middle of your career, late in your career. And so early in your career, you're like, everything in one file, all lines of code that's just where they go. And then in the middle of your career, you're like, no, no, no, we need different concerns, and files, and organizational structures. And then end of your career...and this was coming up in reference to the TypeScript team seems to have just thrown everything into one file. And it's the thing that they've migrated to over time. And so they have this many, many line file that is basically the TypeScript engine all in one file. And so it was a joke of like, they definitely know what they're doing with programming. They're not just starting last week sort of thing. And so it's this funny arc that certain things can go through. So I think that's an excellent summary there [laughs] of like, I think it was folks who have thought about this really hard. But I kind of hope it was someone who was just like, "I'm new here. But have we thought about pictures? What about pictures?" I also am a little worried that I just deeply misunderstood [laughs] the representation but glossed over it in the video, and I'm like, that sounds interesting. So hopefully, I'm not just wildly off base here. [laughs] But nonetheless, the functionality looks very interesting. STEPH: That would be a hilarious tweet. You know, I've been waiting for that moment where I've said something that I understood into the mic and someone on Twitter just being like, well, good try, but... [laughs] CHRIS: We had a couple of minutes where we tried to figure out what the opposite of ranting was, and we came up with pranting and made up a word instead of going with praising or raving. No, that's what it is, raving. [laughs] STEPH: No, raving. I will never forget now, raving. [laughs] CHRIS: So, I mean, we've done this before. STEPH: That's true. Although they were nice, I don't think they tweeted. I think they sent in an email. They were like, "Hey, friends." [laughter] CHRIS: Actually, we got a handful of emails on that. [laughter] STEPH: Did you know the English language? CHRIS: Thank you, kind Bikeshed audience, for not shaming us in public. I mean, feel free if you feel like it. 
[laughs] But one other thing that came up in this video, though, is the speaker was describing single-page apps are very common, and you want to have animated transitions and this and that. And I was like, single-page app, okay, fine. I don't like the terminology but whatever. I would like us to call it the client-side app or client-side routing or something else. But the fact that it's a single page is just a technical consideration that no user would call it that. Users are like; I go to the web app. I like that it has URLs. Those seem different to me. Anyway, this is my hill. I'm going to die on it. But then the speaker in the video, in contrast to single-page app referenced multi-page app, and I was like, oh, come on, come on. I get it. Like, yes, there are just balls of JavaScript that you can download on the internet and have a dynamic graphics editor. But I think almost all good things on the web should have URLs, and that's what I would call the multiple pages. But again, that's just me griping about some stuff. And to name it, I don't think I'm just griping for griping sake. Like, again, I like to think about things from the user perspective, and the URL being so important. And having worked with plenty of apps that are implemented in JavaScript and don't take the URL or the idea that we can have different routable resources seriously and everything is just one URL, that's a failure mode in my mind. We missed an opportunity here. So I think I'm saying a useful thing here and not just complaining on the internet. But with that, I will stop complaining on the internet and send it back over to you. What else is new in your world, Steph? STEPH: I do remember the first time that you griped about it, and you were griping about URLs. And there was a part of me that was like, what is he talking about? [laughter] And then over time, I was like, oh, I get it now as I started actually working more in that world. But it took me a little bit to really appreciate that gripe and where you're coming from. And I agree; I think you're coming from a reasonable place, not that I'm biased at all as your co-host, but you know. CHRIS: I really like the honest summary that you're giving of, like, honestly, the first time you said this, I let you go for a while, but I did not know what you were talking about. [laughs] And I was like, okay, good data point. I'm going to store that one away and think about it a bunch. But that's fine. I'm glad you're now hanging out with me still. [laughter] STEPH: Don't do that. Don't think about it a bunch. [laughs] Let's see, oh, something else that's going on in my world. I had a really fun pairing session with another thoughtboter where we were digging into query objects and essentially extracting some logic out of an ActiveRecord model and then giving that behavior its own space and elevated namespace in a query object. And one of the questions or one of the things that came up that we needed to incorporate was optional filters. So say if you are searching for a pizza place that's nearby and you provide a city, but you don't provide what's the optional zip code, then we want to make sure that we don't apply the zip code in the where clause because then you would return all the pizza places that have a nil zip code, and that's just not what you want. So we need to respect the fact that not all the filters need to be applied. And there are a couple of ways to go about it. And it was a fun journey to see the different ways that we could structure it. 
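(For concreteness, here's a minimal sketch of that optional-filter problem and of the shapes the approaches discussed next take; PizzaPlace, the filter names, and the params are hypothetical stand-ins:

  # The naive version: conditionally tack on each optional filter.
  relation = PizzaPlace.where(city: city)
  relation = relation.where(zip_code: zip_code) if zip_code.present?

  # The then/yield_self version: each step returns the relation,
  # changed or unchanged, so the steps chain.
  PizzaPlace.all
    .then { |rel| zip_code.present? ? rel.where(zip_code: zip_code) : rel }

  # The extending version: optional filters as methods on a module,
  # each returning self (the relation) when its argument is blank.
  module PizzaPlaceFilters
    def by_city(city)
      city.present? ? where(city: city) : self
    end

    def by_zip_code(zip_code)
      zip_code.present? ? where(zip_code: zip_code) : self
    end
  end

  PizzaPlace.all
    .extending(PizzaPlaceFilters)
    .by_city(params[:city])
    .by_zip_code(params[:zip_code])

Each filter hands back the relation whether or not it applied anything, which is what keeps the chain safe when a parameter is missing.)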
So one of the really good starting points is captured in a blog post by Derek Prior, which we'll include a link to in the show notes, and it's Using yield_self for composable ActiveRecord relations. But essentially, it starts out with an example where it shows that you're assigning a variable to the result of an if statement. So it's like, hey, if the zip code is present, then let's filter by zip code; if not, then just give us back the original relation. And then you can just keep building on it from there. And then there's a really nice implementation that Derek built on that then uses yield_self to pass the relation through, which then provides really nice readability as you're stepping through each filter and which ones should and shouldn't be applied. And now there's another blog post, and this one's written by Thiago Silva, A Case for Query Objects in Rails. And this one highlighted an approach that I haven't used before. And I initially had some mixed feelings about it. But this approach uses the extending method, which is one of ActiveRecord's query methods. And it's used to extend the scope with additional methods. You can either do this by providing the name of a module or by providing a block. It's only going to apply to that instance or to that specific scope when you're using it. So it's not going to be like you're running an include or something like that where all instances are going to now have access to these methods. So by using that method, extending, then you can create a module that says, "Hey, I want to create this by zip code filter that will then check if we have a zip code, let's apply it, if not, return the relation." And it also creates a really pretty chaining experience of, like, here's my original class name. Let's extend with these specific scopes, and then we can say by zip code, by pizza topping, whatever else it is that we're looking to filter by. And I was initially...I saw the extending, and it made me nervous because I was like, oh, what all does this apply to? And is it going to impact anything outside of this class? But the more I've looked at it, the more I really like it. So I think you've seen this blog post too. And I'm curious, what are your thoughts about this? CHRIS: I did see this blog post come through. I follow that thoughtbot blog real close because it turns out some of the best writing on the internet is on there. But I saw this...also, as an aside, I like that we've got two Derek Prior references in one episode. Let's see if we can go for three before the end. But one thing that did stand out to me in it is I have historically avoided scopes, the ActiveRecord macro thing. It's a class method, but like, it's magic. It does magic. And a while ago, class methods and scopes became roughly equivalent, not exactly equivalent, but close enough. And for me, I want to use methods because I know stuff about methods. I know about default arguments. And I know about all of these different subtleties because they're just methods at the end of the day, whereas scopes are special; they have certain behavior. And I've never really known of the behavior beyond the fact that they get implemented in a different way. And so I was never really sold on them. And they're different enough from methods, and I know methods well. So I'm like, let's use the normal stuff where we can. The one thing that's really interesting, though, is the returning nil that was mentioned in this blog post.
If you return nil in a scope, it will handle that for you. Whereas all of my query objects have a like, well, if this thing applies, then scope dot or relation dot where blah, blah, blah, else return relation unchanged. And the fact that that natively exists within scope is interesting enough to make me reconsider my stance on scopes versus class methods. I think I'm still doing class method. But it is an interesting consideration that I was unaware of before. STEPH: Yeah, it's an interesting point. I hadn't really considered as much whether I'm defining a class-level method versus a scope in this particular case. And I also didn't realize that scopes handle that nil case for you. That was one of the other things that I learned by reading through this blog post. I was like, oh, that is a nicety. Like, that is something that I get for free. So I agree. I think this is one of those things that I like enough that I'd really like to try it out more and then see how it goes and start to incorporate it into my process. Because this feels like one of those common areas of where I get to it, and I'm like, how do I do this again? And yield_self was just complicated enough in terms of then using the fancy method method to then be able to call the method that I want that I was like, I don't remember how to do this. I had to look it up each time. But including this module with extending and then being able to use scopes that way feels like something that would be intuitive for me that then I could just pick up and run with each time. CHRIS: If it helps, you can use then instead of yield_self because we did upgrade our Ruby a while back to have that change. But I don't think that actually solves the thing that you're describing. I'd have liked the ampersand method and then simple method name magic incantation that is part of the thing that Derek wrote up. I do use it when I write query objects, but I have to think about it or look it up each time and be like, how do I do that? All right, that's how I do that. STEPH: Yeah, that's one of the things that I really appreciate is how often folks will go back and update blog posts, or they will add an addition to them to say, "Hey, there's something new that came out that then is still relevant to this topic." So then you can read through it and see the latest and the greatest. It's a really nice touch to a number of our blog posts. But yeah, that's what was on my mind regarding query objects. What else is going on in your world? CHRIS: I have this growing feeling that I don't quite know what to do with. I think I've talked about it across some of our conversations in the world of observability. But broadly, I'm starting to like...I feel like my brain has shifted, and I now see the world slightly differently, and I can't go back. But I also don't know how to stick the landing and complete this transition in my brain. So it's basically everything's an event stream; this feels true. That's life. The arrow of time goes in one direction as far as I understand it. And I'm now starting to see it manifest in the code that we're writing. Like, we have code to log things, and we have places where we want to log more intentionally. Then occasionally, we send stuff off to Sentry. And Sentry tells us when there are errors, that's great. But in a lot of places, we have both. Like, we will warn about something happening, and we'll send that to the logs because we want to have that in the logs, which is basically the whole history of what's happened. 
But we also have it in Sentry, but Sentry's version is just this expanded version that has a bunch more data about the user, and things, and the browser that they were in. But they're two variations on the same event. And then similarly, analytics is this, like, third leg of well, this thing happened, and we want to know about it in the context. And what's been really interesting is we're working with a tool called Customer.io, which is an omnichannel communication whiz-bang adventure. For us, it does email, SMS, and push notifications. And it's integrated into our Segment pipeline, so events flow in, events and users essentially. So we have those two different primitives within it. And then within it, we can say like, when a user does X, then send them an email with this copy. As an aside, Customer.io is a fantastic platform. I'm super-duper impressed. We went through a search for a tool like it, and we ended up on a lot of sales demos with folks that were like, hey, so yeah, starting point is $25,000 per year. And, you know, we can talk about it, but it's only going to go up from there when we talk about it, just to be clear. And it's a year minimum contract, and you're going to love it. And we're like, you do have impressive platforms, but okay, that's a bunch. And then, we found Customer.io, and it's month-by-month pricing. And it had a surprisingly complete feature set. So overall, I've been super impressed with Customer.io and everything that they've afforded. But now that I'm seeing it, I kind of want to move everything into that world where like, Customer.io allows non-engineer team members to interact with that event stream and then make things happen. And that's what we're doing all the time. But I'm at that point where I think I see the thing that I want, but I have no idea how to get there. And it might not even be tractable either. There's the wonderful Turning the Database Inside Out talk, which describes how everything is an event stream. And what if we actually were to structure things that way and do materialized views on top of it? And the actual UI that you're looking at is a materialized view on top of the database, which is a materialized view on top of that event stream. So I'm mostly in this, like, I want to figure this out. I want to start to unify all this stuff. And analytics pipes to one tool that gets a version of this event stream, and our logs are just another, and our error system is another variation on it. But they're all sort of sampling from that one event stream. But I have no idea how to do that. And then when you have a database, then you're like, well, that's also just a static representation of a point in time, which is the opposite of an event stream. So what do you do there? So there are folks out there that are doing good thinking on this. So I'm going to keep my ear to the ground and try and see what everybody's thinking on this front. But I can't shake the feeling that there's something here that I'm missing that I want to stitch together. STEPH: I'm intrigued on how to take this further because everything you're saying resonates in terms of having these event streams that you're working with. But yet, I can't mentally replace that with the existing model that I have in my mind of where there are still certain ideas of records or things that exist in the world. And I want to encapsulate the data and store that in the database. And maybe I look it up based on when it happens; maybe I don't.
Maybe I'm looking it up by something completely different. So yeah, I'm also intrigued by your thoughts, but I'm also not sure where to take it. Who are some of the folks that are doing some of the thinking in this area that you're interested in, or where might you look next? CHRIS: There's the Kafka world of we have an event log, and then we're processing on top of that, and we're building stream processing engines as the core. They seem to be closest to the Turning the Database Inside Out talk that I was thinking of or that I mentioned earlier. There's also the idea of CRDTs, which are Conflict-free Replicated Data Types, which are really interesting. I see them used particularly in real-time applications. So it's this other tool, but they are basically event logs. And then you can communicate them well and have two different people working on something collaboratively. And these event logs then have a natural way to come together and produce a common version of the document on either end. That's at least my loose understanding of it, but it seems like a variation on this theme. So I've been looking at that a little bit. But again, I can't see how to map that to like, but I know how to make a Rails app with a Postgres database. And I think I'm reasonably capable at it, or at least I've been able to produce things that are useful to humans using it. And so it feels like there is this pretty large gap. Because what makes sense in my head is if you follow this all the way, it fundamentally re-architects everything. And so that's A, scary, and B, I have no idea how to get there, but I am intrigued. Like, I feel like there's something there. There's also Datomic, the other thing that comes to mind, which is a database engine in the Clojure world that stores the versions of things over time; that idea of the user is active. It's like, well, yeah, but when were they not? That's an event. That transition is an event that Postgres does not maintain at this point. And so, all I know is that the user is active. Maybe I store a timestamp because I'm thinking proactively about this. But Datomic is like no, no, fundamentally, as a primitive in this database, that's how we organize and think about stuff. And I know I've talked about Datomic on here before. So I've circled around these ideas before. And I'm pretty sure I'm just going to spend a couple of minutes circling and then stop because I have no idea how to connect the dots. [laughs] But I want to figure this out. STEPH: I have not worked with Kafka. But one of the main benefits I understand with Kafka is that by storing everything as a stream, you're never going to lose, like, a message. So if you are sending a message to another system and then that message gets lost in transit, you don't actually know if it got acknowledged or what happened with it, and replaying is really hard. Where do you pick up again? Whereas using something like Kafka, you know exactly what you sent last, and then you're not going to have that uncertainty as to what messages went through and which ones didn't. And then the ability to replay is so important. I'm curious, as you continue to explore this, do you have a particular pain point in mind that you'd like to see improved? Or is it more just like, this seems like a really cool, novel idea; how can I incorporate more of this into my world? CHRIS: I think it's the latter. But I think the thing that I keep feeling is we keep going back and re-instrumenting versions of this.
We're adding more logging or more analytics events over the wire or other things. But then, as I send these analytics events over the wire, we have Mixpanel downstream as an analytics visualization and workflow tool or Customer.io. At this point, those applications, I think, have a richer understanding of our users than our core Rails app. And something about that feels wrong to me. We're also streaming everything that goes through Segment to S3 because I had a realization of this a while back. I'm like, that event stream is very interesting. I don't want to lose it. I'm going to put it somewhere that I get to keep it. So even if we move off of either Mixpanel or Customer.io or any of those other platforms, we still have our data. That's our data, and we're going to hold on to it. But interestingly, Customer.io, when it sends a message, will push an event back into Segment. So it's like doubly connected to Segment, which is managing this sort of event bus of data. And so Mixpanel then gets an even richer set there, and the Rails app is like, I'm cool. I'm still hanging out, and I'm doing stuff; it's fine. But the fact that the Rails app is fundamentally less aware of the things that have happened is really interesting to me. And I am not running into issues with it, but I do feel odd about it. STEPH: That touched on a theme that is interesting to me, the idea that I hadn't really considered it in those terms. But yeah, our application provides the tool that people can interact with. But then we outsource the behavior analysis of our users and understanding what that flow is and what they're going through. I hadn't really thought about those concrete terms and where someone else owns the behavior of our users, but yet we own all the interaction points. And then we really need both to then make decisions about features and things that we're building next. But that also feels like building a whole new product, that behavior analysis portion of it, so it's interesting. My consulting brain is going wild at the moment between like, yeah, it would be great to own that. But then, on the other hand, if there's this other service that has already built that product and they're doing it super well, then let's keep letting them manage that portion of our business until we really need to bring it in-house. Because then we need to incorporate it more into our application itself so then we can surface things to the user. That's the part where then I get really interested, or that's the pain point that I could see: if we wanted more of that behavior analysis, and then we want to surface that in the app, then always having to go to a third party would start to feel tedious or could feel more brittle. CHRIS: Yeah, I'm definitely 100% on not rebuilding Mixpanel in our app and being okay with the fact that we're sending that. Again, the thing that I did to make myself feel better about this is stream the data to S3 so that I have a version of it. And if we want to rebuild the data warehouse down the road to build some sort of machine learning data pipeline thing, we've got some raw data to work with. But I'm noticing lots of places where we're transforming a side effect, a behavior that we have in the system, into dispatching an event. And so right now, we have a bunch of stuff that we pipe over to Slack to inform our admin team, hey, this thing happened. You should probably intervene. But I'm looking at that, and we're doing it directly because we can control the message in Slack a little bit better. 
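To make the two patterns being weighed here a bit more concrete, a minimal Ruby sketch follows. Everything in it is invented for illustration (SlackClient, EventBus, the event name); it is not Sagewell's actual code, and real versions would wrap a Slack API client and a Segment client rather than puts.

# Hypothetical sketch (all names invented): two ways for the app to
# tell the admin team "this thing happened."

# Stand-ins for a real Slack client and a real event/analytics client.
module SlackClient
  def self.post_message(channel:, text:)
    puts "[slack #{channel}] #{text}"
  end
end

module EventBus
  def self.publish(name, properties = {})
    puts "[event] #{name} #{properties.inspect}"
  end
end

# Approach 1: post to Slack directly. Engineering controls the copy,
# but every wording or routing change is a code change.
def notify_admins_directly(user_id)
  SlackClient.post_message(
    channel: "#admin-alerts",
    text: "User #{user_id} probably needs an intervention"
  )
end

# Approach 2: publish a generic event and let a downstream tool
# (Customer.io, a Slack integration, etc.) decide whether and how to
# alert, so the admin team can reconfigure alerts without code changes.
def notify_admins_via_event(user_id)
  EventBus.publish("user.needs_intervention", user_id: user_id)
end

notify_admins_directly(42)
notify_admins_via_event(42)

The trade-off in the conversation maps onto these two helpers: the first gives engineering full control of the message, while the second turns the notification into data that non-engineers can route and reword downstream.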
But I had this thought in the back of my mind; it's like, could we just send that as an event, and then some downstream tool can configure messages and alerts into Slack? Because then the admin team could actually instrument this themselves. And they could be like; we are no longer interested in this event. Users seem fine on that front. But we do care about this new event. And all we need to do as the engineering team is properly instrument all of that event stream tapping. Every event just needs to get piped over. And then lots of powerful tools downstream from that that can allow different consumers of that data to do things, and broadly, that dispatch events, consume them on the other side, do fun stuff. That's the story. That's the dream. But I don't know; I can't connect all the dots. It's probably going to take me a couple of weeks to connect all these dots, or maybe years, or maybe my entire career, something like that. But, I don't know, I'm going to keep trying. STEPH: This feels like a fun startup narrative, though, where you start by building the thing that people can interact with. As more people start to interact with it, how do we start giving more of our team the ability to then manage the product that then all of these users are interacting with? And then that's the part that you start optimizing for. So there are always different interesting bits when you talk about the different stages of Sagewell, and like, what's the thing you're optimizing for? And I'm sure it's still heavily users. But now there's also this addition of we are also optimizing for our team to now manage the product. CHRIS: Yes, you're 100%. You're spot on there. We have definitely joked internally about spinning out a small company to build this analytics alerting tool [laughs] but obviously not going to do that because that's a distraction. And it is interesting, like, we want to build for the users the best thing that we can and where the admin team fits within that. To me, they're very much customers of engineering. Our job is to build the thing for the users but also, to be honest, we have to build a thing for the admins to support the users and exactly where that falls. Like, you and I have talked a handful of times about maybe the admin isn't as polished in design as other things. But it's definitely tested because that's a critical part of how this application works. Maybe not directly for a user but one step removed for a user, so it matters. Absolutely we're writing tests to cover that behavior. And so yeah, those trade-offs are always interesting to me and exploring that space. But 100%: our admin team are core customers of the work that we're doing in engineering. And we try and stay very close and very friendly with them. STEPH: Yeah, I really appreciate how you're framing that. And I very much agree and believe with you that our admin users are incredibly important. CHRIS: Well, thank you. Yeah, we're trying over here. But yeah, I think I can wrap up that segment of me rambling about ideas that are half-formed in my mind but hopefully are directionally important. Anyway, what else is up with you? STEPH: So, not that long ago, I asked you a question around how the heck to manage themes that I have going on. So we've talked about lots of fun productivity things around managing to-dos, and emails, and all that stuff. And my latest one is thinking about, like, I have a theme that I want to focus on, maybe it's this week, maybe it's for a couple of months. 
And how do I capture that and surface it to myself and see that I'm making progress on that? And I don't have an answer to that. But I do have a theme that I wanted to share. And the one that I'm currently focused on is building up management skills and team lead skills. That is something that I'm focused on at the moment and partially because I was inspired to read the book Resilient Management written by Lara Hogan. And so I think that is what has really set the idea. But as I picked up the book, I was like, this is a really great book, and I'd really like to share some of this. And then so that grew into like, well, let's just go ahead and make this a theme where I'm learning this, and I'm sharing this with everyone else. So along that note, I figured I would share that here. So we use Basecamp at thoughtbot. And so, I've been sharing some Basecamp posts around what I'm learning in each chapter. But to bring some of that knowledge here as well, some of the cool stuff that I have learned so far...and this is just still very early on in the book. There are a couple of different topics that Lara covers in the first chapter, and one of them is humans' core needs at work. And then there's also the concept of meeting your team, some really good questions that you can ask during your first one-on-one to get to know the person that then you're going to be managing. The part that really resonated with me and something that I would like to then coach myself to try is helping the team get to know you. So as a manager, not only are you going out of your way to really get to know that person, but how are you then helping them get to know you as well? Because then that's really going to help set that relationship, because then they know what kinds of things you're optimizing for. Maybe you're optimizing for a deadline, or for business goals, or maybe it's for transparency, or maybe it would be helpful to communicate to someone that you're managing to say, "Hey, I'm trying some new management techniques. Let me know how this goes." [chuckles] So there's a healthier relationship of not only are you learning them, but they're also learning you. So some of the questions that Lara includes as examples of something that you can share with your team is what do you optimize for in your role? So is it that you're optimizing for specific financial goals or building up teammates? Or maybe it's collaboration, so you're really looking for opportunities for people to pair together. What do you want your teammates to lean on you for? I really liked that question. Like, what are some of the areas that bring you joy or something that you feel really skilled in that then you want people to come to you for? Because that's something that before I was a manager...but it's just as you are growing as a developer, that's such a great question of like, what do you want to be known for? What do you want to be that thing that when people think of, they're like, oh, you should go see Chris about this, or you should go see Steph about this? And two other good questions include what are your work styles and preferences? And what management skills are you currently working on learning or improving? So I really like this concept of how can I share more of myself? 
And the great thing about this book that I'm learning too is while it is geared towards people that are managers, I think it's so wonderful for people who are non-managers or aspiring managers to read this as well because then it can help you manage whoever's managing you. So then that way, you can have some upward management. So we had recent conversations around when you are new to a team and getting used to a manager, or maybe if you're a junior, you have to take a lot of self-advocacy into your role to make sure things are going well. And I think this book does a really good job for people that are looking to not only manage others but also manage themselves and manage upward. So that's some of the journeys from the first chapter. I'll keep you posted on the other chapters as I'm learning more. And yeah, if anybody hasn't read this book or if you're interested, I highly recommend it. I'll make sure to include a link in the show notes. CHRIS: That was just the first chapter? STEPH: Yeah, that was just the first chapter. CHRIS: My goodness. STEPH: And I shortened it drastically. [laughs] CHRIS: Okay. All right, off to the races. But I think the summary that you gave there, particularly these are true when you're managing folks but also to manage yourself and to manage up, like, this is relevant to everyone in some capacity in some shape or form. And so that feels very true. STEPH: I will include one more fun aspect from the book, and that's circling back to the humans' core needs at work. And she references Paloma Medina, a coach and trainer who came up with this acronym. The acronym is BICEPS, and it stands for belonging, improvement, choice, equality, predictability, and significance. And then she details how each of those is important to us in our work and how, when one of those feels threatened, that can lead to some problems at work or even in our personal lives. But the fun example that she gave was not a huge restructuring of the organization or something like that being the most concerning in terms of how many of these needs are going to be threatened or become vulnerable. But changing where someone sits at work can actually hit all of these, and it can threaten each of these needs. And it made me think, oh, cool, plus-one for being remote because we can sit wherever we want. [laughs] But that was a really fun example of how someone's needs at work, I mean, just moving their desk, which resonates, too, because I've heard that from other people. Some of the friends that I have that work in more of a People Ops role talk about when they had to shift people around how that caused so much grief. And they were just shocked that it caused so much grief. And this explains why that can be such a big deal. So that was a fun example to read through. CHRIS: I'm now having flashbacks to times where I was like, oh, I love my spot in the office. I love the people I'm sitting with. And then there was that day, and I had to move. Yeah, no, those were the days. This is true. STEPH: It triggered all the core BICEPS, all the things that you need at work. It threatened all of them. Or it could have improved them; who knows? CHRIS: There were definitely those as well, yeah. Although I think it's harder to know that it's going to be great on the way in, so it's mostly negative. I think it has that weird bias because you're like, this was a thing, I knew it. I at least understood it. 
And then you're in a new space, and you're like, I don't know, is this going to be terrible or great? I mean, hopefully, it's only great because you work with great people, and it's a great office. [laughs] But, like, the unknown, you're moving into the unknown, and so I think it has an inherent at least questioning bias to it. STEPH: Agreed. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
338: Meticulously Wrong

The Bike Shed

Play Episode Listen Later May 17, 2022 45:52


Chris switched from Trello over to Linear for product management and talks about prioritizing backlogs. Steph shares and discusses a tweet from Curtis Einsmann that super resonated with the work she's doing right now: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one."

This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today.

Linear (https://linear.app/)
Curtis Einsmann Tweet (https://twitter.com/curtiseinsmann/status/1521451508943843329)
Louie Bacaj Tweet (https://twitter.com/LBacaj/status/1478241322637033474?s=20)
Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!

Transcript:

AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.

CHRIS: Good morning, and welcome to Tech Talk with Steph and Chris. Today at the top of the hour, it's tech traffic hits. STEPH: Ooh, tech traffic. [laughs] I like that statement. CHRIS: Yeah. The Git lanes are clogged up with...I don't know. I got nothing. STEPH: [laughs] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What's new in my world? Actually, I have a specific new thing that I can share, which is, as of the past week, I would say, we switched from Trello over to Linear for product management. It's been great. It was a super straightforward transfer. They actually had an importer. We lost some of the comment threads on the Trello cards. But that was easy enough to handle because each Linear ticket has a link back to Trello. 
So it's easy enough to keep the continuity. But yeah, we're in a whole new world, a system actually built for maintaining a product backlog, and, man, it shows. Trello was a bunch of lists and cards and stuff that you could link between, which was cool. But Linear is just much more purpose-built and already very, very nice. And we're very happy with the switch. STEPH: I feel like you came in real casual with that news, but that is big news, that you did a switch. [laughter] CHRIS: How are you going to bury the lead like that? You switched project management...[laughter] I actually didn't think it was...I'm excited about it but low-key excited, which is weird because I do like productivity and task management software. So you would think I would be really excited about this. But I've also tried enough of them historically to know that that's never going to be the thing that actually makes or breaks your team's productivity. It can make things worse, but it can't make you great. That's the thing that I believe. And so it's a wonderful piece of software. I'm very excited about it but -- STEPH: Ooh, I like that. It can make you worse, but it doesn't make you great. That's so true, yeah, where it causes pain. Well, and it does make sense. You've been complaining a bit about the whole login with Trello and how that's been frustrating. But I haven't even heard of Linear. That's just...that's, I mean, you're just doing what you do where you bring that new-new. I haven't heard of Linear before. CHRIS: I try to live on the cutting edge. Actually, I deeply try to not live on the cutting edge at this point in my life. That early adopter wave, no, no, no, that's not for me anymore. But I've known a few folks who've moved to Linear. And everyone that I've spoken to who has moved to it has been like, "Yeah, it's been great." I've not heard anything negative. And I've heard or experienced negative things about every other product management tool out there. And so, it seemed like an easy thing. And it was a low-cost enough switch in terms of opportunity costs or the like; it took the effort of someone on our team moving those cards over and setting up the new system and training, but it was relatively straightforward. And yeah, we're super happy with it. And it feels different now. I feel like I can see the work in a different way which is interesting. I think we had brought in a Chrome extension for Trello. I think it's like Hello Epics or something like that that allows...it abuses the card linking functionality in Trello and repurposes that feature as an epic management thing. But it's quarter-baked is how I would describe it, or it's clearly built on top of existing things that were not intended to be used exactly in that way. So it does a great job. Hello Epics does a great job of trying to make something like parent-child list management stuff happen in Trello. But it's always going to feel like an afterthought, a secondary feature, something that's bolted on. Whereas in Linear, it's like, no, no, we absolutely have the idea of projects, of course, and you can see burndown charts and things. And the thing that I do want to be careful about is not leaning too much into management. Linear has the idea of cycles or sprints, as many other folks think of them, or iterations or whatever you want to call them. But we've largely not been working in that mode. We've just continued to work through the next up list; that's it. 
The next up list should be prioritized and well defined at the top and roughly in priority order. So just pick up the next card and work on it. And we just do that every single day. And now we're in a piece of software that has the idea of cycles, and I'm like, oh, this is vaguely interesting. Do we want to do that? Oh, but if you're going to do that, you probably do some estimation, right? And I was like, oh no, now we're into a place that's...okay, I have feelings. I got to decide how to approach that. And so, I am intrigued. And I wonder if we could just say, like, ten cards, that's how many come into a cycle, and that's it. And we use the loosest heuristics possible to define how we structure a cycle so that we don't fall into the trap of, oh, what's our roadmap going to look like six months from now? JK, what's anything going to look like six months from now? That's not a knowable fact. STEPH: I was just thinking where you said that you're moving into sprints or cycles, and then there's that push, well, now you got to estimate. And I just thought, do you? Do you have to estimate? [laughs] CHRIS: We need a burndown chart through 2024, and it must be meticulously accurate down to the hour. STEPH: I think meticulously wrong is how that goes. [laughs] CHRIS: Which is the best kind of wrong. If you're going to be wrong, be meticulous about it. STEPH: Be thorough about it. [laughs] Yeah, the team that I'm on right now, we have our bi-weekly planning, and we go through the board, and we pull stuff in. But there's never a discussion about estimation. And I hadn't really appreciated that until just now, how we don't think about, how long is this going to take? We just talked about, well, what's in-flight? And how much work do people still have going on? And then here's the list of things we can pull in. But there's always a list that you can go back to. Like, it's very...we pull in the minimum, knowing that if we run out of work, there's another place to go where there's stuff that's organized. And I just love that cadence, that idea of like, let's not try to make guesses about the future; let's just have it lined up and ready for us to go when we're ready to pull it in. Although I know, that's also coming from a very developer's perspective, and there are product managers who are trying to communicate as to when features are going to get out into the world. So I get that there's a balance, but I still have strong feelings and hesitations around estimating work. CHRIS: Well, I feel like there is a balance there. And so many things in history are like, well, this is an overcorrection against that, and that's an overcorrection against this. And the idea that we can estimate our work that far out into the future that's just obviously false to me based on every project I've ever worked on that has tried to do it. And it has always failed without question. But critically, there is the necessity to sync up work and like, oh, marketing needs to plan the launch of this feature, and it's a critical one. What's it going to look like? When's it going to be ready? You know, we're trying to go for an event, so it's not acceptable for us developers to never estimate anything past the immediate moment. We got to find a middle ground here. But where that middle ground is, is interesting. And so, just operating in the sort of we do work as it comes up is the easiest thing because no one's lying about anything at that point. 
But sometimes you got to make some guesses and make some estimations. And then it gets into the murky area of I believe with 75% confidence that in three weeks, we will have this feature ready. But to be clear, I said with 75% confidence that means one-quarter of the time, we will not be there at that date. What does that mean? What does that failure mode look like? Let's talk about that. And can you have honest, open, transparent, useful conversations there? That's the space that it becomes more subtle if you need to do that. STEPH: You're reminding me of a conversation that I had with someone where they shared with me some very aggressive team goals. And it was a very friendly conversation. And they're like, "How do you feel about aggressive goals?" And I was like, "Well, it depends. How do you feel about aggressive failure?" Because then once I know where you stand there, then we can talk about aggressive goals. Now, if we're being aggressive, but then we fail to achieve that, and it's one of those, okay, we didn't meet the goal that we'd expected, but everything is fine, and it's not a big deal, then I am okay. Sure, let's shoot for the stars. But if it's one of those, we are communicating these goals to the outside world, and it's going to become incredibly important that we meet these goals, and if we don't, then things are going to go on fire, people are going to be in trouble, and it's just going to be awful, then let's not set aggressive goals. Let's not box ourselves into a space where we are setting ourselves up to fail or feel pain in a meaningful way. I agree that estimations are important, especially in terms of you need to collaborate with other departments, and then also just to provide some sense of where the product is headed and when things may be released. I think estimations then just become problematic when they do become definite, and they're based on so many unknowns, and then when "I don't know" is not an answer. So if someone asked, "What's your estimate for this?" And the very honest, real answer is "I don't know," like, we haven't done this type of work before, or these are all the unknowns, and then someone's like, "Well, let's just put an estimation of like two weeks on it," and they just sort of try to force-fit it into being what they want, then that's where it starts to just feel incredibly problematic.
Because the types of work being done can vary so drastically that I find I'm less able to help my colleagues or my teammates because I don't have the context for what they're working on. It surprises me. I'm like, oh, I didn't even know we're working on that feature, or I don't have the context for what's the problem that we're trying to solve here. And it makes it just a lot harder to review and then have conversations with them. And I get overwhelmed in that environment. And I've recognized that about myself based on previous projects that were more similar to that versus if I'm on a project where the team does get together every so often, even if it's high level to be like, hey, here's the theme of the tickets that we're working on, or here's just some of the stuff, then I feel much more prepared for the work that is coming in and to be able to context switch and review. And yeah, so I've kind of learned that about myself. I'm curious, are you similar, or how does that work for you? CHRIS: I'm definitely similar. And I think probably the team is closer to what you're describing. So we do have a planning session every week, just a quick 30-minute scan through the backlog, look at the things that are coming up and also the larger themes. Previously, Epics in Trello; now, projects in Linear. But talking about what are the bigger pieces of work that we're moving on, and then what are the individual tickets associated with that that we'll be expecting to work on in the next week? And just making sure that everyone has broad clarity around what that feature set is. Also, we're a very small team at this point. Still, we're four people total, but one of the developers is on a break for a couple of weeks this summer. And so there are really only three of us that are driving on the code. And so, with three of us working on the projects, we try very intentionally to have significant overlap between the various...like, we don't want any one person to own any portion of things at this point. And so we're doing a good amount of pairing to cross-pollinate and make sure everyone's...not everyone's aware of everything, but at least one other person is sufficiently aware of everything between the three of us. And I think that's been working well. I don't think we have any major gaps, save for the way that we're doing our mobile architecture that's largely managed by one of the developers on the team and a contractor that we're working with to help do a lot of the implementation. That's a known; we chose to silo that thing. We've accepted the cost of that for now. And architecturally, the rest of us are aware of it, but we're not like in the Swift code writing anything because I don't know how to write Swift at this point. I'd love to learn it. I hear good things about the language. So yeah, I think conceptually very similar to what you're describing of still wanting to have people be able to review. Like, I don't want to put up a PR and people be like, I don't know, that looks like code, I guess. I'm not sure what it does. Like, I want it to be very...I want us all to be roughly on the same page, and thus far, we are. As the team grows, that will become trickier to maintain because there are just inherently probably more things that are moving, more different feature areas and surface area that we're tackling in any given week, or there are different ways to approach that. I know you've talked about having a limited number of themes for a given cycle, so that's an idea that can pop up. 
But that's something that we'll figure out as we get further. I think I'm happy with where we're at right now, so yeah, that's the story there. STEPH: Okay, cool. Yeah, all of that resonates with me, and thinking about it a little more deeply in this moment, I'm realizing I think something you said helped me put this together where when I'm reviewing someone's change, I'm not necessarily just looking to see does your code work? I'm going to trust you that your code works. I may have thoughts about design and other things, but I really want to understand more what's the change to the product that we're making? What's the goal that we're looking to achieve? How are we measuring this? And so if I don't have that context, that's what contributes to that feeling of like, hard context switching of not just context switching, but now I have to level myself up to then understand the problem that's being solved by this. Versus had I known some of the themes going into that particular cycle or sprint, I would have felt far more prepared for that review session versus having to then dig through all the data and/or tickets or talk to someone. Well, switching back to what's going on in my world, I have a particular tweet that I want to share, and it's one that Joël Quenneville brought to my attention. And it just resonates so much with all the type of work that I'm doing right now. So I'm going to read the tweet, and then we'll link to it in the show notes as well. But it's from Curtis Einsmann, and Curtis wrote: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one." And that describes all the work that I'm doing right now. It's a lot of exploratory, a lot of data-driven work, and finding ways that we can reduce the time that it takes to run RSpec on CI. And it also ties in nicely to one of the things that I think we talked about last week, where we discovered that a number of files have a high runtime variance. And I've really dug into the data there to understand, okay, is it always specific files that have this high runtime variance? Are there any obvious contributors to what's causing this? Are we making real network calls that then could sometimes take a long time to return? And the result is there's nothing obvious. They're giant files. The number of SQL commands that are being run for each file varies drastically. They're all high, but it's still very different. There's no single fact about these files that has really been like, yes, this is what's causing these files to have such a runtime variance. And so while I've been in the data, I'm documenting it, and I'm making a list and putting it all together in a ticket so at least it's there to look at later. But I'm going to move on. It's one of those I would love to know what's causing this. I would love to address it because it would put us in an ideal state for how we're distributing tests, which would have a significant impact on our runtime. But it also feels a little bit like chasing my tail because I'm worried, like with some of the other experiments that we've done in the past where we've addressed tentpoles, that as soon as you address the issue for one or two files, then other files start having the same problem. And you're just going to continue to chase and chase, and I don't want to be in that. So upfront, this was one of those: hey, let's look at the data. 
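For a sense of what that digging can look like in practice, here is a minimal Ruby sketch of a per-file variance report. The file names and timings are invented, and it assumes you've already collected several runtimes per spec file from CI builds; it's an illustration of the analysis being described, not the actual tooling from the episode.

# Hypothetical per-file timing data: several observed runtimes, in
# seconds, for each spec file across CI builds. All numbers invented.
timings = {
  "spec/models/user_spec.rb"     => [118.9, 120.4, 121.7, 119.8],
  "spec/features/signup_spec.rb" => [95.2, 240.8, 101.5, 310.9],
}

def mean(values)
  values.sum / values.size.to_f
end

def variance(values)
  m = mean(values)
  values.sum { |v| (v - m)**2 } / values.size.to_f
end

# Files with high variance are the ones that make even a perfectly
# balanced test distribution fall apart from one build to the next.
timings
  .map { |file, runs| [file, variance(runs)] }
  .sort_by { |_, var| -var }
  .each { |file, var| puts format("%-31s variance: %8.1f", file, var) }

Sorting by variance rather than by mean runtime is the point of the exercise: a slow-but-steady file schedules fine, while an unpredictable one quietly unbalances whatever distribution strategy you pick.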
If there's something obvious, let's address it; if not, move on. So I'm at that point today where I'm wrapping up all of that data, and then I'm going to move on, move on to the next thing. CHRIS: There's deep truth in that tweet that you shared at the start of this segment. The idea like if we knew the work that we had to do up front, yeah, we would just do that, but often, we don't. And so, being able to not treat it as a failure when something doesn't work out is, I think, so critical. I think to expand on the idea just a tiny bit, the idea of the scientific method, failure is totally an option and is part of science. I remember watching MythBusters, and Adam Savage is just kind of like, "Failure is always an option," and highlighting that as part of it. Like, it's an outcome. You've learned something. You have a new data point. You can take that and then carry it forward with you. But it's rough in the moment. And so, I do think that this is a worthwhile thing to meditate on. And it's something that I've had to revisit a handful of times in my career of just like, man, I feel like I've just been spinning my tires all week. I'm like, we know what we want to get done, but just each approach I take isn't working for one reason or another. And then, finally, you get to the end. And then you've got this paragraph-long summary of all the things that didn't work in your PR and one-line change sort of thing. And those are painful, but they're part of the game. Like, that is unavoidable. I have not found a way to just know how to do the work upfront always. I would love that. I would sign up for whatever seminar was selling that. I wouldn't. I would know that that seminar is a lie, actually. But broadly, I'm intrigued by the idea if someone were selling that, I'd be like, well, I mean, pitch me on it. Tell me why I should believe you; I don't, just to be clear. But yeah. STEPH: This project has really helped me embrace always setting a goal or a question upfront about what I'm wanting to achieve or what I'm looking to answer because a number of times while Joël and I have been spelunking through data...And then so originally, with the saga, we started out with why doesn't our math match reality? We understand that if these tests are distributed perfectly across the CPUs, then that should cut the runtime in half. But yet, we weren't seeing that even though we had addressed the tentpoles. So we dug into understanding why. And the answer is because they're not perfectly distributed, and it's because of the runtime variance. And that was a critical moment to look back and say, "Did we achieve the goal?" Yes, we identified the problem. But once you see a problem, it's just so easy to dig in and keep going. It's like, well, now I want to know what's causing these files to have a runtime variance. But it's one of those we achieved our goal. We acknowledged upfront that we wanted to at least understand why. Let's make a second decision, do we keep going? And I'm at that point where, frankly, I probably dug in a little more than I should because I'm stubborn. But I'm recognizing that now's the time to back away and then go back and move on to the next high-priority item, which is converting...for funsies, I'll share: the next thing is converting Test::Unit tests over to RSpec because we have, I think, around 25 tests that are written in Test::Unit. And we want to move them over to RSpec because just that particular step in the build process takes a good three to four minutes. 
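As a rough before-and-after of the kind of conversion being described, here is a hedged sketch. The Cart class, file paths, and test are all invented for illustration; the episode doesn't show any real code from that suite.

# Tiny stand-in class so the example is self-contained.
class Cart
  def initialize(line_items)
    @line_items = line_items
  end

  def total
    @line_items.sum
  end
end

# Before, in a Test::Unit-style file (e.g., test/cart_test.rb):
require "test/unit"

class CartTest < Test::Unit::TestCase
  def test_total_sums_line_items
    assert_equal 10, Cart.new([3, 7]).total
  end
end

# After, the same behavior in RSpec (e.g., spec/cart_spec.rb):
RSpec.describe Cart do
  describe "#total" do
    it "sums the line items" do
      expect(Cart.new([3, 7]).total).to eq(10)
    end
  end
end

The mechanical translation is usually small: def test_* methods become it blocks, and assert_equal becomes expect(...).to eq(...). The payoff described here isn't the syntax but collapsing two CI steps, and two Rails boots, into one suite.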
And part of that is just booting up Rails and then running the tests very fast. And we're underutilizing the machine that's running them because it's only 25 tests, but there are 86 CPUs to run them. So we'd really like to combine those 25 tests with the rest of the RSpec suite and drop that step. So then that should add minimal time to the overall build but then should take us down at least a couple of minutes. And then it also makes it easier to manage and help folks, so that way, there's one consistent testing framework in use versus having to manage some of these older tests. CHRIS: It's funny; I think it was just two episodes back where we talked about why RSpec, and I think both of us were just like, well yeah. But I mean, if there are tests in another framework, like, it's fine, you just leave them, with the exception that if, like, 2% of our tests are in Test::Unit, and everything else is in RSpec, yeah, maybe that conversion effort seems totally worth it. But again, I think as you're describing that, what I'm hearing is just like the scientific method, just being somewhat structured in the approach to what's the hypothesis? And what's the procedure we're going to use to determine if that hypothesis is true or false? And then what do we do? And then what are the results? And then you just do that on loop. But being structured, not just sort of exploring. Sometimes you have to be in exploratory mode. But I definitely find that that tiny bit of rigor of just like, wait, okay, before I actually do anything, what do I think is going on here? What's my guess? And then, okay, if that guess were true, what would I be able to observe in the world? Okay, here we go. And just that tiny bit of structure is so...it sometimes feels highly formal to go into that mode and be like, no, no, no, let me take a step back. Let me write down my thoughts. I'm going to have a little checklist and do the thing. But I've never regretted doing it. I will say that. I have deeply regretted not doing it. I feel like I should make a list of things that fit that structure. I've never regretted committing in Git ever. That's been great. I've always been able to unwind it, but I've never been able to not unwind it or the opposite. I've regretted not committing. I have not regretted committing. I have regretted not writing out my hypothesis or approach. I have not regretted doing it. And so, yeah, this feels like it falls firmly in that category of like, it's worth just a tiny bit of structure. There's a reason it is the scientific method. STEPH: Yeah, I agree. I've not regretted documenting upfront what it is I look to achieve and how I think I'm going to answer the question. That has been immensely helpful. Because then I also forget, like, two weeks ago, I'll be like, wasn't there a question around why this was happening, and I need to understand? And all of that was so context-heavy that as soon as I'm out of the thick of it, I completely forget it. So if I care about it deeply or if I want to be able to revisit it, then I need to document it for myself. It's given me a lot of empathy for people who do more scientific research around, oh my gosh, like, you have to document everything you do and then still be able to prove it five years from now or however long. I'm just throwing out numbers. And it needs to be organized enough that someone, if they do question your research, then you have it there. My research documents would not withstand scrutiny at this point because they are still just more personal notes. 
But yes, it's given me a lot of empathy and respect for people who do run very serious research, experiments, and trials, and then have to be able to prove it to the world. Pivoting just a bit, there's a particular topic that resonated with both you and me; in fact, it's a particular tweet. And, Louie, I do apologize if I mispronounce your last name, but Louie Bacaj. And we'll include a link in the show notes to the tweet, but Louie shared, "I managed multiple engineering teams before quitting tech. Now that I quit, I can speak freely. Here are 12 things your manager may not be telling you, but I know for a fact will help you." So there are a number of interesting discussions and comments that are in this thread. The one thing in particular that really caught my attention is number 10, and it's "Advocate for junior developers." So they said that their friend reminded them that just because someone doesn't have 10-plus years of experience does not mean that they won't be good. Without junior engineers on the team, no one will grow. Help others grow; you'll grow too. And the part that I love so much is that without junior engineers on the team, no one will grow because that was very thought-provoking for me. It's something that I find that I agree with deeply, but I hadn't really considered why I agree with that so much. So I'm excited to dive into that topic with you. And then, as a second topic to go along with that is, can juniors start with a remote team? I think that's one of the other questions when you and I were chatting about this. And I'm intrigued to hear your thoughts. CHRIS: A bunch of stuff there. Starting with the tweet, I love elements of this. Some of it feels like it's intentionally somewhat provocative. So like, without junior engineers on the team, no one will grow. That feels maybe a little bit past the bar because I think we can technically grow, and we can build things and whatnot. But I think what feels deeply true to me is truly help others grow; you'll grow too. The act of mentoring, of guiding, of training, of helping someone on their journey will inherently help you grow and, I think, change the way that you think about the work. I think the beginner mind, the earlier in the career folks coming into a codebase, they will see things fundamentally differently in a really useful way. It's possible that along your career, you've just internalized things. You've been like, yeah, no, that was weird. But then I smashed my head against it for a while, and now I understand this thing. And it just makes sense to me. But it's like, that thing actually doesn't make sense. You have warped your mind to match the thing, not, quote, unquote, "come to understand it." This is sounding too judgmental to people who've been in the industry for a while, but I've found this in myself. Or I can just take for granted things that took a long time to adapt my head to, and if anything, maybe I should have pushed back a little more. And so, I find that junior engineers can be a really fantastic lens for the complexity of a project. Like, the world is truly a complex place, and that's just true. But our job as software engineers is to tame that complexity and manage it. And so, I love the mindset that can come or the conversations that can come out of that. 
And it's much like test-driven development is a pressure on the complexity of your code, having junior engineers join the team and needing to explain how all of the different features work, and what the overall architecture is, and the message passing under this and that, it's a really useful conversation to have. And so that "Help others grow; you'll grow too" feels deeply, deeply true to me. STEPH: Yeah, I couldn't agree more in regards to how juniors really help our team and especially, as you mentioned, with complexity and having those conversations. Some of the other things that have come to mind for me as well around the importance of having junior developers on your team...and maybe it's not specifically they're junior developers but that you just have a variety of experience on your team. It's going to help you lean into a culture of learning because you have people that are at different stages of their career. And so you want an environment where people can learn together, that they can fail together, and they can be public about it. And having people that are at different stages of their career will lead, well, at least ideally, it'll lead to more pair programming. It's going to lead to more productive code reviews because then people can ask more questions around why did you choose this, or why are you doing that? Versus if everybody is at the same level, then they may just intuitively have reasons that they think someone did something. But it takes someone that's a bit new to say, "Hey, why did you choose this?" or to bring up some other ideas or ways that they could pursue it. They may bring in new ideas for, like, why has the team always done something this way? Let's think about new ways that we could do this. Or maybe this is really unfriendly, the way that we're doing this, not just for junior people but for people that are new to the team. And then there's typically less knowledge siloing because then you're going to want to pair the newer folks with the more experienced folks. So that way, you don't have this more senior developer who's then off in a corner working by themselves. And it's going to improve your communication skills. There's just...I realized I'm just rambling because I feel like there are so many benefits that go along with having a variety of people on your team, especially in terms of experience. And that just leads to such a better learning environment and, ultimately, better software and better products. And yet, I find that so many companies won't embrace people that are newer to software. They always want the senior developers. They want the 10x-ers or whatever those are. They want the people that have many, many years of experience. And there's so much value that comes from mentoring the next group of developers. And it's incredibly frustrating to me that one, companies often aren't open to it. But honestly, more than that, as long as you're upfront and honest about like, hey, this is the team that we need right now to build what we're looking to build, I can get past that; I can understand that. But please don't then mislead people and say that you're a junior-friendly team, and then not be prepared. I feel like some teams will go so far as to say, "Yes, we are junior-friendly," and they may even tweak their interview process to where it is a bit more junior-friendly. But then, by the time that person joins the team, they're really not prepared. They don't have an onboarding plan. They don't have a mentorship plan. 
And then they fail that person because that person has worked hard to get there. And they've worked hard to bring that person onto the team, but then they don't have a plan from there. And I've seen it too many times. And it just frustrates me so much because it puts that junior person in such a vulnerable state where they really have to be an incredible self-advocate to then overcome those hurdles from a lack of preparation on that company's part. Okay, I think that's my vent. I'm sure I could vent about this a lot more, but I will cut it off there. That's the heart of it. CHRIS: I do think, like, with anything else, it's something that we have to be intentional about. And so what you're saying of like, yeah, we're a junior-friendly company, but then you're just hiring folks, trying to find folks that may work at a slightly lower pay grade, and that's what that means to you. So like, no, no, that's not what this is. This needs to be something different. We need to have a structure and an organization that can support folks at different points in their career. But it's interesting to me to think about the sort of why of it. And the earlier part of this conversation, we talked about some of the benefits that can come organizationally from it, and I do sincerely believe in that. But I also believe that it is fundamentally one of the best ways to find really talented people early on in their career and be in a position to hire them where maybe later on in their career, that just wouldn't happen naturally. And I've seen this play out in a number of organizations. I went to Northeastern University for my college, and Northeastern is famous for the co-op program. Northeastern sounds really fancy. Now I learned that they have like a 7% acceptance rate for college applications right now, which is wildly low. When I went to Northeastern, it was not so fancy. So just in case anyone's hearing that and they're like, "Oh, Northeastern, wow." I'm not that fancy. [laughs] But they did have the co-op then, and they still have it now. And the co-op really is a differentiating thing. You do three six-month rotations. And it is this fundamental differentiator in terms of when you're graduating. Particularly, I was in mechanical engineering. I came out, and I actually knew what that meant in the world. And I'd used Outlook, and I knew what a water cooler was and how to talk near one because that's a critical thing to learn in the world. And really transformative experience for me. But also, a thing that I observed was many of my friends ended up working at companies that they had co-oped for. I'm one of those people. I would say more than 50% of my friends ended up with a position at a company that they had done a co-op rotation with. And it really worked out fantastically. That organization and the individual got to try things out and experience working together. And then, I ended up staying at that company for a number of years, and it was a wonderful experience. But I don't know that I would have ended up there otherwise. That's not necessarily the way that would have played out. And similarly like, thoughtbot has the apprenticeship. And I have seen so many wonderful developers start at that very early point in their career. And there was this wonderful structure around them joining the thoughtbot team, intentional, structured, supported. And then those folks went on to be some of the most talented developers that I've ever worked with at a wonderfully talented organization. 
And so the story of like, you should do this, organizations. This is a thing that you should invest in for yourself, not just for the individual, like, for both. Everybody wins in this case, in my mind. I will say, though, in terms of transparency, I currently manage a team of three developers. And we hired very intentionally for senior folks this early on in where we're at. And that was an intentional choice because I do believe that if you're going to be hiring more junior developers, that needs to be something that you do very intentionally, that you have a support structure in place, that you're able to invest the time in where they're at and make sure we have sort of... I think a larger team makes more sense to bring juniors into broadly. That's the thing that I'm saying out loud that I'm like, I should push on that a little bit. Is that true? Do I really believe that? But I think so, my actions obviously point to it. But it is an interesting trade-off space of how do you think about that? My hope is that as we grow as an organization, that we would then very intentionally start hiring folks in a more junior, mid-level to junior and be very intentional about how we support them, bring them into the organization, et cetera. I do believe it is a win-win situation for everyone when done with intention and with focus. STEPH: That's such an interesting bit that you just said because I very much appreciate when companies recognize do we have the bandwidth to support someone that's more junior? Because at thoughtbot, we go through periods where we don't have our apprenticeship that's open because we recognize we're not in a place that we can support someone. And we don't want to bring someone in unless we can help them be successful. I very much admire that and appreciate that about companies when they can perform that self-assessment. I am so intrigued. You'd mentioned being a smaller team. So you more intentionally hire senior developers. And I think that also makes sense because then you need to build up who's going to be in that mentorship pool. Because then people could leave, people could take vacations, and so then you need to have that support system in place. But yeah, I don't know what that perfect balance is. It's like, okay, so then as soon as you have like five people available to mentor or interested in mentorship, it's like, then do you start bringing in the conversation of like, let's bring in someone that we can help build up and help them be successful and join our team? And I don't know what that magical number is. I do think it's important for teams to reflect to say, "Can we take on someone that's junior?" All the benefits of having someone that's junior. And then just being very honest and then having a plan for once that junior person does arrive. What does their career path look like once they've joined that team, and who's going to be that person that's going to help them level up? So not only make that choice upfront of yes, we are bringing someone on but let's also think about like the first six months of their work here at the company and what that's going to look like. It feels like an important step that a lot of companies fail to do. And I think that's why there are so many articles that then are like, hey, if you're a junior dev, here's all the things that you should do to be the best junior dev. That's fabulous. And we're constantly shoring up junior devs to be like, hey, here's all the things that you need to be great at. 
But we don't have as many conversations around; hey, here's all the things that your manager or the rest of your team should be great at to then support you equally as you are also doing your best to meet them. Like, they need to meet you halfway. And I'm not completely unsympathetic to the plight; I understand. It's often where I've seen with teams the more senior developers that have very strong mentorship communication skills are then also the teammates that get pulled into all the meetings and all the different projects, so then they are less available to be that mentor. And then that's how this often fails. So I don't think anybody is going into this intentionally, but yet, it's what happens for when someone is new and joining a team, and it hasn't been determined the next six months what that person's onboarding and career path looks like. Circling back just a bit, there's the question around, can juniors start with a remote team? I can go first. And I'm going to say unequivocally yes. There's no reason a junior can't start with a remote team. Because all the things that I feel strongly about come down to how is your team going to plan for this person? And how are they going to support this person? And all the benefits that you get from being in an office with a team, I think those do exist. And frankly, for someone like myself, it can be easier to establish a bond with someone that you get to see each day, get to see in person. You can walk up to their desk and can say, "Hey, I've got a question for you." But I think all those benefits just need to be transferred into a remote-friendly way. So I think it does ratchet up how intentional you have to be with your team and then onboarding a junior developer. But I absolutely think it's doable, and we should do it. CHRIS: You went with unequivocally yes as your answer. I'm going to go with a qualified maybe as my answer. I want this to be true, and I think it can be true. But I think it takes all the more intentionality than even what we've been describing. To shift the question around a little bit, what does remote work mean? It doesn't just mean we're doing the work, but we're separate. I think remote work inherently is at its best when we also are largely async first. And so that means more structured writing. The nature of the conversation tends to be more well-formed in each interaction. So it's like I read a big document, and then I pass it over to you. And at your leisure, you respond to it with a bunch of notes, and then it comes back to me. And I think that mode of interaction, while absolutely wonderful and something that I love, I think it fits really well when you're a little bit further on in your career when you understand things a little bit better. And I think the dance of conversation is more useful earlier on and so forth. And so, for someone who's newer to a team, I think having the ability to ask a quick question over and over is really useful to someone who's early on in their career. And remote, again, I think it's at its best when it's async. And those two are sort of at odds. And so it's that mild tension that gives me pause of like, something that I think that makes remote work great I do think is at least a hurdle that you would have to get over in supporting someone who's a little bit newer. Because I want to be deeply present for someone who's newer to their journey so that they can ask a lot of questions so that I am available to be interrupted regularly. 
I loved at thoughtbot sitting next to someone and being their mentor and being like, yeah, anytime you want, just tap on my desk. If I got my headphones on, that doesn't mean I'm ignoring you; it means I just need to make the sounds go away for a minute because that's the only way my brain will work. But feel free to just tap on my desk or whatever and grab my attention for a moment. And I'm available for that. That's an intentional choice. That's breaking up my continuity of the day, but we're choosing that for a reason. I think that's just a little harder to do in a remote context and all the more so if we're saying, hey, we're going to try this async thing where we write structured documents, and we communicate in these larger, more well-formed communications back and forth. But I do believe it can be done. I think it should be done. I just think it's all the harder for all of those reasons. STEPH: I agree that definitely makes it harder. But I'm going to push a little bit and say that when you mentioned being deeply present, I think we can be deeply present with someone and be remote. We can reduce the async requirements. So if you are someone that is more senior or more accustomed to the team, you can fall back to more of those async ways to communicate. But if someone is new, and needs more mentorship, then let's just set up time where we're going to literally hang out for a couple of hours each day or whatever pairing environment works best for them because pairing can also be exhausting. But hey, we're going to have a check-in each day; maybe we close out each day and touchpoint. And feel free to still message me and interrupt me. Like, you're going to just heighten your availability, even though it is remote. And be aware, like, hey, this person could message me at more times, and I'm okay with that. I have opted into this form of communication. So I think we just take that mindset of, hey, there's this person next to me, and I'm their mentor to like, hey, they're not next to me, but I'm still their mentor, and I'm still here for them. So I agree that it's harder. I think it falls on us and the team and the mentors to change ourselves versus saying to juniors, "Hey, sorry, it's remote. That's not going to work for you." It totally works for them. It's us, the mentors, that need to figure out how to make it work. I will say being on that mentor side that then not being able to see someone so if they are next to me, I can pick up on body language and facial expressions, and I can tell when somebody's stuck. And I can see that they're frustrated, or I can see that now's a good time for me to just be like, "Hey, how's it going? What are you working on? Or do you need help with something?" And I don't have that insight when I'm away. So there are real challenges like that that I don't know how to address. I have gone the obnoxious route [laughs] where I just message people, and I'm like, "Hey, how's it going? How's it going? How's it going?" And I try not to do that too much. But I haven't found a better way to manage that other than to constantly check in because I do have less feedback from that person that I'm working with unless they are just incredibly open about sharing when they're stuck. But typically, when you're newer to a team or newer to a career, you're going to be less willing to share when you're stuck. But yeah, there are some real challenges, but I still think it's something for us to figure out.
Because otherwise, if we cut off access for remote teams to junior folks, I mean, that's where we're headed. There are so many companies and jobs that are headed remote that not being junior friendly and being remote in my mind is just not an option. It's something that we need to figure out. And it's hard, but we need to figure it out. CHRIS: Yeah, 100% on we need to figure that out and that that's on us as the people managing and structuring and bringing folks into teams. I think my stance would be like, let's just be clear that this is hard. It takes effort to make sure that we've provided a structure in which someone newer to a team can be successful. It takes all the more effort to do so in a remote context, I think. And it's that recognition that I think is critical. Because if we go into this with the wrong mindset, it's like, oh yeah, it's great. We got this new person on the team. And yeah, they should be ready to go in like two weeks, right? It's like, no, no, this is a different thing. We need to be very clear about it. This is going to require that we have someone who is able to work with them and support them in this. And that means that that person's output will likely be a little bit reduced for the period of time that we're talking about. But we're playing a long game here. Let's make sure we're clear on that. This is intentional. And let's be clear, the world of hiring and software right now it's not like super easy. There aren't way more software developers than there are jobs; at least, that's been my experience. So this is something absolutely worth investing in for just core business reasons and also good for people. So hey, it's a win-win. Let's do it. Let's figure it out. But also, let's be clear that it's going to be a little tricky along the way. So, you know, let's be intentional about that. But yeah, obviously do it, got to do it. STEPH: Wait, so I feel like we might have circled back to unequivocally yes. [laughs] Have we gotten there, or are you still on the fence? CHRIS: I was unequivocally yes from the beginning, but I couched it in, but...yeah, I said other things. You're right. I have now come around; let's say to unequivocally yes. STEPH: [laughs] Cool. I don't want to feel like I'm forcing you to agree with me. [laughs] But I mean, we just so rarely disagree. So we've either got to identify this as something that we disagree on, which would be one of those rare occasions like beer and Pop-Tarts. CHRIS: A watershed moment. Beer and Pop-Tarts. STEPH: Yeah, those are the only two so far. [laughter] CHRIS: Not together also. I just want to go on record beer and Pop-Tarts; I don't think would be...anyway. STEPH: Ooh, I don't know. It could work. It could work. CHRIS: Well, there's another thing we disagree on. STEPH: I would not turn it down. If I was eating a Pop-Tart, and you're like, "Hey, you want a beer?" I'd be like, "Sure," vice versa. I'm drinking a beer. "Hey, you want a Pop-Tart?" "Totally." CHRIS: Okay. Well yeah, if I'm making bad decisions, I'm obviously going to chain them together, but that doesn't mean that they're a good decision. It's just a chain of bad decisions. STEPH: I feel like one true thing I know about you is that when you make a decision, you're going to lean into it. So like, this is why you are all about if you're going to have a Pop-Tart, you're going to have the highest sugary junk content Pop-Tart possible. So it makes sense to me. CHRIS: It's the Mountain Dew theorem, yeah. STEPH: I didn't know this had a theorem. 
The Mountain Dew theorem? CHRIS: No, that's just my name. Well, yeah, if I'm going to drink soda, I'm going to drink Mountain Dew, the nonsense nuclear option of soda. So yeah, I guess you're describing me, although as you say it back to me, I suddenly feel very, like, oh God, is this who I am as a person? [laughs] And I'm not going to say you're wrong. I'm just going to spend a little while thinking about some stuff. STEPH: I mean, you embrace it. I think that's lovely. You know what you want. It's like, all right, let's do this. Let's go all in. CHRIS: Thank you for finding a wonderfully positive way to frame it here at the end. But I think on that note, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
334: Name That Bike

The Bike Shed

Play Episode Listen Later Apr 19, 2022 42:24


Chris got a bike. Specifically, he bought a bike to use in a triathlon he signed up to participate in. Now he needs to name the bike, and speaking of naming things, a more technical topic that he talks about is the Crispy Brussels Snack Hour. Steph talks about Rescue Rails projects and increasing developer acceleration. They answer a listener question asking, "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec?" This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Translate frustrations into professional corporate (https://twitter.com/MeanestTA/status/1509936432625897474) Learn Hotwire by Building a Forum (https://twitter.com/afomera/status/1512287468078264322) parallel_tests (https://github.com/grosser/parallel_tests) parallel_split_test (https://github.com/grosser/parallel_split_test) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Oh, but I recently learned that Robert Downey Jr. in the Marvel movies he's snacking a lot, maybe not Iron Man, but something...oh no, he's snacking a lot. And I'd read that he was snacking a lot on set, and so they just built it in to where like, sure, you can snack as your character while you're doing stuff. CHRIS: [laughs] STEPH: And I think that's so cool because I find that I am eating every time I show up to record with you. So I would like the same special star treatment as Robert Downey Jr., [laughs] and I just get to eat during each Bike Shed. [laughs] CHRIS: All right. [chuckles] My understanding is also that he was wildly the highest paid of all the actors, so I think that should also come along with this. STEPH: [laughs] CHRIS: Yeah, there's a lot that we can sort of layer on here, but it makes sense to me, and I'm fully on board. STEPH: You're an excellent agent. Thank you for fighting for my higher pay. [laughter] CHRIS: You are welcome. STEPH: What a good co-host you are. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. One of these days, I'm going to say, "I'm Chris Toomey," and then I'm just going to see how you roll with it, although now I'm ruining it, I should have just gone for it. [laughs] CHRIS: Nothing can prepare me for this despite the fact that you're telling me in this moment. In that future moment when you do it, I will still be completely knocked out of whack. Just like for anyone out there listening, the thing that Steph would normally have said instead of what [laughs] she just said was, "What's new in your world?" STEPH: [laughs] CHRIS: And I contractually require that that is the only way she starts this question to me because I get completely lost. She's like, "How are you doing?" I just overthink it, and I get lost, and then we end up in a place like this where I'm just rambling. STEPH: Every podcast contract you have from here on out must begin with hey, Chris, what's new in your world? [laughs] I will still get to that question. I just also had to tell you my future joke. I'm going to play that.
Hopefully, you'll forget, and one day I will resurface. CHRIS: I can pretty much promise you that I'm going to forget it. [laughter] STEPH: Excellent. Well, to make sure I stick within the Chris Toomey contract guidelines, hey, Chris, what's new in your world? CHRIS: What's new in my world? Now I just want to spend a lot of time putting together my rider. There can be no brown M&M's in the bowl. No eye contact, please. And I can only be addressed with this one question which is, to be clear, very not true, Steph. And I always record with a video because we actually like to have human faces attached to things. Anyway, I'm going to tighten this all up. When we get to the technical segment of my world, I'm going to tell you about Crispy Brussels Snack Hour, so just throwing that out there as an idea. But before we do that, I'm going to share a fun little thing which is I bought a bike, which is exciting. It's not that exciting. People have bikes. This is exciting for me. But the associated thing that is more exciting/a little terrifying is I'm going to try and run a triathlon. I'm going to try and run, swim, and bike a triathlon as they go, specifically a sprint triathlon for anyone out there that's listening and thinking, oh wow, that sounds like a thing. The sprint is the shortest of the distances, so that's what I'm going to go for. But yeah, that's a thing that I'm thinking about in my world now. STEPH: I know next to nothing about triathlons. So what is a sprint in terms of like, what is the shortest? What does that mean? CHRIS: I think there actually maybe even shorter distances but of the common, there's sprint, Olympic. I want to say half Ironman, and then Ironman are the sequence. And an Ironman, as far as I understand it, I think it's a full marathon. It's like a century bike ride or something like that. It's an astronomical amount of everything. Whereas the sprint triathlon being the shortest, I think it's a 3.6-mile run, so a little over a 5K run, a 10-mile bike ride, and a quarter-mile swim, I want to say, something like that. But they're each scaled down to the rough equivalent of a 5K but in each of the different events. So you swim, and then you bike, and then you run. And so I'm going to try that, or at least I'm going to try to try. It's in September, and now is not September. So I have a lot of time between now and then to do some swimming, which I haven't done...like, I've swum but not in a serious way, not in an intentional way. So I got to figure out if I still know how to swim, probably get better at biking, and do a little bit of running, and it's going to be great. It's going to be a lot of fun. I'm super excited about it. Only a little terrified. STEPH: I think this is where as your co-host coach, which you have not asked me to be, where I would say something about there is no try, to mimic Yoda. [laughs] CHRIS: Yep, yep. Yep. Do or do not. Sprint or sprint not. There is no trying. Oh, were you making a try pun there? STEPH: I didn't go that far, but you just brought it home. I see where you're going. [laughs] CHRIS: This is pretty much what I do professionally is I just take words, and I roll them around until I find something else to do with them. So glad that we got there together. STEPH: Well, I'm really excited to hear about this. I don't know anyone that's trained for a triathlon. I think that's true. Yeah, I don't think I know anyone that's trained for a triathlon. So I'm curious to hear about how that goes because that sounds intense, friend. 
CHRIS: I think so. None of the individual segments sound that bad but stitching them all together, and I think the transitions are some of the tricky parts there. So yeah, it'll be fun. It's something I find...I used to never run; that was the thing. Like, deeply true in my head was that I'm not a runner. This is just a true fact about me. And then I ran a 5K one year for...it was like a holiday 5K fun run with friends. And every bit of the training leading up to it was awful. I did Couch to 5K. I hated it. My story in my head of I'm not a runner was proven with every single training run. Man, did I hate it. And then something magical happened on the day that I actually ran the race, and it was fun. And I was out there, and there was the energy of being in this group of people. But it was competitive and not competitive in this really interesting way. And then it ended, and we were just hanging out in a parking lot, and they gave us beer. And I was like, well, this is actually delightful. Maybe I actually like this thing. And so I've run a bunch of different races. And I've found that having a race to train for, and by train, I just mean some structured attempt at running, has been really enjoyable and useful for me. So yeah, this is just ratcheting that up a tiny bit. I've done a couple of half marathons is the high watermark so far. It's a good distance. But I don't know that a full marathon makes sense; that's a real commitment. And I'm looking to move laterally rather than just keep getting more complex in my running. So we're trying the shortest possible triathlon that I know of. STEPH: I am such a believer that exercise should be fun, so I love that. Like, I'm not a runner, but then you get around people, and it's exciting. And then there's that motivation, and then there's a fun ending with beers that totally jives with me. Because sure, I can go to the gym; I can lift weights, I can make myself exercise. There's some fun to it. But I strongly prefer anything that's more of like a sport or group exercise; that's just so much more fun. Well, super cool. Well, I'm excited. I would ask you all the details about your bike, but I know nothing. Do you want to share details about your bike? There may be other people that are interested. CHRIS: Oh yeah, my bike. I went to the bike store, and I said, "Could I have a bike, please?" And then they toured me around and showed me all the fancy...they were like, "This is our most modest entry-level bike." And then they kept walking around and showing me fancier bikes. And I was like, "Can we go back to that first one? That one seemed great." STEPH: [laughs] CHRIS: Because it got all of the checkboxes I was looking for, which is basically it's a bike. So actually, the specifics on it are it's a hybrid bike, so like a mix between road, and I don't even know the other road bikes I know of, and maybe it's trail. But I don't think it's meant for going on the trail. But for me, it'll be fine for what I'm trying to do as far as I understand it. It's technically a fitness hybrid, which I was like, oh, fancy. It's a fitness bike; look at me go. But it was basically just like, I would like a bike. General-purpose hybrid seems like the thing that makes sense. So I got a hybrid bike. And that's where I'm at. Oh, and I got a helmet because that seems like a smart move. STEPH: Nice. 
Yeah, the bike I own is also one of those hybrids where it's like…because when I moved to Boston...and lots of people have the road bikes, but their tires are just so skinny; it made me nervous. And so I saw one of the hybrid bikes, and I was like, that one. That looks a little more steady and secure, so I went with that one even though it's heavier. Do you have a name for your bike? Are you going to think of a name for your bike? CHRIS: I didn't, and I wasn't planning on it. But now that you've incepted me with this idea that I have to name my bike, of course, I have to name my bike. I'm going to need a couple of weeks to figure it out, though. We're going to have to get to know each other. And you know, something will become true in the universe for me to answer that question. But as of so far, no, I do not have a name for the bike. STEPH: Cool. I'll check back in. Yeah, it takes time to find that name. I feel you. CHRIS: [laughs] Yeah, don't make up a name. I have to find what's already true and then just say it out loud. Speaking of naming things and perhaps doing so in a frivolous way, as I mentioned earlier, the more technical topic that I want to talk about, oddly, is called Crispy Brussels Snack Hour. [laughs] So, within our dev team, we have started to collect together different things that don't quite belong on the product board, or at least they're a little more confusing. They're much more technical. In a lot of cases, they are...our form handling is a little rough. And it's the sort of thing that comes up a lot in pull requests where we'll say, "I feel like this could be improved." And we're like, "Yeah, but not in this pull request." And so then it's what do you do with that? Do you put a tech debt card in the product board? You and I have talked about tech debt cards plenty of times, and it's a murky topic. But we're trying within the team to make space and a way and a little bit of process around how do we think about these sorts of things? What are the pain points as a developer is working on the system? So to be clear, this isn't there is a bug because bugs we should just fix; that's my strong feeling, or we should prioritize them relative to the rest of the work. But this is a lower level. This is as a developer; I'm specifically feeling this sort of pain. And so we decided we should have a Trello board for it. And they were like, "Oh, what should we name the Trello board?" [laughs] And I decided in this moment I was like, "You know, if we're being honest, I've named everything very boring, very straight up the middle. We don't even have that many things to name. So we have zero frivolous names within our team. I think this is our opportunity. We should go with a frivolous name. Anybody have any ideas?" And someone had worked on a team previously where maybe it was a microservice or something like that was called crispy Brussels, like, crispy Brussels sprouts but just crispy Brussels. And so I was like, "Sure, something like that. That sounds great." And then they ended up naming it that which was funny, and fun, and playful in and of itself. But then we were like, "Oh, we should have a time to get together and discuss this." So we're now exploring how regularly we're going to do it. But we were like, let's have a meeting that is the dev team getting together to review that board. And we were like, "What do we call the meeting?" And so we went around a little bit, but we ended with the Crispy Brussels Snack Hour. STEPH: That's delightful. 
I love the idea of onboarding new people, and they just see on their calendar it's Crispy Brussels Snack Hour, come on down. [laughs] CHRIS: It's also got an emoji Brussel sprout and an emoji TV on either side of the words Crispy Brussels Snack Hour. So it's really just a fantastic little bit of frivolity in our calendars. STEPH: [laughs] That's delightful. How's that going? I don't think we've tried something like that explicitly in terms of, like you said, there are discussions we want to have, but they're not in the sprint. They're not tech debt cards that we want to create because, like you said, we've had conversations. So yeah, I'm curious how that's working for you. CHRIS: Well, so we've only had the one so far; it went quite well. We had a handful of different discussions. We were able to relatively prioritize this type of work within that. But one of the other things that we did was we had a conversation about this process, about this meeting, and the board. And whatnot. So we identified a couple of rules of the road or how we want to approach this that I think will hopefully be useful in trying to constrain this work because it's very easy to just like; nothing's ever perfect. And so this could very easily be a dumping ground for half-formed ideas that sound good but aren't necessarily worth the continued effort, that sort of thing. So the agenda for the meeting as described right now is async between meetings. Any of us can add new cards, ideally stated as problems and not solutions. So our form handling could use improvement. And then in the card, you can maybe make a suggestion of I think we could use this library or something like that. But rather than saying use this library or move to this library, we frame in terms of the problem, not necessarily the solution. And then, at the start of the meeting, any individual can champion a card so they can say, "Here's the thing that I really want everyone to know about that I've been feeling a lot of pain on." So it's a way for individuals who have added things to this to add a little bit more detail. Then using Trello as voting functionality, we each get a couple of votes, and we get to sprinkle them across different cards, and then using that now allows us collectively to prioritize based on those votes. And so the things that get voted up to the top we talk about; we prioritize some amount of work coming into the sprint. If it's actually going to turn into work, then it'll go onto the product board because ideally, it's moved from problem space to more of solution space even if the solution, the work to be done is do a spike on XYZ library or approach to form handling or whatever it is. But so ideally, it then moves on to the other board. The other thing that I felt was important is it's very easy for this to be a dumping ground for ideas. So my suggestion is at the end of the meeting, we sort by date, and we prune the oldest things. So it's like, if it's still hanging around and we haven't done it yet, and it's not getting voted up, then yes, we might feel some pain but not enough. It's not earning its place on this board. So that's my hope is we're weeding the Brussel sprouts garden that we have at the end of the meeting. That's roughly what we have now. We really only had the one, so that idea of pruning will probably come in later on. And it may be that this doesn't work out at all, and this ends up being tech debt cards that get stale and don't capture the truth. 
But I'm hopeful because there's definitely...there's a conversation to be had here. It's just whether or not we can make sure that conversation is useful and capturing the right amount of context and at the right points in time and all of that. STEPH: Yeah, I like it. I like the whole process you outlined. You know what it made me think of? It sounds like a technical retro, not that retros can't be technical; we bring up technical stuff all the time. But this one sounds like there was more technical discussion that was still looking for space to bring up. So the way that you mentioned that people add their thoughts, that it can be done async, and then you vote up, and then as things get stale, you remove them and focus on the things that the team voted for, that's really cool. I've never thought of having just a technical-specific retro. CHRIS: Yeah, definitely informed by retro. But again, just that slight honing the specific focus of this is just the dev team chatting about deeply dev-y things and making a little bit of space for that. I think the difficulty will be does this encourage us to work on this stuff too much? And that's the counterbalance that we have to have because this work can be critically important. But it can also be a distraction from features that we got to ship or bugs that are in the platform or other things like that. So that balancing act is something that I'm keeping in mind, but thus far, the way we structured it, I'm hopeful. And I'm interested in exploring it more, so we'll see where we get to. And I'll certainly report back as we refine the Crispy Brussels Snack Hour over time. STEPH: I feel like the opposite is true as well, where you have these types of concerns and things that you want to bring up. And even if they're on the board, once you get to sprint planning, there's a lot of context and conversation there that maybe the whole team doesn't have. It doesn't feel like the right moment to dive into this because you're trying to plan a new sprint. So then that stuff gets bumped down to the bottom or just never really discussed, or it gets archived. So I feel like the opposite is totally true, too, where you have this stuff, but then it never gets talked about because sprint planning is not the right place. So yeah, I'm really intrigued to see how that balance works out for y'all as well. CHRIS: Yeah, I think it's an exciting time, and we'll see where it goes. But like I said, I'm hopeful on it. But yeah, bikes, triathlons, and crispy Brussels, that's my world. Mid-roll Ad: Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: I have a couple of fun things that I want to share and then something that's a little more in the techie space. 
The first one is there's a delightful Twitter thread that caught my attention recently that I just want to share; totally not tech-related. But this person shared a thread talking about how they help everyone on their team who's older than they are, making sure that the slang that they're using is correct in its context. And so they provided some funny examples. And then, in return, they also will translate this person's frustrations into professional corporate-speak, and it's such a good thread. So if you need a good laugh, I will make sure to include a link in the show notes. The slang is really funny, but it's actually the translation of frustrations into professional corporate-speak that that's the part that resonated with me. That was really good. [laughs] CHRIS: You shared this with me outside of this conversation, and I've read through them. Listeners out there, do not sleep on this. I highly suggest reading through this thread because it is fantastic. STEPH: The other thing that I saw is Andrea Fomera, who is a Rails developer and creates a fair amount of content...I haven't been through some of that content, but I know there's content around Rails. And specifically, there is a newer course called Learn Hotwire by Building a Forum. And she has made this totally free, and I just think that is so cool. And she shared that on Twitter, so I'll be sure to include a link in that to the show notes because Hotwire is something I haven't used yet. And so I saw this free course, and I think it would be fun to dabble and go through the course. And I know there are some other people at thoughtbot that have used it and seem really happy with it or interested in using it as well. Is that something that you've used? CHRIS: I have not. I skipped over Hotwire in my adventures. I'd found Inertia and was quite happy with that. And then, in that way that, I sometimes limit the amount of things that I'm allowed to explore on the internet in hopes of actually getting some work done; I have not spent much time. But enough folks that I deeply respect are very excited about Hotwire that it remains in the like; I would love to have an afternoon just to poke around with that. So I may take a look at this, although I don't know, I'm probably still in my moratorium. I'm not allowed to look at new frameworks for a little while time period. But I hear great things. STEPH: That's fair. That's also what I've heard. I've heard great things. So yeah, I just figured I would share that in case anybody else is interested in looking for a course that they could take and also dabble at Hotwire. The other thing that's on my mind is more the type of projects that I'm really getting a lot of joy from. Because I've known about myself for a while that greenfield projects are nifty, but they're not my thing. They're not the thing that brings me a lot of joy. It's just kind of nice. You got your own space, and you're building from the ground up, cool, cool, cool. But this one, I found that the projects that I'm really starting to gravitate towards are what I've heard someone else call Rails Rescue projects. So those are the projects where they have been around for a while, or they've just been built in a way that the data modeling structure makes it really hard to implement new features. Maybe there's a lack of test coverage that makes it really risky to ship new work or to make changes. There are lots of bug reports and errors that the team is fighting with. 
So then that type of work comes down to where you're trying to either increase stability for the application and for users and/or you're looking to increase developer acceleration. And I really, really liked those projects. That's the type of project that I've been a part of for...I think my last couple of clients have been in that way. I don't know that they would describe it that way, that it's a Rails Rescue project. But if I can see that opportunity where I see there's a stability issue or developers are feeling a lot of pain in one area, then that's the portion of the application, the portion of the team that I'm going to gravitate towards. Or like the current work that I'm doing where we're really focused on testing and making some improvements there or reducing that pain that the team is feeling around how long CI takes to run or the flakiness because then you're having to re-verify your CI runs. I like that work. It's a bit slow and frustrating, so it does seem to require a patient person. You also have to have lots of metrics that are guiding you because you can have a lot of assumptions around I'm going to make this improvement, but it's going to take effort to get there. And it'd be great if I can validate that effort upfront. So I feel like a lot of my time is spent more around metrics, and data, and excel sheets than necessarily coding. I don't know if that's great, but it's part of the work. There's a balance there. So I just found that interesting. I don't think I would have thought this is something I was interested in until now that I've been on these projects for a while. And I've started noticing a theme where I really enjoy them. Although I realize looking back at former Stephanie days when I was going through Launch Academy and learning to code, I really thought I wanted to be in DevOps. DevOps seemed like the cool kids' corner. They knew how the internet worked. They knew what was happening. They were making it live. And I just thought it seemed really cool. For the record, it is still a cool kids' corner. But I have also learned that the work-life balance isn't great with DevOps because you just never know when you're going to be on call. And that really stood out to me as something that I didn't want to do. And I do like building some features. But essentially, it's that developer acceleration that I really liked because they were the ones that were coming and often building tools and making it easier for then people to then ship their code and get it out into the world and triage. And so I liked the fact that their users were developers versus the people using the application as much, although, I guess, technically both. But the people they were often striving to help the most was the internal team, and that resonated with me. So I guess I have eventually found my way into that space. It wasn't through DevOps, but it is now through this idea of projects that need some rescuing. CHRIS: I love that you've spent enough time now to figure out what it is that draws you in the work and the shape of projects that is meaningful to you. Interestingly, I find myself not on the opposite side of things...you know, we're always looking for a disagreement, and this isn't a disagreement, but this is a thing on which we differ a surprising amount because I do like the early-stage stuff, the new, the breaking ground, all of that exciting whatnot. But how do I not make this a more complicated statement? I appreciate that you have the point of view that you do. 
I think the world needs more of what you're doing than the inclination that I have, like; I want to start something bright, and fresh, and new, and I can see so much progress immediately in front of me. And this is amazing. But the hard, meaningful work like maintenance, and support, and legacy, and rescue where necessary is such a critical aspect of the work. I see this in open source so often where there are people who are like; I made an open-source project; this is great. I hacked for a bunch of weekends, and look; I made a thing. And then the support burden builds up. And open source can be this wildly undervalued thing overall. And the maintenance of open source is even more so, and you have this asymmetry between the people that are using it and don't think that their voice is one of the thousands that are out there requesting a new feature or anything like that. The handful of people that I see out there in the world that come along later in the lifespan of an open-source project and just step in to do maintenance, my goodness, is that heroic work, just quiet, necessary heroic work. And what you're describing feels sort of similar but at the project level. And I don't know; I'm sort of like silent. I'm out loud on a podcast, not silently at all judging myself because I'm like, I feel like you're doing the thing over there. That seems like a good thing. But I also like my early projects... [laughs] STEPH: I think they're...I mean, we need each other. I need you to start the code, and the applications for them to then need some help down the road [laughs] to [crosstalk 24:30]. CHRIS: But I need to do a bad enough job that we have to be rescued by you. STEPH: [laughs] CHRIS: Hey, don't you worry, friend, I'm doing a terrible...no, I think I'm doing an okay [laughter] job. Hopefully, I'm avoiding those traps, but it's hard to know when you're writing legacy code, you know. STEPH: It is hard for the reasons we were talking about earlier. Like, those technical discussions build up, and then if you don't really have a space to then address it, then it just keeps getting sidelined until you suddenly get to this point of it's either we come to a grinding halt because we can't ship work, or we find ways to start bringing this into our process. And so that's the other part of the Rails Rescue projects is often looking at the team's process and figuring out, okay, instead of hiring consultants to come in and then try to help with this, how else can they also integrate this into their own project? So then, once thoughtbot leaves, they now have ownership of this, and they can carry it forward as well. There is an aspect of this work that I'm still working on, and it comes around to the definition of work because if you go into a team or a project that's like, hey, we really need help with X. We really need help with addressing all these errors. Or we really need help improving developer happiness or getting test coverage in place. Finding out exactly how you're going to tackle that, are you going to join a team of the other developers? Like, are you looking for more of a mentorship? Like, hey, we're going to work alongside your team to then mentor them to then bring this into their own process and their own habits, so then they feel empowered to address this in the future. Are we doing this more as a triage where then we have a specific goal or two that then we're going to meet? And then once we get stuff out of this on fire state, then maybe we start pairing with other people.
Or are we going to work closely with the people who are fighting fires with the bug reports and the errors? There are a bunch of different ways that you can tackle that. And I think it really helps define the success of that engagement and then your outcomes because otherwise, I feel like you can get distracted by so much. Because there's so much that's going to try to get your attention that you want to work on and fix. So you have to be very upfront about there are different areas that we can work on. Let's figure out some metrics together that we're really going after to then help define what does success look like for this first iteration of our work? And then what's the long-term plan for this work? Then how do we keep it going forward? How do we empower the team to keep this work going forward? And that's an area that I've learned just from trial and error from being part of these projects. And I'm very interested in still cultivating that skill and figuring out what's the area that we're focused on? CHRIS: There's something that you said in there that I want to hone in on, which is the idea of you've learned from going on so many of these different projects, and you're carrying forward ideas that you have. But I think more generally, there's something interesting in what you were just saying there around you've worked on a bunch of different projects at different organizations with certain things that they were great at, with certain things that they struggled with at different sizes. And you're able to bring all that experience to bear on each project. But I think also taking a step back, as you were describing, you're like, I think I've figured out what it is that I like and the type of projects that I want to do. I cannot say enough good things about working in a consultancy for a while because, my goodness, you get to try out a bunch of different stuff. And A, you get to learn a ton about how to do the work, and how to communicate, and different technologies and all of that. But you also get to figure out what it is that you might want to double down on and lean into in terms of the work. That's definitely a big part of my story. Seven years at thoughtbot, I tried a lot of different stuff, worked at a lot of different companies. And I would describe it as I found a lot of things that I didn't want. And then there's that handful of things that I really did want, and I was able to then more intentionally pursue that. So for anyone out there that's considering it, working at a consultancy is fantastic, or at least it has fantastic elements to it. It also can be complicated as you talk about finding organizations and having to, you know, if you're brought in for a certain job, but when you get there, you're like, "Ooh, I know you want me to fix bugs, but actually, I think I just need to work with your team because they're the ones writing the bugs. And why are they writing the bugs" "Well, because the salespeople are selling things, and then we have timelines." Like, we got to start at the very top of this whole pyramid and fix it. And so it can be very complicated. But there's so much that you can learn about yourself in the process, in the work, and I adored that portion of my career. STEPH: Yeah, I totally agree. Anytime someone mentions, they're like, "Oh, consultancy work. What's that like?" And I remember it was a couple of years ago I mentioned I was working for a consultancy, and they were like, "Oh, you must travel a lot." I was like, "No, [laughter] I stay put. 
I just work from an office in Boston." But I remember that caught me off guard because I hadn't considered that I was supposed to travel, but that makes sense that you think of consultants that travel. But when I meet people or talk to people, and they're like, "Oh, you've been at thoughtbot for five-plus years, and how's that going? And what's it like to be at a consultancy?" And exactly what you just said, it's the variety that I really like and getting to try on so many different hats and see how different teams and processes work and then identify like, oh, that worked really well for that team, or this isn't working well for that team. I have really enjoyed that. And it can be a roller coaster because you have to get really good at onboarding. You have to go through that initial phase of like; I swear I'm smart. I will get up to speed quickly, and I will learn things. But it's a period that you just have to go through with each team that you join, but you do it twice a year, maybe three times a year. And so you get comfortable with that over time. So there are definitely some challenges that then have to fit your personality and things that work for you and bring you joy. And I completely understand that it's not for everybody, just kind of I really enjoy product work, but I also really enjoy being able to move around to different teams and help folks. CHRIS: I love the idea that as a consultant, your job is to just walk through airports and high-five every Accenture billboard in it and just go up to the wall and pay your respects. But no, no, that is not our version of consulting. [laughs] STEPH: That's why I have so much time for The Bike Shed. It's because I'm just, you know, I'm in different airports high-fiving signs. And then this is my real job; Bike Shed is my real job. CHRIS: Oh, that would be fun. STEPH: [laughs] You know, I have such a fondness of Bike Shed that now something interesting has happened where someone was like, "Oh, you're bike-shedding." And they're not being mean, but they're just like, "Oh, we're totally bike-shedding," or "This is dissolving into bike-shedding." And I'm like, oh, bike-shedding, hooray. And I'm like, oh, wait, bad. [laughter] And I have to catch myself each time. CHRIS: Yeah, we've taken away a lot of the meaning. Well, I mean, have we or do we live up to it every single week? Who can say? But I, too, have a fondness for this phrase, perhaps not aligned with what it is actually meant to signify. STEPH: On a slightly different tech-related note, there is a gem that I'm really excited to check out. I saw it mentioned on the parallel_tests gem, which is what helps you run your tests in parallel, and it's what we're currently using. But you can group your tests in different ways. And right now, we're using the runtime strategy where essentially then we use the output from RSpec where we know how long each file took to run. And then parallel_tests will then use that data to then figure out, okay, how should I split up your test file? So then try to balance them as evenly as possible. We're at that point, though, where we've talked about tentpoles, so we have certain files that, say, take 10 minutes; other files will only take two minutes. And that balance is really throwing off our ability to then bring down the CI build time. So on parallel_tests, there's reference to another gem called parallel_split_test, where then you can run multiple test scenarios that are in one file but then split them out across different processes or different machines.
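For reference, here is a rough sketch of the setup Steph describes, pieced together from the parallel_tests and parallel_split_test READMEs rather than from her actual project, so the log path, process count, and the example spec file are illustrative assumptions:

  # .rspec_parallel: have RSpec record how long each file takes,
  # so parallel_tests can balance its groups by measured runtime
  --format progress
  --format ParallelTests::RSpec::RuntimeLogger
  --out tmp/parallel_runtime_rspec.log

  # Split the suite across processes, grouping files by recorded runtime
  bundle exec parallel_rspec --group-by runtime spec/

  # For a single slow "tentpole" file, parallel_split_test splits the
  # examples inside that one file across processes (spec path is hypothetical)
  PARALLEL_SPLIT_TEST_PROCESSES=4 bundle exec parallel_split_test spec/features/checkout_spec.rb

The catch Steph gets at is that runtime-based grouping can only balance whole files, so a single ten-minute file still sets the floor for the whole build.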
And that is exactly what I want in my life right now. I haven't checked it out yet, so I feel like I'm giving a daily sync update of like, I'm going to go off and explore this thing. I will report back and see how it goes. [laughs] In the past, I usually try to say, "I've tried this thing, and this is how it went," nope, opposite today. I am sharing the thing I'm going to try, and then hopefully, it goes well. CHRIS: Well, either way, we should definitely report back. That's the truth. I like that you're leading us into this and giving us a preview. But then yeah, we'll see where we get to. That does sound like the thing you want, though. So I hope it goes well. STEPH: Yeah, we've learned at this point where we are splitting work across different machines that until we address some of those tentpole concerns, adding more machines won't help us because then a machine's going to run as long as the longest file. So we've been doing some manual work to split up those files. That's not the best, but it does help you see some results. So then, at least you know you're making progress. So now we really need to find a way to automate that because we don't want someone to have to manually figure out where are the tentpoles, split those files up, commit that, and then keep track of, like, do we have another tentpole on the horizon? We really need a gem or something to help us automate that process. So yeah, I will be happy to report back. MIDROLL AD: And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free. STEPH: Pivoting just a bit, we have a listener question. This question comes from Steve Polito. And Steve wrote in, "Longtime listener, first-time thoughboter." Yay. Yay is my addition. Anything that goes up in voice is probably my addition, [laughs] just so people know. All right, back to what Steve said. "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec? Not only does Rails provide a fully functional test suite by default," looking at you Minitest, "but it's also well-documented and even provides the ability to run system tests. Rails is built on the principle of convention over configuration. And it seems odd to me that so many developers want to override such a fundamental piece of framework." Thanks in advance, [singing] Steve Polito. Steve, I hope it's okay I sang your name [laughs] because we're here now. That is an awesome question. I'm going to give what may be less of an awesome answer which is, well, one; Steve highlights that people will then replace Minitest with RSpec. 
I haven't done that. I haven't actually gone into a project and said, "Okay, we need to replace your test suite and bring in RSpec instead." But if I'm starting out a project, I do have a heavy preference for RSpec, and frankly, that's just from experience. Like, that's what I was raised on, to say it in that way. [laughs] RSpec is what I know; it's what I'm used to. It's what, even when I joined thoughtbot, was just the framework that we used for all of our testing and what we focused on so heavily. So frankly, for me, it's just a really strong bias. I know it's something that I'm really good at. I know it's something that works really well. I know it's well-documented. I know it's also very accessible for other people to use. But actually replacing it on a different project, I don't think I would do that. I'd have to have a really strong reason, or maybe if we haven't actually started testing anything yet, to then replace it because that feels a bit aggressive to me. But it just depends on the situation, I suppose. But yeah, overall, I just default to RSpec because that's what I'm accustomed to, and it's the testing framework that I know. CHRIS: Yeah, I think my answer is largely the same. It's the thing that I've worked with by far the most. Similarly, I've been on projects that were using Minitest, and therefore I used Minitest because it's definitely not worth the effort to switch. But in a lot of...well, I will say this, I've much less experience, and this may be less true over time. But there were many things that drew me to RSpec, and that continues to be interesting to me in the RSpec world. Even things as small as the assertion syntax, assert_equal is the method that's, you know, this is how you do an assertion in Minitest, and it's assert_equal expected, actual. That's the order of the arguments. It's expected first and then actual. That makes sense, probably with the expected, but I would get that wrong constantly. I do get that sort of thing wrong. They're just positional arguments that there's nothing about this that tells me which way to go. And so it's very easy to get failure messages that are inverted, and so it's just this tiny little thing. But with RSpec, we end up with expect and then in parentheses, the thing that we are expecting to equal the other thing, and it just reads a little more honestly. It fits within the Ruby mindset in my world. I want my code to be as expressive as possible, and Minitest feels much lower level to me. It feels more, you know, assert as a word is just...I'm not asserting. That just feels so formal. And so these are, again, to be clear, very, very small things, but they all add up. And there's a reason that we're using Ruby overall. And there's a reason that we're using Rails is this expressiveness is a big part of it for me, so I'll cling to that. I'll hold on to that as something that's true. Also, RSpec's mocking support, rspec-mocks as the library, I found to be really fantastic, and I've grown very comfortable working with it. And I know how and where to use that. I also have so much built-up knowledge, like the idea of when to use let and not use let in RSpec. It's just this deep thing that I know about. I'm sure there's an equivalent in the Minitest world, but I would have to have a different understanding in argument, and that conversation would just feel different. I think the other thing that's worth saying is this is a default for us at this point that I personally have not felt the need to reconsider.
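To make the assertion-syntax comparison concrete, here is the same check written both ways. This is a minimal, hypothetical example rather than code from either host's projects:

  # Minitest: positional arguments, expected value first, actual second;
  # nothing in the call itself stops you from transposing the two
  require "minitest/autorun"

  class FullNameTest < Minitest::Test
    def full_name(first, last)
      "#{first} #{last}"
    end

    def test_full_name
      assert_equal "Ada Lovelace", full_name("Ada", "Lovelace")
    end
  end

  # RSpec: the expectation reads left to right, expect(actual).to eq(expected)
  RSpec.describe "full_name" do
    def full_name(first, last)
      "#{first} #{last}"
    end

    it "joins first and last names" do
      expect(full_name("Ada", "Lovelace")).to eq("Ada Lovelace")
    end
  end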
When I've worked on projects that have used Minitest, I certainly wasn't called to it. I wasn't like, oh, this seems really interesting; I'm going to lean into this more. I was like, I miss RSpec. And some of that is, again, just familiarity. But at the end of the day, we only have so much time to do things. And so, I firmly stand by my not reconsidering my testing option at this point. Like, RSpec does the things that I want. It does it really well. Critically, I'm able to build a system and write a test suite and maintain that test suite over time and have it tell me the truth as to whether or not my application should be deployed to production. That is the measure. That's the thing that I care about. I think it's maybe a little bit slower than Minitest, but I'm fine with that. I have solutions to that problem. And the thing that I care about is when the test suite is green, do I feel confident deploying? RSpec has helped me for years on that journey. And I've never questioned whether or not I should go back to the drawing board and revisit that consideration. So initially, it was probably because it was the thing that we were all using, and then that is for me why it has stuck around. And I love RSpec. I think how many episodes have we just said, "Thanks, RSpec," as a little aside? So we do love it in a deep way. STEPH: Probably not enough episodes have we said that. [laughs] Yeah, I like what you said where you haven't felt the need to switch over or to move away from RSpec. And I wonder, looking back at some of the earlier projects that I joined that were using RSpec, I don't know if maybe they chose RSpec at that time because RSpec had more of those features built-in, and Minitest was still working on those. Maybe they were parallel at the time; I'm not sure. But I like what you said about you just haven't had a need to go back and change. At this point, if I switched over to Minitest, it would definitely be a learning curve for me, which is totally fine. But yeah, I'm just happy with it, so I stick with it. And I also appreciate that idea that, yeah, unless you're new in a project, I wouldn't encourage someone to then switch over to something else unless I feel like there's just a lot of pain for some reason with the current testing setup. There has to be a reason. There has to be a drive. It can't be just a personal bias of like, I know this thing, so I want to use it. There's got to be a better reason that benefits the whole team versus just a personal preference. But overall, I think it comes down to for us; it's just a choice because it's the familiar choice. It's the one that we know. But I think Minitest and RSpec are both so widely supported. I was thinking about that convention over configuration. And yes, Rails ships with Minitest, but RSpec is so common that I don't feel like I'm breaking convention at that point. They're both so widely supported and used that I feel very comfortable going with either option. And then it's just my personal preference for RSpec. So thanks, Steve, for sending in that question. And for anyone else that has a question that you would love to share with Chris and I, you can reach us in a couple of different ways. You can reach us on Twitter via @_bikeshed. You can also go to the website, bikeshed.fm/content. We will drop some links in the show notes. But if you go there, then you can send a question or also email us directly at hosts@bikeshed.fm. 
And we're running a little low on listener questions, so we would love to have a listener question from you. And we would love to talk about anything that y'all want to talk about, okay, within reason, you know, triathlons, Brussels sprouts, things like that. All of that falls within the wheelhouse. CHRIS: Normal stuff. STEPH: Normal stuff, yeah. CHRIS: And to be clear, despite the fact that Steve did recently become a thoughtboter, you don't have to be a thoughtboter to send in a listener question. [laughs] In fact, it's much more common to not be a thoughtboter when sending in a listener question. But we'll take them from anybody. We're happy to chat with you. STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
333: Tapas

The Bike Shed

Play Episode Listen Later Apr 12, 2022 41:53


Being pregnant is hard, but this tapas episode is good! Steph discovered and used a #yelling Slack channel and attended a remote magic show. Chris touches on TypeScript design decisions and edge cases. Then they answer a question captured from a client Slack channel regarding a debate about whether I18n should be used in tests and whether tests should break when localized text changes. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Emma Bostian (https://twitter.com/EmmaBostian) Ladybug Podcast (https://www.ladybug.dev/) Gerrit (https://www.gerritcodereview.com/) Gregg Tobo the Magician (https://astonishingproductions.com/) Sean Wang - swyx - better twitter search (https://twitter.com/swyx/status/1328086859356913664) Twemex (https://twemex.app/) GitHub Pull Request File Tree Beta (https://github.blog/changelog/2022-03-16-pull-request-file-tree-beta/) Sam Zimmerman - CEO of Sagewell Financial on Giant Robots (https://www.giantrobots.fm/414) TypeScript 4.1 feature (https://devblogs.microsoft.com/typescript/announcing-typescript-4-1/) The Bike Shed: 269: Things are Knowable (Gary Bernhardt) (https://www.bikeshed.fm/269) TSConfig Reference - Docs on every TSConfig option (https://www.typescriptlang.org/tsconfig#noUncheckedIndexedAccess) Rails I18n (https://guides.rubyonrails.org/i18n.html) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. There are a couple of new things in my world, so one of them that I wanted to talk about is the fact that being pregnant is hard. I feel like this is probably a known thing, but I feel like I don't hear it talked about as much as I'd really like, especially in sort of like a professional context. And so I just wanted to share for anyone else that may be listening, if you're also pregnant, this is hard. And I also really appreciate my team. Going through the first trimester is typically where you experience a lot of morning sickness and fatigue, and I had all of that. And so I was at the point that most of my days, I didn't even start till about noon and even some days, starting at noon was a struggle. And thankfully, the thoughtbot client that I'm working with most of the teams are on West Coast hours, so that worked out pretty well. But I even shared a post internally and was like, "Hey, I'm not doing great in the mornings. And so I really can't facilitate any morning meetings. I can't be part of some of the hiring intros that we do," because we like to have a team lead provide a welcoming and then closing for anyone that's coming for interview day. I couldn't do those, and those normally happen around 9:00 a.m. for Eastern Time. And everybody was super supportive of it. So I really appreciate all of thoughtbot and my managers and team being so great about this. Also, the client team they're wonderful. It turns out growing a little human; I'm learning how hard it is and working full time. 
It's an interesting challenge. Oh, and as part of that appreciation because…so there's just not a lot of women that I've worked with. This may be one of those symptoms of being in tech where one, I haven't worked with tons of women, and then two, working with a woman who is also pregnant and going through that as well. So it's been a little bit isolating in that experience. But there is someone that I follow on Twitter, @EmmaBostian. She's also one of the co-hosts for the Ladybug Podcast. And she has been just sharing some of her, like, I am two months sleep deprived. She's had her baby now, and she is sharing some of that journey. And I really appreciate people who just share that journey and what they're going through because then it helps normalize it for me in terms of what I'm feeling. I hope this helps normalize it for anybody else that might be listening too. CHRIS: I certainly can't speak to the specifics of being pregnant. But I do think it's wonderful for you to use this space that we have here to try and forward that along and say what your experience is like and share that with folks and hopefully make it a little bit better for everyone else out there. Also, you snuck in a sneaky pro-tip there, which is work on the East Coast and have a West Coast team. That just sounds like the obvious correct way to go about this. STEPH: That has worked out really well and been very helpful for me. I'm already not a great morning person; I've tried. I've really strived at times to be a morning person because I just have this idea in my head morning people get more stuff done. I don't think that's true, but I just have that idea. And I'm not the world's best morning person, so it has worked out for many reasons but yeah, especially in helping me get through that first trimester and also just supporting family and other things that are going on. Oh, I also learned a pro-tip about Twitter. This is going to seem totally random, but it was relevant when I was searching for stuff on Twitter [laughs] that was related to tech and pregnancy. But I learned...because I wanted to be able to search for something that someone that I follow what they said but I couldn't remember who said it. And so I found that in the search bar, I can add filter:follows. So you can have your search term like if you're looking for cake or pregnancy, or sleep-deprived and then look for filter:follows, and then that will filter the search results to everybody that you follow. I imagine that that probably works for followers too, but I haven't tried it. CHRIS: I like the left turn you took us on there but still keeping it connected. On the topic of Twitter search, they apparently have a very powerful search, but it's also hidden, and you got to know the specific syntax and whatnot. But there is a wonderful project by Shawn Wang, AKA Swyx, on the internet, bettertwitter.netlify.com is the URL for it. I will share a link to his tweet introducing it. But it's a really wonderful tool that just provides a UI for all of these different filters and configurations. And both make discoverability that much better and then also make it easy to just compose one of these searches and use that. The other thing that I'll recommend is, I think it's a Chrome plugin. I'm guessing is what I'm working with here like a browser extension, but it's called Twemex, T-W-E-M-E-X. And there's a sidebar in Twitter now, which just seems wonderful and useful.
So as I'm looking at a Swyx post here, or a tweet as they're called on Twitter because I know that vernacular, there's a sidebar which is specific to Shawn Wang. And there's a search at the top so I can search within it. But it's just finding their most popular tweets and putting that on a sidebar. It's a very useful contextual addition to Twitter that I found just awesome. So that combination of things has made my Twitter experience much better. So yeah, we'll have show notes for both of those as well. STEPH: Nice. I did not know about those. This may cause someone to laugh at me because maybe it's easier than I think. But I can never remember that advanced search that Twitter does offer; I have to search it every time. I just go to Google, and I'm like, advanced Twitter search, and then it brings up a site for me, and then I use that as the one that Twitter does provide. But yeah, from the normal UI, I don't know how to get there. Maybe I haven't tried hard enough. Maybe it's hidden. CHRIS: It's like they're hiding it. STEPH: Yeah, one of those. [laughs] CHRIS: It's very costly. They have to like MapReduce the entire internet in order to make that search work. So they're like, well, what if we hide it because it's like 50 cents per query? And so maybe we shouldn't promote this too much. STEPH: [laughs] CHRIS: And let's just live in the moment, everybody. Let's just swim in the Twitter stream rather than look back at the history. I make guesses about the universe now. STEPH: [laughs] On a different note, I also discovered at thoughtbot in our variety of Slack channels that we have a yelling channel, and I had not used it before. I had not hung out there before. It's a delightful channel. It's a place that you just go, and you type in all caps. You can yell about anything that you would like to. And I specifically needed to yell about Gerrit, which is the replacement or the alternative that we're using for GitHub or GitLab, or Bitbucket, or any of those services. So we're using Gerrit, and I've been working to feel comfortable with the UI and then be able to review CRs and things like that. My vernacular is also changing because my team refers to them as change requests instead of pull requests. So I'm floating back and forth between CRs and PRs. And because I'm in Gerrit world, I missed some of the updates that GitHub made to their pull request review screen. And so then I happened to hop in GitHub one day, and I saw it, and I was like, what is this? So that was novel. But going back to yelling, I needed to yell about Gerrit because I have not found a way to collaborate with someone who has already pushed up changes. I have found ways that I can pull their changes which then took a little while. I found it in a sneaky little tab called download. I didn't expect it to be there. But then the actual snippet it's like, run this in your terminal, and this is then how you pull down the changes. And I'm like, okay, so I did that. But I can't push to their existing changes because then I get like, well, you're not the owner, so we're going block you, which is like, cool, cool, cool. Okay, I kind of get that because you don't want me messing up somebody else's content or something that they've done. But I really, really, really want to collaborate with this person, and we're trying to do something together, and you're blocking me. And so I had to go to the yelling channel, and I felt better. And I'm yelling again. 
[laughs] Maybe I don't feel that great because I'm getting angry again talking about it. CHRIS: You vented a little into the yelling channel; maybe not everything, though. STEPH: Yeah, I still have more to vent because it's made life hard. Every time I wanted to push up a change or pull down someone else's changes, there are now all these CRs that then I just have to go and abandon, which is then the terminology for then essentially closing it and ignoring it, so I'm constantly going through. And if I do want to pull in changes or collaborate, then there's a flow of either where I abandon mine, or I pull in their changes, but then I have to squash everything because if you push up multiple commits to Gerrit, it's going to split those commits into different CRs, don't like that. So there are a couple of things that have been pain points. And yeah, so plus-one for yelling channels, let people get it out. CHRIS: Okay, so definitely some feelings that you are working through here. I'm happy to work together as a team to get through some of them. One thing that I want to touch on is you very quickly hinted at GitHub has got a bunch of new things that are cool. I want to talk about those. But I want to touch [laughs] on an anecdote. You talked about pushing something up to someone else's branch. You're like, oh, you know, I made some changes locally, and I'm going to push them up. I had an interesting experience once where I was interacting with another developer. I had done some code review. They weren't quite understanding where I was. They had a lot of questions. And finally, I said, you know what? This will just be easier. Here, I pushed up a commit to your branch, so now you can see what I'm talking about. And I thought of this as a very innocuous act, but it was not interpreted that way. That individual interpreted it in a very aggressive sort of; it was not taken well. And I think part of that was related to I think of Git commits as just these little ephemeral things where you're like, throw it out, feel free. This is just the easiest way for me to communicate this change in the context of the work that you're doing. I thought I was doing a nice favor thing here. That was not how it went. We had a good conversation after I got to the heart of where we both were emotionally on this thing. It was interesting. The interaction of emotion and tech is always interesting. But as a result, I'm very, very careful with that now. I do think it's a great way as long as I've gotten buy-in from the person beforehand. But I will always spot check and be like, "Hey, just to confirm, I can just push up a commit to your branch, but are you okay with that? Is that fine with you?" So I've become very cautious with that. STEPH: Yeah, that feels like one of those painful moments where it highlights that the people that you work with that you are accustomed to having a certain level of trust or default trust with those individuals, and then working with someone else that they don't have that where the cup is half-full in terms of that trust, or that this person means well kind of feelings towards a colleague or towards someone that they're working with. So it totally makes sense that it's always good to check and just to be like, "Hey, I'd love to push up some changes to your branch. Is that cool?" And then once you've established that, then that just makes it easier. But I do remember that happening, and yeah, that was a bit painful and shocking because we didn't see that coming and then learned from it. 
CHRIS: I do think it's an important thing to learn, though, because for me, in that moment, this was this throwaway operation that I thought almost nothing of, but then another individual interpreted it in a very different way. And that can happen, that can happen across tons of different things. And I don't even want to live in the idealized world where it's just tech; we're just pushing around zeros and ones; there's no human to this. But no, I actually believe it's a deeply human thing that we're doing here. It's our job to teach the computers to be a little closer to us humans or something like that. And so it was a really pointed clarification of that for me where it was this thing that I didn't even think once about, no less twice, and yet someone else interpreted it in such a different way. So it was a useful learning situation for me. STEPH: Yeah, I totally agree. I think that's a really wise default to have to check in with people before assuming that they'll be comfortable with something that we're comfortable with. CHRIS: Indeed. But shifting back to what you mentioned of GitHub, a bunch of new stuff came in GitHub, and you were super excited about it. And then you went on to say other things about another system. [laughs] But let's talk about the great things in GitHub. What are the particular ones that have caught your eye? I've seen some, but I'm intrigued. Let's compare notes. STEPH: So this is one of those where I hadn't seen GitHub in quite a while, and then I hopped in, and I was like, this is different. But some of the things that did stand out to me right away is that on the left-hand side, I can see all of the files that have been changed, and so that's a really nice tree where I just then immediately know. Because that was one of the things that I often did going to a PR is that I would see what files are involved in this change because it was just a nice overview of what part of the applications am I walking through? Are there tests for this? Have they altered or added tests? And so I really like that about it. I'm sure there's other stuff. But that is the main thing that stood out to me. How about you? CHRIS: Yeah, that sidebar file tree is very, very nice, which I find surprising because I don't use a file tree in my editor. I only do fuzzy finding to jump to files. But I think there's something about whenever GitHub had the file list; these are all the files that are changed. I'm like, this is just noise. I can't look at this and get anything out of it. But the file tree is so much more...there's a shape to it that my brain can sort of pattern match on. And it's just a much more discoverable way to observe that information. So I've really loved that. That was a wonderful one. The other one that I was surprised by is GitHub semantic code analysis; stuff has gotten much, much better over time subtly. I didn't even notice this happening. But I was discussing something with someone today, and we were looking at it on GitHub, and I just happened to click on an identifier, and it popped up a little thing that says, "Oh, do you want to hop to the references or the definition of this?" I was like, that is what I want to do. And so I hopped to the definition, hopped to the definition of another thing, and was just jumping around in the code in a way that I didn't know was available. So that was really neat. 
But then also, I was in a pull request at one point, and someone was writing a spec, and they had introduced a helper just like stub something at the bottom of a given spec file. And it's like, I feel like we have this one already. And I just clicked on the identifier. I think it might have actually been a matcher in RSpec, so it was like, have_alert. And I was like, oh, I feel like we have this one, a matcher specific to flash message alerts on the page. And I clicked on it, and GitHub provided me a nice little inline dialog that showed me all of the definitions of have_alert, which I think we were up to like four of them at that point. So it had been copied and pasted across a couple of different files, which I think is totally fine and a great way to start, but they were very similar implementations. I was like, oh, looks like we actually already have this in a couple of places, maybe we clean it up and extract it to a common spec support thing, and ta-da, I was able to do all of that from the GitHub pull requests UI. And I was like, this is awesome. So kudos to the GitHub team for doing some nifty stuff. Also, can I get into the merge queue? Thank you. ... STEPH: [laughs] There it is. That is very cool. I didn't know I could do that from the pull request screen. I've seen it where if I'm browsing code that, then I can see a snippet of where everything's defined and then go there, but I hadn't seen that from the pull request. I did find the changelogs for GitHub that talk about the introduction of having the tree, so we'll be sure to include a link in the show notes for that too. But yes, thank you for letting me use our podcast as a yelling channel. It's been delightful. [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Well, speaking of podcasts, actually, there was an interesting thing that happened where the CEO of Sagewell Financial, the company of which I am the CTO of, Sam Zimmerman is his name, and he went on the Giant Robots Podcast with Chad a couple of weeks ago. So that is now available. We'll link to that in the show notes. I'll be honest; it was a very interesting experience for me. I listened to portions of it. If we're being honest, I searched for my name in the transcript, and it showed up, and I was like, okay, that's cool. And it was interesting to hear two different individuals that I've worked with either in the past or currently talking about it.
But then also, for anyone that's been interested in what I'm building over at Sagewell Financial and wants to hear it from someone who can probably do a much better job of pitching and describing the problem space that we're working in, and all of the fun challenges that we have, and that we're hopefully living up to and building something very interesting, I think Sam does a really fantastic job of that. That's the reason I'm at the company, frankly. So yeah, if anyone wants to hear a little bit more about that, that is a very interesting episode. It was a little weird for me to listen to personally, but I think everybody else will probably have a normal experience listening to it because they're not the CTO of the company. So that's one thing. But moving on, I feel like today's going to be a grab bag episode or tapas episode, lots of small plates, as we were discussing as we were prepping for this episode. But to share one little thing that happened, I've been a little more removed from the code of late, something that we've talked about on and off in previous episodes. Thankfully, I have a wonderful team that's doing an absolutely fantastic job moving very rapidly through features and bug fixes and all those sorts of things. But also, I'm just not as involved even in code review at this point. And so I saw one that snuck through today that, I'm going to be honest, I had an emotional reaction to. I've talked myself down; we're fine now. But the team collectively made the decision to move from a line length of 80 characters to a line length of 120 characters, and I had some feelings. STEPH: Did you fire everybody? [laughs] CHRIS: No. I immediately said, doesn't really matter. This is the whole conversation around auto-formatting tools is like we're just taking the decision away. I personally am a fan of the smaller line length because I like to have multiple files open left to right. That is my reason for it, but that's my reason. A collective of the developers that are frankly working more in the code than I am at this point decided this was meaningful. It was a thing that we could automate. I think that we can, you know, it's not a thing that we have to manage. So I was like, cool. There we go. The one thing that I did follow up on I was like, okay; y'all snuck this one in, it's fine, I'm fine with it. I feel fine; everything's fine. But let's add that to the git-blame-ignore-revs file, which is a useful thing to know about. Because otherwise, we have a handful of different changes like this where we upgrade Prettier, and suddenly, the manner in which it formats the files changes, so we have to reformat everything at once. And this magical file that exists in Git to say, "Hey, ignore this revision because it is not relevant to the semantic history of the app," and so it also takes that decision out of the consideration like yeah, should we reformat or not? Because then it'll be noisy. That magical file takes that decision away, and so I love that. STEPH: I so love the idea because you took vacation recently twice. So I love the idea of there was a little coup and people are applauding, and they're like, while Chris is on vacation, we're going to merge this change [laughs] that changes the character line. And yeah, that brings me joy. Well, I'm glad you're working through it. Sounds like we're both working through some hard emotional stuff. [laughs] CHRIS: Life's tricky, is all I'm going to say. STEPH: I am curious, what prompted the 80 characters versus 120? 
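For reference on the ignore-revs file Chris mentioned a moment ago, here is a minimal sketch; the file name follows the common convention, and the commit SHA is a placeholder:

    # .git-blame-ignore-revs -- list one full commit SHA per line
    # Reformat: bump Prettier line length from 80 to 120
    a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0

    # Point git blame at the file locally:
    git config blame.ignoreRevsFile .git-blame-ignore-revs

GitHub's blame view also picks up a file with this exact name at the root of the repository, so the reformatting commit drops out of blame output there as well.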
This is one of those areas that's like, yeah, I have my default preference like you said. But I'm more intrigued just when people are interested in changing it and what goes with it. So do you remember one of the reasons that 120 just suited their preferences better? CHRIS: Frankly, again, I was not super involved in the discussion or what led them to it. STEPH: [laughs] CHRIS: My guess is 120 is used...I think 80 is a pretty common one. I think 120 is another of the common ones. So I think it's just a thing that exists out there in the mindshare. But also, my guess is they made the switch to 120 and then reformatted a few files that had like, ah, this is like 85 characters, and that's annoying. What does it look like if we bump it up? And so 120 provided a meaningful change of like, this is a thing that splits to four lines if we have an 80 character thing, or it's one line if it's 120 characters, which is a surprising thing to say, but that's actually the way it plays out in certain cases because the way Prettier will break lines isn't just put stuff on the next line always. It's got to break across multiple lines, actually. All right, now that we're back in the opinion space, I have a strong one. STEPH: This is The Bike Shed. We can live up to that name. [laughs] CHRIS: So I do want an additional configuration in Prettier Ruby. This is the thing I'll say. Maybe I can chase down Kevin Newton and see if he's open to this. But when Prettier does break a method call with arguments going into it but no parens on that method call, and it breaks out to multiple lines, it does the dangling indent thing, which I do not like. I find it distasteful; I find it noisy, the shape of the code. I'm a big fan of the squint test. I know that from Sandi Metz, I believe, or maybe it's Avdi Grimm. I associate it with both of them in my mind. But it's just a way to look at the code and kind of squint, and you see the shape of it, and it tells you something. And when the lines break in that weird way, and you have these arbitrary dangling indents, the shape of the code is broken up. And I don't feel so strongly. I actually regularly stop myself from commenting on pull requests on this because it's very easy. All you need to do is add explicit parens, and then Prettier will wrap the line in what I believe is a much more aesthetically pleasing, concise, consistent, lots of other good adjectives here that are definitely just my preferences and not facts about the world. But so what I want is, Prettier, hey, if you're going to break this line across multiple lines, insert the parens. Parens are no longer optional for breaking across multiple lines; parens are only optional within a given line. So if we're not breaking across lines, I want that configuration because this is now one of those things where I could comment on this. And if they added the optional parens, then Prettier would reform it in a different way. And I want my auto formatter don't give me ways to do stuff. Like, constrain me more but also within the constraints of the preferences that I have, please, thank you. STEPH: I love all the varying levels there [laughter] of you want a thing, but you know it's also very personal to you and how you're walking that line and hopping back and forth on each side. I also love the idea. We have the idea of clean code. I really want something that's called distasteful code now [laughs] where you just give examples of distasteful code, yes.
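To picture the two shapes Chris is describing, here is a rough sketch; the method and argument names are invented, and the exact output depends on the formatter version:

    # Without parens, the continuation lines get a dangling indent:
    create_notification :welcome, user,
                        deliver_at: tomorrow,
                        channel: :email

    # With explicit parens, the call wraps as one consistent block:
    create_notification(
      :welcome,
      user,
      deliver_at: tomorrow,
      channel: :email
    )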
Well, I wish you good luck in your journey [laughs] and how this goes and how you continue to battle. I also appreciate that you mentioned when you're reviewing code how you know it's something that you really want, but you will refrain from commenting on that. I just appreciate when people have that filter to recognize, like, is this valuable? Is it important? Or, like you said, how can we just make this more of the default so then we don't even have to talk about it? And then lean into whatever the default the team goes with. CHRIS: Well, thank you. I very much appreciate that because, frankly, it's been very difficult. STEPH: I do have something I want to yell about but in a very positive way or pranting as we determined or, you know, raving, the actual real term that wonderful listeners pointed out to us. CHRIS: Prant for life. That's my stance. STEPH: We had a magic show at thoughtbot. It was all remote, but the wonderful Gregg Tobo, the magician, performed a magic show for us where we all showed up on Zoom. And it was interactive, and it was delightful, and it was so much fun. And so if you need something fun for your team that you just want to bring folks together, highly recommend. I had no idea I was going to enjoy a magic show this much, but it was a lot of fun. So I'll be sure to include some links in the show notes in case that interests anyone. But yeah, magic. I'm doing jazz hands. People can't see it, but magic. I like how you referred earlier, saying that today is more of like a tapas episode. And I'm realizing that all of my tapas are related to being pregnant, yelling, and magic shows, and I'm okay with that. [laughs] But on that note, what else is on your tapas plate? CHRIS: Actually, a nice positive one that came into the world...I always like when we get those. So this is interesting because I was actually looking back at the history, and I had Gary Bernhardt on The Bike Shed back in Episode 269. We'll include a link in the show notes. But we talked a bunch about various things, including TypeScript. And I was lamenting what I saw as a pretty big edge case in TypeScript. So the goal of TypeScript is like, all right, JavaScript exists, this is true. What can we do on top of that? Let's not fundamentally change it, but let's build a type system on top of it and try and make it so that we can enforce correctness but understand that JavaScript is a highly dynamic language and that we don't want to overconstrain and that we've got to meet it where it is. And so one of the design decisions early on with TypeScript is if you have an array and you say like it's an array of integers, so you have typed that array to be this is an array of int, or it will be an array of number in JavaScript because JavaScript doesn't have integers; they only have numbers. Cool. [laughs] Setting aside other JavaScript variables here, you have an array of numbers. And so if you use element access to say, like, say the name of array is array of nums and then use brackets and you say zero, so get me the first element of that array. TypeScript will infer the type of that to be a number. Of course, it's a number, right? You got an array of numbers, you take a number out of it, of course, you're going to have a number, except you know what's also an array of numbers? An empty array. Well, of course. So there's no way for TypeScript because that's a runtime thing, whether or not the array is full of things or not. Or imagine you get the third element from the array. 
Well, JavaScript will either return you the third element, which indeed is a number, or undefined because there's no third element in this array. So that is an unfortunate but very understandable edge case that TypeScript was like, listen, this is how JavaScript works. So we're not going to…frankly, we don't think the people embracing TypeScript and bringing it into their world would accept this amount of noise because this is everywhere. Anytime you interact with an array, you are going to run into this, this sort of uncertainty of did I actually get the thing? And it's like, yeah, no, I know how many things are in the array that I'm working with. Spoiler, you maybe don't is the answer. And so, we ran into this edge case in our codebase. We were accessing an element, but TypeScript was telling us, "Yes, definitively, you have an object of that type because you just got it out of an array, which is an array of that type." But we did not; we had undefined. And so we had, you know blah is not a method on undefined or whatever that classic JavaScript runtime error is. And I was like, well, that's very sad. But now we get to the fun part of the story, TypeScript, as of version 4.1, which came out like the week that I recorded with Gary Bernhardt, which was interesting to look at the timeline here. TypeScript has added a new configuration. So a new strictness dial that you can configure in your tsconfig called noUncheckedIndexedAccess. So if you have an array and you are getting an element out of it by index, TypeScript will say, "Hey, you got to check if that's undefined," because to be clear, very much could be undefined. And I was so happy to find this. We turned it on in our codebase. It found the error in the place that we actually had an error and then found a few others that I think probably had errored at some times. But it was just one of those for me very nice things to be able to dial up the strictness and enforce correctness within our codebase, and so I was very happy about it. Other folks may say that seems like too much work. And, you know, I get that, I get that take. I'm definitely on the side of I'm willing to go through the effort to have enforced correctness, but you know, that's a choice. STEPH: Yeah, that's thoughtful. I like that, how you said you can dial up the strictness so then as you are introducing TypeScript, then people have that option. There is an argument there in the back of my head that's like, well, if you're introducing types, then you want to start more strict because then you're just creating problems for yourself down the road. But I also understand that that can make things very difficult to then introduce it to teams in existing codebases. So that seems like a really nice addition where then people can say, "Yeah, no, I really want the strictness. This is why I'm here," and then they can turn that on. CHRIS: So TypeScript in the configuration has strict mode, so you say strict true. And that is a moving target with each new version of TypeScript. But it's their sort of [inaudible 28:14] set of things that are part of strict, but apparently, this one's not in it. So now I'm like, wait, can I have a stricter? Can I have a strictest option? Can I have dial it to 11, please? [laughs] Really rough me up and make sure my code is correct. But it is the sort of thing like when we turn any of these on; it will find things in our codebase. Some of them, we have to appease the compiler even though we know the code to be correct. 
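A minimal sketch of what that flag changes; the variable names here are invented for illustration:

    const nums: number[] = [];

    // With noUncheckedIndexedAccess on, element access is typed
    // `number | undefined` instead of plain `number`:
    const first = nums[0];

    // ...so the compiler forces a check before the value is used:
    if (first !== undefined) {
      console.log(first.toFixed(2));
    }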
But the code is not provably correct as it sits in our file. So I am, again, happy to make that exchange. And I like that TypeScript as a project gives us configurability. But again, I am on team where's the strictest button? I would like to push that as hard as I can and live that life. STEPH: Yeah, I like that phrasing that you just said about provably correct. That's nice. CHRIS: That's the world I want to live in, everything you own in the box to the left, which is probably correct. STEPH: [laughs] That's how that song goes. CHRIS: Yeah. This is a reference to move errors to the left, which I think I've referenced before. But now that I'm just referencing Beyoncé and not the actual article, it's probably worth referencing the article, but the idea of, like, if a user hits an error, that's not great. So let's move it back to QA, that's a little further to the left in sort of the timeline. But what if we could move it to an automated test in CI? But what if we could move it into your editor? What if we could move it even further to the left? And so, a type system tends to be sort of very far ratcheted up to the left. It's as early as possible that you can catch these. So again, to reference Beyoncé, everything you own in a box to the left. STEPH: [singing] Everything you own in the box to the left. CHRIS: Thank you for doing the needful work there. STEPH: [laughs] Mid-roll Ad And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free that's studio3t.com/free. STEPH: I have a question for you that I'd really love to get your opinion on because I myself I'm waffling back and forth where someone brought up some really great points about a concern or just a question they had brought up around testing and i18n specifically. And I agree with the things that they're saying, but yet, there's also a part of me that doesn't, and so I'm Stephanie divided. And so, I'm trying to figure out where I stand on this. So let me dive in and give you some context; I'm going to share the statement/question that they had asked. So here we go. "One of my priorities has been I should be able to review a test without having to reference any other code. References to i18n means that I have to go over to YAML and make sure the right keys have the right values, and that seems error-prone. In some cases, a lack of a hit in the YAML defers to defaults. If the intent is to override the name of model attribute and error messages and it is coded incorrectly, the code fails silently without translating and uses the humanized attribute name, and that would go undetected. 
If libraries change structure, it might also fail silently as well, so to me, the only failsafe way is to be fully explicit in test." So this goes with the idea that if you're writing tests and then you're testing text, but it's on the screen or perhaps an email, that you're actually going to assert against that string that is shown to the user instead of referencing the i18n keys. And then that also backs up this person's idea that you really want to not have to jump around. If you're reading a test, everything you really need to know about that test should live very close by. And I really agree with that initial statement; I want everything that's very close to the test, especially if it's anywhere in that expectation line, I really want it close, so I can understand what's the expectation, what's under test, what are the inputs, what's the expected outcome. So I wholeheartedly support that idea. But yet, I am in the camp that I then will use YAML keys instead of providing that exact string because I do look at i18n as a helpful abstraction, and I want to trust that i18n is doing its job. And so that way, I don't have to provide that string that's there because then we're also choosing, okay, well, which language are we going to always use for our test? So this is the part where I feel divided. So I'm going to walk you through some of the reasons that I really support this idea and other reasons that I still use the i18n keys and then get your take on it. So there is a part of me that when I'm using the i18n YAML keys, it does make me sad because it reduces the readability in tests. Sometimes the keys are really well named where maybe it's a mailer.welcome_message. And I'm like, okay, I understand the gist. I don't need to go see the actual string. I also think they highlighted a really good use case where if you're overriding behavior and it could default to something else, your test is still going to pass, and you don't actually know. So I could see the use case there where if you are overriding, then you want to be explicit about the string that you expect back. I also think there are some i18n messages that are fairly complex, and where then I really would like to see the string. So if you are formatting a date or a time or you're passing in just a lot of variables, then there's a chance that I do want to see how did that actually get generated for the person who's going to be reading it versus just maybe it's garbage text that came out? And I want to validate that the message that we think we're crafting is actually the one that the user is going to see. The case against actually being explicit, my biggest one is because then I do see i18n as a helpful abstraction. And I want to trust this abstraction that it's doing its job and it's doing it well. Because then if I do use explicit strings, it makes me sad if I change text from like hello to welcome, and now I have a failing test. I don't like that idea either. So I'm torn between these two worlds of it is very nice to have everything that you need in a test to be able to understand what is the expectation, but then I also lean into this abstraction and reference the i18n keys. So, Chris, with all of that, that was a bit of a whirlwind, [laughs] what are your thoughts? How do you test this stuff? CHRIS: Honestly, I'm surprised that you've got that much division in your own answer because for me, this is very obvious there's one...no, I'm kidding. This is obviously complicated.
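To ground the debate, here is a sketch of the two styles in a Capybara-style RSpec assertion; the key and the copy are invented:

    # Explicit string: fails when the copy changes, but reads on its own.
    expect(page).to have_content("Welcome aboard!")

    # i18n key: survives copy changes, but you have to check the YAML
    # to know what the user actually sees.
    expect(page).to have_content(I18n.t("mailer.welcome_message"))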
Similar to you, I think I'm going to have to give a grab bag of answers because I don't have a singular thought of like it is concretely this or that. I tend to go for explicit strings and tests all the way to...so like the readability of a test, and the conciseness of a test is interesting. I will often see developers extract. Say they're creating a user with a specific email, and then they log in with that email later, and then they expect something else. And so the email is referenced a few times, and they'll extract that into a variable called email. And I personally will tend to not do that. I will inline the literal string like user@example.com, and I'll do it in a few places. And I'm fine with that duplication because I like the readability of any given line that you're reading. So I will make that trade-off within tests. This is the thing I think we've talked about before, but the idea of DRY in tests is like I want to be careful applying that idea, Don't Repeat Yourself, to break apart the acronym. Those abstractions I will use them less than tests. And so I want the explicitness, I want the readability, I want to tell a little story, all of that feels true. That said, to flip it around, one of the things that I'm hearing...so I think I'm hearing a part of this that is around well, we can fail silently because we fail symmetrically in both the implementation and our test. Then an assertion may actually match even though it's matching on a fallback. I think that's a configurable thing. I would actually want my test to raise if I'm referencing an i18n key that is not defined. Now, granted, that's different for languages. And maybe this becomes a more complex story of like in production; in a different locale, it will fail because we don't have 100% parity across all our locale files. But fundamentally, I want to make sure that at least exists in our base, which I think typically would be en-US as the locale. I want to make sure all keys are looked up and found, and it's an error otherwise in our test. So that's a feeling. But am I misunderstanding that part of the story or how that configuration typically works? STEPH: No, I think you've got it. But just to make sure we're on the same page, so if you reference a key that doesn't exist, then it is going to fail. So at least you have your test failure is going to let you know that you've referenced something that doesn't exist. But if you are referencing, like if you want to override the defaults that Rails or i18n has provided for a model and say for an error message, if you reference that, but you want to override it, but then you've forgotten, that does exist. So you're not going to get the failure; you're going to get a different message. So it's probably not a terrible experience for the user. It's not going to crash. They're going to see something, but they're not going to see the custom message that you intended them to see. CHRIS: Gotcha. Okay, well, just to name it, the thing that I was describing, I don't know that that would be the configuration for every system. So I would strongly encourage any system where i18n just has a singular behavior which is we fall back to the key. I want my test to absolutely tell me if that's happening. And that should be a failure of the test. But to the discoverability documentation bit, I do wonder if tooling can actually help answer the question. 
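For the raising behavior Chris describes wanting, Rails does expose a switch; a minimal sketch, assuming a reasonably recent Rails version (in older versions the equivalent option lived under config.action_view):

    # config/environments/test.rb
    Rails.application.configure do
      # Raise instead of silently falling back when a translation is missing.
      config.i18n.raise_on_missing_translations = true
    end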
And as I was describing the wonderful experience I had on GitHub the other day, viewing code as just static characters in a file is both true and also, I think increasingly, a limited view of it. We have editors, and we have code hosting tools that can understand semantically our code a little bit better. There's got to be like 20 Different VS Code plugins that, when you hover on an i18n reference, it will do the lookup for you. That feels like a thing that exists, and if it doesn't, well, now I've nerd-sniped myself, and I got a weekend project. JK, I'm definitely not building that this weekend. But that feels like can we use that to solve this? Maybe not. But that's just another thought of where we have these limitations where it's static, like those abstractions can be useful. But if we can very quickly dereference them, then the cost of the abstraction or that separation becomes smaller, and so the pain is reduced. And I wonder if that's a way to sort of offset it. STEPH: If I can poke at that a little bit more, because I think you're touching on something that I haven't expressed or thought through explicitly, but it's the idea of, like, why do I like the abstraction? What is it that's drawing me towards using these keys? And I think it's because most of the cases, I don't care. I don't care what the string is, and so that feels nice. Like, I understand that, yes, we're referencing something. If that key didn't exist, I'm going to see a failure. So I know that there's text there, and that's why I do lean into referencing the keys instead of the text because it feels good to not have to care about that stuff. And if we do make changes to the text, then it suddenly doesn't fail, and then I have to go update a test because we added a period or added a comma. I think that's the path of more sadness for me. And my goal is always a path of least sadness. So I think that's why I lean into it [laughs], I'm guessing. Is that why you lean into it as well? Or what do you like about referencing the keys over the explicit text? CHRIS: No, I think I share your inclination there, and the reason that you're in favor of it, and I think the consistency like if we're going to use i18n, then we should lean in because it's a non-trivial thing to do like porting to i18n projects, and they're tricky. Getting it right from the first step is also tricky. If you're going to do it, then let's lean in, and thus let's use that abstraction overall. But yeah, same ideas as you. STEPH: Cool. I think that helps validate where I'm at in terms of how I rationalize about this where ultimately, I do like leaning into that abstraction. And as you'd mentioned, some of those porting projects, I haven't been on one specifically, but I've seen that they are a lot of work. And so, if we have that in our system, then we want to continue to use it. It does reduce some of the readability. Like you said, maybe there's a VS Code plugin or some way that then we can help people be able to see if they want that full context in the test and not have to jump over to YAML. But yeah, otherwise, unless it's overriding default behavior or complex, then that's what I'm going to go with is with the keys. But I really appreciate this person's very thoughtful question and approach to testing because, normally or typically, I fully agree with I want full context in the test. And this one was one of those outliers that came up for me, and I had to really think through all the feelings and the reasons that I have for those feelings. 
On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
332: Ludacris Speed

The Bike Shed

Play Episode Listen Later Apr 5, 2022 39:28


Chris is back from vacation and gives hiring and onboarding updates. Steph has an update about the CI slowdown and scaling CI. They tackle a listener question regarding having some fear around potential merge conflicts. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Deckset (https://www.deckset.com/) parallel_tests (https://github.com/grosser/parallel_tests) parallel_tests - important line that may alter the `group_by` strategy (https://github.com/grosser/parallel_tests/blob/9bc92338e2668ca4c2df81ba79a38759fcee2300/lib/parallel_tests/cli.rb#L305) KnapsackPro (https://knapsackpro.com/) rspec-queue (https://github.com/conversation/rspec-queue) Vim Conflicted Overview (https://github.com/christoomey/vim-conflicted#overview) Mastering Git Course on Upcase (https://thoughtbot.com/upcase/mastering-git) Git Object Model (https://thoughtbot.com/upcase/videos/git-object-model) Git Object Model Operations (https://thoughtbot.com/upcase/videos/git-object-model-operations) The Opportunity Will Find You (https://thoughtbot.com/blog/the-opportunity-will-find-you) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Golden roads are golden. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Oh, I also have a new intro that I want to try out. This is thanks to Irmela from Twitter, where it's good morning and hooray; today is Bike Shed day. They technically said Tuesday, but we don't record on Tuesdays. So today is Bike Shed day, so happy Bike Shed day. And hey, Chris, what's new in your world? CHRIS: What is new in my world? Yeah, I loved when I saw that tweet come out. It really warmed my heart. So Tuesday, in theory, is Bike Shed day, but for you and I, Friday is Bike Shed day. It's confusing breaking the fourth wall, as I so often do. But yeah, what's new in my world? I'm back from vacation, which is the thing that I did. For listeners, well, I have been absent the previous week related to vacation and all those sorts of things. But I did what we're going to describe as a not smart thing. It wasn't intentional. The world just kind of conspired in this way. But I had two separate vacation islands that existed in my mind, and then they both kind of congealed, but as they did that, they moved towards each other, but they didn't connect. And so what I ended up with was two weeks back to back where I was out on Thursday and Friday of one week, and then I was back for Monday and Tuesday. And then I was out for Wednesday, Thursday, Friday of the following week. Protip: that's a terrible idea. It's just not enough time to sort of catch up. The whole of it was like the ramp-up to vacation and then the noise of vacation, then getting back and being like, oh, there are so many emails. Let me try and catch up on them. But also, on the very positive side, we had a new hire join the team, and so most of my focus on the days that I was in the office was around getting that new person comfortable on the team, onboarding, spending as much time as possible with them.
And so, all total, it was an adventure. And again, I would strongly recommend against this. The world just kind of conspired, and suddenly these three different forces in my life came together. And this was just the shape of things. But yeah, I went on vacation, and it was great. The vacation part was great. STEPH: I will take your advice. So next time I have like two segments of PTO, I'm just going to stitch them together and just go ahead and take that whole intermittent time off. CHRIS: That probably would have been better. Again, someone new joining the team, it was very important to me to get some time with them early on, and so I opted not to do that. But yeah, the attempt to catch up in between was a completely lost effort, I would say. But I think I'm mostly caught up now, having been back in the office for about a week, so yeah. But let's see, what else has been up in my world? It's actually been a while since you and I have chatted based on the various timing and schedules and the nonsense vacation schedule that I had that you so kindly accommodated across a couple of weeks. Let's see, hiring and onboarding; the hiring went really well. We talked about that a bunch of weeks back. But now we're in the onboarding phase. And so next week will be the first week that all four of us on the engineering team are in the office together for the full week. I'm super excited to experience that. We've had different portions of it, with me being on vacation and other folks being on vacation. But now, for the first time, we're really going to feel what it's like as this team. And we're going to have our first retro as a group and all those sorts of things, so I'm very excited to do that. And thus far, all of the interactions that we've had have been really wonderful as a team. And so now it'll be the first time we're just bringing all of those various pieces together. STEPH: I just have to clarify; you said all of y'all in the office together. Do you still mean remotely? CHRIS: Oh, yes, yes, I just mean not on vacation, all present and accounted for on the internet. Remote is another interesting facet of what we're doing here and trying to figure out how to navigate that, particularly where there are some folks that are closer and can potentially get together in the city, that sort of thing, and then folks that are truly remote and making sure that we're...I'm very much of the opinion if we have anyone that's remote, we are remote team, and we must embrace async communication and really lean into that. And I think the benefits of async communication as its own consideration are so worth it. And it's one of those things that's hard to do. It requires careful, intentional thought. It requires more purposeful communication. But I think there are a lot of good things that fall out of that. It's similar to TDD in that way in my mind, like, it's not easy. It's actually quite difficult. But all the effort that I put into trying to learn how to do that has made me a better developer, I think, on all the various fronts. And I think similarly, async communication I believe in as a tool to force just better communication. And so I'm a big believer in it, and I've found a ton of benefit in remote that I'm also a big believer in that now. I, like everyone else, was forced into it as the world was, but I've really come to enjoy it a lot. And so yeah, so, no, not physically in the office, to answer your very short question with a long rambling aside. STEPH: [laughs] I like that comparison. 
I hadn't thought about it in that way but comparing that thoughtfulness and helpfulness of async communication and then also to TDD, where it's not easy, but the payoff is so worth it, the upfront cost of it. That is something that at thoughtbot, we've had conversations around where there are folks that really value...they want to be around people. They get energy from people, and so they want that option to be able to rent a WeWork space and maybe get together with a colleague once or twice a week, and that was supported by thoughtbot. But we also wanted to express well, if you are together, do treat everything still as a remote work environment. So let's say if you and your colleagues are on a project, but then there's a third person on that project that's remote, you still need to act like everything's remote to make sure that everyone else is still getting to participate and hear everything and be part of the conversation. So just keeping that in mind that yes, we want to support you doing your best work, and if that's around people, that's wonderful. But we are still remote-first, and communication needs to be in that fashion. Well, that's super exciting that you'll have all of the team together. That sounds like it will be wonderful to hear about and then also retros and meetings, and yeah, it sounds like you've got a fun week ahead. CHRIS: Indeed. I'm super excited to see what sort of new things come out of the new voices on the team and practices that each of the individuals have experienced at other companies that we can now fold together. The work that we've done so far has been very much inspired by thoughtbot ideas, and approaches, and workflows, and processes because that's what I brought to the table. But I'm super excited to bring in more voices and see what of that 100% stays on versus does anything change? Do we get entirely new things? So yeah, very excited about all of that. But to revisit a topic that we've talked about in the past, this week is catching up from vacation, so there's a certain amount that will constrain my work. But this was definitely another week of I did not do much coding. I'm trying to think if I did any coding this week. It's possible that the answer is no. The fact that I don't even know the answer to that is an interesting one. I still have in my mind the desire to get back to it, and I think I will. But there's so much other stuff to do. Recently, this week, there's been a lot of vendor selection and contract negotiation, which is an interesting facet of the work, but just trying to figure out, oh, we need platforms to do X, Y, and Z. And it turns out they're wildly costly and have long sales cycles. And how do you go through that, and how do you make sure that we're getting the right thing? And so that's been a big part of my work. Hiring and onboarding, again, has been a big part of it. There's also some amount of communicating back to the broader team - what are we doing? What is the product organization or the engineering team delivering? And so I'm okay at presentations, I think. I'm comfortable with giving presentations. The thing that I struggle with is finding the optimization point in preparation. I will, of my own accord, over-prepare. And that may sound a little bit like, oh, what's my greatest weakness? That I care too much. But I mean it sincerely as like, I would love to find that right amount of like, it's like an hour of preparation for a 15-minute presentation to the team. That's the right ratio. 
And I just hit that on the head, and it's great. But whenever I know that I need to give a larger presentation, it will distract me. And it's work that can expand to fill whatever time you give it, and so trying to thread that needle is a tricky one for me. STEPH: Yeah, I'm with you. Presentations, for me, they're one of those things that it's very stressful, anxiety-inducing; all the prep feels distracting from some of the other work that I want to do. Or maybe I'm excited about the presentation, and that is the work that I want to do. But it's not until it's done that then I'm like, oh, that was fun. That went well. This was great. It's not until after that then I feel good about it. So the lead-up to it is very stressful. And so if you can optimize that to say, well, I know exactly what this group needs, where I can cut corners, where I have to go into details, that sounds incredibly valuable. I'm curious, so this is probably a bad idea, but it's the only way I really know how to find those boundaries is you got to experiment and tweak a little bit and let yourself fail a little bit or just be very explicit with folks about this is what the presentation is, if you expected something else, let me know. Or here's what I've got, have someone to bounce ideas off of. But there's such a nicety if you can find that I'm going to try failing just a little bit and get some feedback. Or maybe it's not failing at all, but you are testing that boundary to find out did this work, or should I put more effort into this? I'm curious, do you have thoughts on that? How you're going to find that right optimization level? CHRIS: Not as specific to truly honing in on whatever the correct number is. The thing that I've been doing is I...this will sound complicated, but I wait until the last minute but a specific version of the last minute. So at most, I start working on it an hour and a half before the meeting. And these are, again, not particularly large presentations, and it's a recurring sort of thing. So it's sort of engineering talking about the work that we've done recently and trying to find the right level of detail and whatnot, so giving myself a smaller time window. I think that's enough time to tell the story and to find a meaningful way to tell the story and grab the screenshots and all of that, but it's constrained so that I don't over-optimize, over-edit, overthink. I'm using Deckset, which is a presentation tool that starts from a Markdown file. So it's just a Markdown file that I'm editing. That's great; that works really well. I do not twiddle with fonts. There's one theme that I use. It is white background with black text. That's it. And I think I've given myself deep permission to be the CTO that has a white background with black text and no transitions. I don't even go into presentation mode for it. I'm literally showing the UI of Deckset, and then just hitting the arrow to move between them. But the Chrome and the drop-down menu at the top is still visible because I want to see people's faces as I'm presenting. And I haven't figured out how to do that correctly on my computer. So I'm just presenting the window of Deckset. And I'm like, I have given myself permission to do all of those things, and that has been super helpful, actually. So that's a version of me negotiating what this means. 
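(For anyone curious, a deck like the one described really is just a Markdown file. Here's a minimal sketch of what Deckset consumes, using the `---` slide separator and `^` presenter-note prefix as I understand Deckset's conventions; the slide content itself is made up:)

```markdown
# What Engineering Shipped

Highlights from the last two weeks

---

# Faster Checkout

^ Presenter note: mention the reduced page weight here.

- Checkout now completes in one step
- Clearer error messages

---

# Up Next

- Billing improvements
- Onboarding polish
```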
Where I do invest the effort is trying to enumerate all of the things and then understand what is the story that I'm telling around the things and how do I get the message right for the collective audience? So, for a developer team, I would say much more nuanced technical things, for marketing folks, it would be at this end of the spectrum. I do lean on the old idea of, like, let us talk about it in the mindset of the user, so it's very much user-centric, but then some of the things that we're doing are important but invisible to the users. They're part of how we broadly build the platform that we need to, but they're completely invisible to users. And so, how do I then tell that story still with ideally a user-centric point of view? So that's where I do invest the time, and I give myself complete freedom to just grab screenshots, put black text on a white background, and then talk over it. STEPH: I love it. Because you made this comparison earlier, so now I'm thinking of a comparison of like TDD-driven presentations where it's like, what's the end goal? What's the assertion? What's the outcome that I want? And then backfilling from there. Or, in your case, you're talking about what's the story that I need to tell? What's the takeaway that I want people to have? So then you start there, and then you figure out what's the supplemental information that you need to provide to then get there. And the fact that you don't twiddle with fonts and all that stuff, I think you're already really on your way [chuckles] in terms of finding that right optimization of I need to present a clear and helpful message but not sink too much time into this CHRIS: Black text on a white background is very clear. So... STEPH: [laughs] If there are any designers listening to this, they might just be cringing to this conversation right now. [laughs] CHRIS: I actually wonder about what the...I know that dark mode is a thing that lots of folks care about. I'm thinking about the accessibility affordance of it now. I'm actually thinking through it now that I said it somewhat flippantly. I actually don't know what I'm talking about, but it was easy, and it wasn't a choice that I allowed myself to think about. So there we are. Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: In my personal world, so Tim and I are moving. We're on the move. We are transitioning from South Carolina to North Carolina. So I think I may have shared a bit of this news, but Tim has acquired his first software developer job, which is just phenomenal. It is in North Carolina. He does need to be there in person for it. So we are currently selling our South Carolina house and then moving. It's not too far. 
It's like three and a half hours away to where we're moving in North Carolina because we're already pretty far in North and South Carolina. So yeah, there's always another box that needs to be packed. And there's always just something else that you forget, another thing that you want to take to Goodwill or try to give to a neighbor. It's a good way to purge. I will definitely say that every time you move, it's a good time to get rid of things. CHRIS: That is a very cup half-full point of view on it, but yeah, it feels true. STEPH: [laughs] It's true. I'm a very cup half-full person. For more technical news, for more client stuff that I've been working on, so I think the last time we chatted, I was sharing that we had this mysterious CI slowdown where we were going from CI builds taking around 25 minutes to spiking to 35, sometimes 45 minutes, and I have an update there. So we found out some really great things, and we have gotten it back down to probably more about 23 minutes is where the CI is running currently. As for the actual whodunit, like what caused this specific slowdown, we got to a point where we were like, we're doing so much investigative work to understand exactly what caused this that it felt less helpful because at the end of the day, we really just wanted to address the issue. And so solving the mystery of exactly what caused this started to feel less and less meaningful because we're like, well, we want to improve this anyways. So even if we found that one line or something that happened that caused this, we want a bigger solution to this type of problem because then this could happen again: someone else maybe adds one line or something happens, and things get thrown off balance, and then suddenly, we have a slowdown, and that just takes too long to investigate. So I don't have a concrete whodunit answer for the slowdown. But we've learned a couple of things; one of the things that we learned is we're using parallel_tests to then split our tests across all of the CPUs that are then running the RSpec tests. And we realized that we weren't actually splitting tests based on runtime data. So there are a couple of ways that parallel_tests will let you divvy up your tests; one is file size. So you can split the files based on the size of the file, or you can use info from the runtime log. So then parallel_tests can be a little bit more intelligent about like, well, I know how long these files take, so I'm going to split it based on that versus just the size of the file. And we realized that we were defaulting to using the file size instead of the runtime even though we all thought we were using runtime. And the reason for this took a bit of source code diving because, looking at the README for parallel_tests, it looked like as long as we're passing in a file to the runtime log path, then parallel_tests is going to use that runtime data. But then there's some sneaky-sneaky in there that I'll actually link to in the show notes in case anybody's interested. But if you are setting a particular flag and don't pass in another flag, then parallel_tests is going to be like, cool, I'm going to portion out your tests based on file size instead of the runtime. So we fixed that, or we updated that, and that has had a significant improvement for the tests being split out more evenly. So we didn't have a CPU that was taking 25 minutes while the next CPU was only taking like 17 minutes.
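(For reference, the difference being described looks roughly like this on the command line; these flags come from parallel_tests' documentation as I understand it, so double-check them against the version you run. The runtime log is produced by earlier runs recorded with parallel_tests' RuntimeLogger formatter, and `tmp/parallel_runtime_rspec.log` is the conventional path:)

```sh
# Split spec files by file size -- what you get when no runtime data is used.
bundle exec parallel_rspec spec/

# Split spec files by recorded runtime, so slow files get balanced
# across processes instead of piling onto one CPU.
bundle exec parallel_rspec spec/ \
  --group-by runtime \
  --runtime-log tmp/parallel_runtime_rspec.log
```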
And parallel_tests also provides some really helpful data because we have that runtime log file; we could tell how long each CPU is running and how they're getting split. So the past couple of weeks, it's heavy measure, measure, measure, take all the data, create lots of graphs, understand what's happening, and then look for ways to then fix it. So figuring out how these files or how the tests were being distributed across, we had a number of graphs that were just showing us what's actually happening. So then we could track the improvement, so that was really nice. It was the measure twice and change something once [laughs], and then we got to see the benefit from it. For scaling the CI, so we are looking at adding more machines to then process tests. That has been really interesting because we're at the point where we are adding more machines, but if we add more machines, we're not going to speed up how quickly our CI processes everything. Because we are splitting tests based on file size and not by examples, we're always going to have this effect of a tentpole. So if we have a file that takes 10 minutes, that's the fastest we're ever going to get. So Joël and I are in discussions right now of where we still really want to understand what's the fastest we can achieve just by adding another machine or two versus are we at the point that, okay, scaling horizontally and adding more machines has been helpful, but we have reached the breaking point where we actually need to divvy out the tests at a smaller scale and have a queue approach? So then that way, we can really harness the power of it; then we don't have one file that takes 10 minutes, and we don't have to care either. So if somebody adds a test to a file and suddenly a file goes from 12 minutes to like...well, hopefully, they added more than one test. [laughs] But let's say it goes from 10 minutes to 15 minutes; we don't want to have to manage that and understand that there's a tentpole. We just want to be able to divvy out all the examples and then have a queue approach. That's probably going to be MVP two of this, but we're still weighing that out. But it's just been really interesting to realize that scaling horizontally really only takes you so far. Like, we've added one machine, maybe one more, so then we'll have three total. And then it's like, okay, that's great, but now we need to actually address this other bigger problem. CHRIS: I know we've talked about this in previous episodes, but I'm super interested to hear as you progress into the queue approach because that's something that's been top of mind for me for a while. I don't know if we've talked about it before specifically, but Knapsack Pro is the one thing that I'm aware of that's available as a service that does this. Do you have other tools that you're looking at for that, or is this still in the exploratory phase? STEPH: Knapsack is still a top contender. There's also RSpec Queue; that's another one that we have in mind. Unfortunately, I really wish parallel_tests let us do this, but parallel_tests just doesn't quite offer that feature. And someone on the team, I think, even reached out to the maintainer of parallel_tests, and they were like, "Yep, you're totally right. We're actually more focused on making sure that this works for everybody versus has specific features." And they gave a really nice thoughtful response, which we appreciated, so at least we could confirm that parallel_tests won't do exactly the thing that we need.
So yeah, RSpec Queue, Knapsack, I think those are the top two that I'm familiar with. CHRIS: Gotcha. I don't know if I've seen RSpec Queue before. I'm intrigued. So actually, an interesting thing happened. While I was away on vacation, one of the folks who just joined the team, as one of their first steps joining the team, noticed that our CircleCI config wasn't actually taking advantage of the parallelism that we had configured; that's on me. I turned on parallelism and then never did anything with it, which is a complete waste. And so I was super happy to come back and saw that CI, which had been creeping up to six or seven minutes, had suddenly dropped back down to two to three minutes sort of thing. I was like, this is amazing. But now I'm at the point where our RSpec suite is spreading across the different, I think, it's like four different cores that we have available, but it's not doing it as efficiently as we would like. So I'm like, oh, okay, can we dial it up to 11? But I'm intrigued; I've only looked very much in passing at RSpec Queue literally now that you've mentioned it. But Knapsack Pro exists as a different service. And so, as far as I understand, the agent that's running is going to communicate and say, "Give me another test. Give me another test." But there needs to be some external process running and managing that queue. Does RSpec Queue do that? Somebody owns the queue, right? Who owns it? Do you understand how that works? STEPH: So I was definitely familiar with this. If you'd asked me a couple of weeks ago, when I was diving heavily into the queue work (before we transitioned into focusing on adding new machines), I was very up to speed on this. So I may get a couple of things wrong, but my understanding is that with RSpec Queue, you're going to manage your own queue. So you bring in the gem and then use something like Redis, so then you are in charge of that. And with Knapsack, then you are using their service to manage that queue. And then they have found ways to optimize around, what if you can't reach their API or something; their service is down? And making sure that that doesn't impact your CI, so you can still run your tests even if you can't reach their queue somewhere. So that's my current understanding: RSpec Queue, you own it; Knapsack, they're going to own it. CHRIS: Gotcha. That makes sense. That about maps to what I was expecting, and so I wonder if I could use RSpec Queue. Now I'm going to have to go research that. But it's always nice to have new things to look at on this to go at ludicrous speed. That's what I'm going for. I want to get to ludicrous speed for our CI. STEPH: I like that name. I haven't heard of that speed. I feel like I have. I feel like you've dropped that before, [laughs] like you've used that. CHRIS: I don't know; quite possibly, I have. It's a Spaceballs reference. It's a throwback to days of old. STEPH: Well, then we may be investigating RSpec Queue together. Because yeah, Joël's and my goal for this week has been very much to figure out what our boundaries are with TeamCity. What are our boundaries with horizontal scaling? And I think we're both getting to that conclusion of like, okay, this has been good, it's helpful, but we really need to look into the queue stuff if we really want to see significant progress. Also, some of the stuff we're doing because we're pushing on it, we are manually splitting files.
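(To make the queue idea concrete, here's a sketch of what queue mode looks like with the knapsack_pro gem; the rake task and environment variables are from Knapsack Pro's documentation as best I understand them, and the token value is a placeholder:)

```sh
# Every CI node runs the same command. Instead of being handed a fixed list
# of files up front, each node repeatedly pulls the next batch of tests off
# a shared queue until it's empty, so no single node becomes a tentpole.
# NODE_INDEX would be 1 and 2 on the other two machines.
export KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC=your-token-here
export KNAPSACK_PRO_CI_NODE_TOTAL=3
export KNAPSACK_PRO_CI_NODE_INDEX=0
bundle exec rake knapsack_pro:queue:rspec
```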
So if there's a file that has created this tentpole that's taking 10 minutes, but we know ideally most of the other files only take six minutes, then we are splitting that file, so then we have two spec files that are associated with the same class. And then using that as a way to say, okay, what would this look like? Let's say if this were better balanced. And that's also been pushing us in the direction of like, okay, this is fun, this is informative, but it's not sustainable. We don't want to have to keep worrying about splitting these files and doing this manually, and it's pushing us towards that queue-based approach. MIDROLL AD: And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines, Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t dot com forward slash free. That's studio3t dot com forward slash free. But shifting gears just a bit, we have a listener question. So this person wrote in, "I have listened and loved your podcast for many years dreaming of getting a job with people half as thoughtful and intentional as you, and finally it happened. I have my first junior dev job, and my co-workers and bosses are all super awesome. Up until now, I've been flying solo. And in my new job, I've been finding it very unsettling to resolve merge conflicts. As careful as I am to comb through the conflict and contact the other developer if needed, I feel like I am covering my eyes and crossing my fingers whenever I select the resolve conflict button. Is there some type of process or checklist I could rely on? Is it normal to have such a high fear factor with a merge conflict? Any advice or maybe just a bit of been there felt that way...?" All right. So one, that's fabulous, congratulations on the new job. That's very exciting. I think I've voiced this many times, getting your first junior dev job is so hard, and so I'm so excited when it works out for people, and they get there. And then, for the merge conflict, I have thoughts. Chris, do you want to start? Shall I start? How are you feeling? CHRIS: Why don't you start? Well, actually, I'm going to add some pre-commentary, and then I think you should lead into our actual answer. But first, I just want to say a deep thank you to this listener for sending in the question. Again, we really love getting these questions. And also, thank you for the very kind words. To be clear, listener, if you're going to send in a question, you don't have to say very kind words, but they are really wonderful to hear and especially to hear if we had any part in helping this person feel more comfortable getting into that first dev role and having an idea of what maybe a good version of that could look like.
Additionally, I really love the shape of this question because it gets into the people stuff and the tech stuff, so I'm super excited about this question. Actually, Steph, both you and I responded very quickly to this one. And so it really did catch our attention because I think it crosses that boundary in an interesting way that I think is sort of The Bike Shed space in the world. But to that end, you did reply first in our email chain. So I think you should start, and then I'll follow on after that. STEPH: I should also check with you. Wait, so you don't have a filter on your email that's like kind words only to The Bike Shed, and then you filter out anything that's negative? CHRIS: I have a sentiment analysis, and if it's even neutral, it gets sent straight to the trash, only purely positive. No, constructive feedback is welcome too. We would love to hear that. Well, love is a strong word. We would accept it into our inboxes and then deal with it, but yeah. STEPH: [laughs] It will be tolerated. Must require at least three hearts in all emails; just kidding. [laughs] CHRIS: Are you kidding? I'm counting them now, and I see a lot of hearts in our emails. [laughter] STEPH: Merge conflicts. So is it normal to have such a high fear factor with a merge conflict? I'm going to say absolutely. Resolving a merge conflict can be really tricky and confusing. And I think, frankly, it's something that comes with just time and practice, where then you start to feel more confident. As you're resolving these, you're going to feel more comfortable with understanding what's in the branch and the code changes that you're pulling in versus something that you need to keep on your side. So I think over time, that fear will subside. But I do think it's totally normal for that to be a very scary thing that then takes practice to become accustomed to. As for whether there is some type of process or checklist, I don't know of a particular checklist, but I do have a couple of ideas. So one of the things that I do is I will often push my code to whatever management system I'm using. So if I'm using GitHub, then I'm going to push up my branch because then, at least that way, someone has a copy of my work. So if I do something and I completely botch it locally, I know I can always reset to whatever it is that I pushed up to GitHub, so then that way, I have more freedom to make mistakes and then reset from there. So one idea is to just put it somewhere that you know is safe, so that way you now have this comfortable sandbox in which to make mistakes. The other one is to run the tests. So hopefully, the application that you're working with has tests that you can trust; if not, that could be another conversation. But if they do have tests, then you can run those, and then hopefully, that would let you know that if you have left something in, like maybe you left a syntax error, or maybe you removed some code that you shouldn't have because you weren't sure, then those tests are going to fail, and they'll let you know that something went wrong. And you can run those while you're still in the middle of that merge conflict; even if you haven't addressed every syntax error, that's still a time that you can run them, and they're going to let you know that you haven't caught all of the issues yet. So you don't have to wait till you're done to then go ahead and run that. A couple of other ideas: practice. So go ahead and create your own merge conflicts on purpose.
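(A throwaway repo is all that's needed to manufacture one; a sketch, where the file name is arbitrary and your default branch may be `master` rather than `main`:)

```sh
git init conflict-practice && cd conflict-practice
echo "hello" > greeting.txt
git add greeting.txt && git commit -m "Add greeting"

# Change the same line on a branch...
git checkout -b friendly
echo "hi there" > greeting.txt
git commit -am "Friendlier greeting"

# ...and change it differently on the default branch.
git checkout main
echo "howdy" > greeting.txt
git commit -am "Different greeting"

# Merging now produces a conflict you can practice resolving safely.
git merge friendly
```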
So this is something that I think is really helpful because it will teach you, one, what causes a merge conflict? Because now you have to figure out how to create one, and then it will help you become comfortable because you're in a completely safe place where you have made up the issue, and now you're having to resolve that, so it'll help you become more confident in reading that merge conflict message. And then, last but certainly not least, grab a buddy if you are just feeling super nervous. Anytime I'm doing something that I just feel a little nervous about, then I just ask someone like, "Hey, would you look over my shoulder? Would you pair with me while I do this?" And I have found that's incredibly helpful because it eases some of my fear. I've got someone else that is also looking through this with me. But I also find it really helpful because then it encourages that person to be like, hey, if they're ever in a spot that they need to pair, I want them to know that they can also reach out to me and have that same buddy system. I guess that's my checklist. That's the one I would create. How about you, Chris? What do you think? CHRIS: Well, first, I just want to say that basically everything you said I 100% agree with, and purposely I think it was great that you actually replied to the email first and that you're saying those things first because I think everything that you said is true and is foundational. And it's sort of the approach that I would definitely recommend taking as well. My answer, then, adding on to that, has to do with how I've approached learning about this space in my own career. To name it, to answer the core question, is it reasonable to be scared of this? Yes, Git is confusing. Git is deeply confusing. I absolutely love Git. I have spent a lot of time trying to understand it, and in understanding it, I've come to love it. But it's only through deep effort that I've gotten to that place. And actually, the interface, the way that we work with Git on a day-to-day basis, particularly the command line, is rough. I'm going to say, what does Git checkout do? Well, it does just about everything, it turns out. That command just does all of the stuff, and that's too much. It's, frankly, the UI for Git, specifically the command-line user interface; the commands that we run to manipulate the Git history are not super intuitive. But it turns out if you pop open the hood, the object model underneath, the core way that Git stores your code, is actually very simple. I find it's very easy to understand, but I, unfortunately, have found that I can't understand it without dropping down to that level. And so, in my own adventures, I kind of went deep on this topic a couple of years ago, and I created a Vim plugin because obviously, that's the best way to encapsulate your knowledge about Git, and so I created a plugin called Vim Conflicted. I don't necessarily recommend the plugin. It's fine if you want to use it. I don't do a great job of maintaining my plugins at this point, to be honest. But there was a weekend where I was trying to understand the world of Git and merge conflicts in particular, and it was really sort of fighting me. And as I started to understand it better, there's a little diagram that I drew on the README that I think is probably the most interesting artifact from it. But it's this idea that there are actually four files, four versions of a given file, involved in any merge conflict. And that realization shifted my thinking a good amount.
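(Those four versions are inspectable mid-conflict using Git's index stage numbers; this is standard Git behavior, with `app/models/user.rb` standing in for whichever file is conflicted:)

```sh
git show :1:app/models/user.rb   # stage 1: the common ancestor (base)
git show :2:app/models/user.rb   # stage 2: "ours", the version on your branch
git show :3:app/models/user.rb   # stage 3: "theirs", the version being merged in
cat app/models/user.rb           # the fourth: your working copy, with conflict markers
```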
And then as I started to think about that, I was like, oh, okay, and then I want to see this version of it, and this version of it, and this combination, and the diff between these two, and that was super helpful for me. More generally, I also made a course on Upcase about Git as I tried to understand it better. And there are two particular videos from the middle of the course named the Git Object Model and Object Model Operations. And again, those two videos deal with popping the hood on Git, looking inside it, and what actually is happening to your code as you perform different Git operations. One of the wonderful things about Git is it is immutable. So you're never going to destroy your Git history if you've committed. So one of the rules that I have is just always be committing, never worry about committing. If you've committed, you can always get back to that version. You would have to try very hard to destroy committed code in Git. It's the things that you do when you haven't yet committed the code that are dangerous. So commit the code, like you said, Steph, push that up to GitHub, so you have a backup of it. You will have a backup locally as well, and that's a thing that you can come to be more comfortable with. But then, from there, there's actually a lot of room to experiment and play around because there's a ton of safety in the way that Git stores the code. You do have to know how to get at it, and that's the unfortunate and tricky part. But I think, again, to sort of summarize, yes, this is confusing. Your feelings are absolutely valid and totally grounded, but it is also knowable, is what I would say. And so, hopefully, there are a couple of breadcrumbs that we've laid there in how you might go about learning about it. But yeah, find a buddy, watch a video or two, and give it a try. This is definitely a thing that you can get there but totally reasonable that your first approximation is this is confusing because it sure is. STEPH: I often forget that Git has that local copy of my code, so I'm so glad you mentioned that. And then yeah, I saw when you linked to Vim Conflicted. The diagrams are great. I had not seen these before. So yeah, I highly recommend folks take a look at those because I found those very valuable. CHRIS: In that case, it's a white background, but I allowed myself to use some colors in the little images to help differentiate the different pieces. And it's an animated diagram, so it's really a high bar for me. [laughs] STEPH: So now the question is, did you go too far? Have you over-optimized? [laughs] CHRIS: I'm going to be honest; it was a weird weekend. STEPH: [laughs] Well, I don't think you've over-optimized. I do think it's wonderful. And I think this is definitely a reference that I'll keep in mind for folks whenever they're learning about merge conflicts or just want to get more knowledgeable about them. I think these diagrams are fabulous. CHRIS: Well, thanks. Yeah, I hope...they frankly were a labor of love, and the course is three and a half hours of me rambling about Git, so hopefully, it's useful to folks. If anything, it was super useful to me because my understanding of Git was deeply crystallized in making that course. But I do hope that it's useful to other folks. And particularly those two videos that I highlighted, I think are the ones that have been most impactful for me in terms of how I think about working with Git and getting comfortable with it. 
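(That safety net is concrete: even after a confusing merge or reset, the reflog records every commit HEAD has pointed at, so committed work is recoverable. A sketch; the `HEAD@{2}` position depends on your actual history:)

```sh
git reflog                  # list where HEAD has been recently
git branch rescue HEAD@{2}  # pin a branch to the state from two moves ago
# or jump straight back:
git reset --hard HEAD@{2}
```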
STEPH: Do you still receive emails every now and then from people, or maybe they are tweets from people that are like, "Hey, I watched one of your videos and found it really helpful." I feel like I still see that every now and then where people are just commenting on like, they watched some of the content that you created for Upcase a while back, and I think that's really cool. I'm curious if you still see that. CHRIS: I do, yeah, from time to time. It is absolutely wonderful whenever I hear that. Again, listener, do not feel the need to send me anything, but it is nice when I get them. STEPH: It does seem like I'm fishing for compliments now. [laughs] CHRIS: It does seem like that. So I want to be clear that's not what's going on here. But it is nice because I do actually forget that they're out there. But a lot of the stuff that I produced for Upcase, in particular, I tried to do more timeless stuff, so like the Vim content was really about how Vim works in a deep way. And the tmux course and the Upcase course...or the tmux course, the Git course. I look back at them, and a couple of little syntactic things have changed. But I'm still like, yeah, I agree with me from six years ago or whatever it was. Oh, that's a weird number to say, and I think is honest. It's fine. I'll just be over here. [laughs] STEPH: [laughs] That's helpful to hear, though, because that's always one of my fears in creating content. It's like, I don't know, it's okay if it's more opinionated and I change my mind and disagree with my past self. But it's more like, yeah, keeping up with is this still accurate? Is this still reflective of the times? And then having to keep that stuff updated. Anywho, that's a whole big thing, content creation. CHRIS: Content creation, but there's a parallel to it that many folks will not be creating content, and I think that's a very fine and good way to go about progressing on the internet. But there's a parallel to it in learning that I think is useful. I, at this point, will typically lean in if there is something in the SQL layer that is fighting me. I have never found effort spent trying to better understand the structured query language to be wasted time. Similarly, Git is one of those tools that is just so core to the workflow that it felt very worth it to me to spend a little bit of extra time to get to a deeper level of comfort with it, and I have not regretted one minute of that. Vim and tmux are pretty similar because they're such core tools for me. But React, I would not call myself a deep expert of React. I follow some of the changes that are happening but not as deeply, and I'm not as worried about it. And if I'm like, I don't know how to do this thing, should I spend two hours learning about it or not? With frameworks and tools that have not been part of my toolset for as long, I will spend less time on them. And I think that the courses that I produced on Upcase mirror that. They're the things that I'm like; I feel very true about these things versus other stuff. Maybe it was in a weekly iteration episode or something like that. But that very much mirrors how I think about learning as well. What are the things that I'm going to continually invest in versus what are the things that I'll sort of keep an eye on from a distance but not necessarily invest as much time in? STEPH: There's a particular article that you're making me think of as we're talking about content creation and, as you mentioned, finding the things that you always find value in investing in. 
There's a wonderful blog post that was recently posted on the thoughtbot blog by Matheus Richard, and it's called The Opportunity Will Find You. And it made me think a lot about what you're talking about, find the things that you're excited about, find the things that you think are a good investment and just go ahead and lean into it. And it's okay if maybe that's not the thing that you're using currently at your work, but if it's something that gets you excited, then go ahead and pursue that. So in this article, for example, Matheus uses the example of learning Rust, and that's something that he's very excited about and wants to learn more about. And then there's another one where he started looking into crafting interpreters. And then that has actually led to then some fruitful work around creating custom RuboCops because then he had more knowledge around how the code is being interpreted so then he could write custom RuboCops. So yeah, plus-one to finding the things that give you energy and joy and leaning into that and investing in it. And if you share it with the world, that's fabulous, and if you don't, then keep it for yourself and enjoy it, whatever makes you happy. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
331: Git Down

The Bike Shed

Play Episode Listen Later Mar 22, 2022 29:28


Steph celebrates Utah's adoption day and Daylight Savings Time and troubleshoots a CI build time that had suddenly spiked for a client project using TeamCity. She also shares a minor update regarding the work that thoughtbot is doing to scale horizontally and add more machines quickly and efficiently to process more RSpec tests. Chris was alarmed by logs and unknown-unknowns and had some fun using Git down. Git bless his heart! This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. TeamCity (https://www.jetbrains.com/teamcity/) lograge (https://github.com/roidrage/lograge) Cleaning up local git branches deleted on a remote (https://www.erikschierboom.com/2020/02/17/cleaning-up-local-git-branches-deleted-on-a-remote/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Today is Utah's adoption day. So officially, one year ago, we adopted Utah. He's about a year and a half old now because we got him when he was around the six-month mark. So Utah, aka Raptor, which is the nickname that you gave him, and aka UD [spelling] the cutie is his other nickname...which I've forgotten, why do you call him Raptor? Why is that a name? CHRIS: Because there's a Utah Raptor. STEPH: A person? [laughs] CHRIS: No, I think it was like the fossils were found in Utah. But the Utah Raptor is a type of dinosaur. And so when I heard Utah, my brain went to Raptor, and then I dropped the Utah sort of a Cockney rhyming slang sort of thing. Shout out to Matt Sumner real quick. But yeah, Raptor. STEPH: Cool. Cool. Cool. I'm so glad I asked. Now I know. I just accepted it when you called him Raptor. I was like, sure, he can be a Raptor. [laughs] CHRIS: I feel like that says a lot about me that you were just like, okay, why not? STEPH: [laughs] CHRIS: That's different and has no apparent connection to the actual name of the creature, but that's fine. I might be a nonsense person. STEPH: Or me for accepting it. You share a lot of nonsense, and I accept a lot of nonsense. That might be our dynamic. [laughs] So it works out. CHRIS: That just may be our dynamic. STEPH: That's why I'm always so nice with the good idea, bad idea, or even terrible. [laughs] CHRIS: You're like, it's all nonsense 100% of the time, but yeah. So Utah is one year into living with you folks. So that's lovely. STEPH: Yeah, and he's growing up so well. Oh, and I've been training him for one of his latest tricks. I'm very excited because it seems to be really sinking in. So every night, we take him out for his final potty before we go to bed. And one night, for some reason, I started singing The Final Countdown. [singing] It's the final countdown. But I started singing it's the final potty instead. So now, when it's time to go out for the bathroom late at night, I look at him, and I start singing. And I start singing [vocalization], and it's working. He's starting to recognize that when I start singing that tune, he's like, okay, and he gets up from his comfy spot, and we go outside. And it brings me a lot of joy.
CHRIS: That is perhaps the best use of Pavlovian conditioning that I've ever heard of. Also, I really appreciate that you both mentioned the final countdown but then said just in case anyone is unfamiliar with the tune, let me hum a few bars. Thank you for doing the service there. STEPH: I have been singing so much this week. I don't know if Joël Quenneville, who I've been pairing with a lot, appreciates that. Sorry, Joël. But I have been singing so much. And I think that's post-vacation vibes. That's what vacation does for you. And it helps you get back into, you know, lots of singing or at least it does for me. Let's see, what else is going on this week? So this is the week that we have DST in the USA, so Daylight Savings Time, aka summertime, where we advance our clocks so everybody...although this is going to be late. So at this point, by the time people are hearing this, you're going to have already dealt with all those bugs that have crept up. But those are creeping up this week, where people are starting to notice a lot of those flaky specs that aren't technically flaky. They're actually breaking for real reasons because they were tested in a way that shows that they're not considering that daytime boundary. CHRIS: It's as if you spend some of your time fixing flaky specs, and that's where your mind goes with DST. Because, I'm going to be honest, part of what you're doing right now is telling me that this is coming up, and I didn't know. I had forgotten about that, which is very exciting, except you lose an hour of sleep for this one, right? Or is it that you gain? STEPH: We're going forward. Yeah, it's fall back and then spring forward. That's how I remember it. CHRIS: Worth it. I'll take the sunshine at night. STEPH: Yeah, it's supposed to be so we have more sunshine during the daylight hours. That's the reasoning for the nonsense, the headaches. On some more technical news, when I came back from vacation, we noticed that the CI build time has suddenly spiked for the client project where previously we were averaging, I'd say, around 25-26 minutes. There's definitely a range there. But that seems to be pretty consistent. And right now, builds are taking more like 35, sometimes upwards of 45 minutes. And so it's been a bit of a whodunit or what-caused-it adventure of figuring out why, what's causing the spike. And so Joël and I have been pairing heavily on that to investigate what's going on and learned a lot about features that TeamCity offers and just diving into this particular issue. One thing that brought me joy is that, looking through all the builds that are taking place on TeamCity, I noticed there are a number of builds that are using the RSpec selective testing that I added, where if you only change a test, then we're only going to run those tests instead of the whole suite. And it was one of those changes where I thought, okay, maybe someone's going to get use out of this. Joël and I will probably get use out of this. But I'm actually seeing it about one in every ten builds, something like that. And I'm just like, oh, this is awesome. One, people are improving tests. That's amazing. And then two, they're benefiting, especially while we have this spike going on. So that was a suggestion from you that I appreciate because that is paying dividends. And so that brought me a lot of joy while looking into this other issue, which we haven't resolved yet. We think it has something to do with how the tests are being balanced across all the different parallelized processes.
And we think that there is an imbalance that has happened. And then that's what's really throwing things off. So we can see that one particular process is taking around 26-27 minutes, but then the next process that's highest in time is only taking 17 minutes. So it's like, why is there suddenly ten more minutes that's being attributed to one process? And why is that not getting spread out? So still looking into that. That's the mystery for this week. But that's mostly what's going on in my world. What's up in your world? CHRIS: What is up in my world? I'm going to say a quite alarming thing happened this week, which was we were investigating some changes, or we were investigating some behavior where a particular portion of the system ended up in the logs, and I was just sort of combing through. And I happened to notice this one log line that...our logs tend to be somewhat verbose. They're JSON-structured log format. I've talked about the lograge setup that we use in the past, but there's a bunch. These are long lines of JSON-structured data. But this line that caught my eye was not. It was just some text, and it said, "Unreported event:" and then some other text. And I was like, ah, what? Who didn't report which to when? I did some digging, eventually figured out that this was Sentry. Sentry was logging that it had not reported an event to us. But had we not randomly happened upon this in the logs, which is sort of a random thing to see, we would have missed this, which is scary. I mean, it was missed for a little while. And so Sentry was not reporting certain events. We had made a change, particularly to Sentry's before_send configuration. So there's a way that you can do some amount of filtering client-side, the client being, in this case, our Ruby app. So that's like the client-side of Sentry, and then there's their server backend. So, weirdly, that's the way the client-server relationship works in this case. But the idea is you can do some proactive filtering of being like, you know what? Rather than sending a ton of noise...because we know there's this one error that we can't stop for reasons. It's a JavaScript Chrome extension that's getting embedded in the app. That doesn't mean anything; that's just noise. Rather than even sending those over to Sentry, let's proactively filter them out. before_send is a function within the Sentry SDK that allows you to do this. But it turns out if you raise an error in there, if you happen to have introduced something that doesn't cover all the possible edge cases, then Sentry will just not let you know and will log, interestingly, that they did not report the event. I'm going to throw it out there that I would love it if Sentry were to say, Sentry me...that's where I put something very bad happened, and you should look at it. And they're just like, well, something pretty darn bad happened. We'll log it. Supposedly, my understanding is before_send can be used to filter out, like, PII or other things like that. And so their failure mode is intentionally quiet. That's my understanding as to maybe why this is true. I wish there were a configuration that said, no, please fail as loudly as humanly possible. But that was terrifying. STEPH: Yeah, absolutely. I'm going to piggyback on what you just said for a minute because I was also thinking earlier and related to the sudden spike in our CI builds where I was like, it would be really nice if there's...because I suspect there's one particular change that has caused this to happen.
I don't know what it is yet, but that's just my suspicion. And it would be great if, when that build ran, let's say builds went from an average of 25 minutes and suddenly we have a build that took 35 minutes, TeamCity had alerted us, or if something more aggressive had happened to say like, "Hey, your team..." or maybe it's just in the logs somewhere. Okay, not in the logs, somewhere more visible on the build, where it's like, "Hey, your build took an extra 10 minutes compared to the average; just letting you know. I don't have a diagnosis for you, but we're just letting you know." So yeah, plus-one to getting those types of alerts out to people and notifying us when there's an average that's not being met or when things aren't getting logged like you'd expect them to. CHRIS: As part of what we were doing in the logs...like how to get to that anomaly detection place is a really interesting question in my mind. And this is a case where we were in the logs, and we wanted to instrument more things. So we have a bunch of stuff right now that goes in. It's either a warn or error log level. And the error should be pretty rare because, ideally, those are going to Sentry instead, but we still want to keep an eye on them. But we introduced a new search within log entries, which is what we're using for logging aggregation and searching. And the idea was to group all warn-level messages and to group it on the message string. So ideally, what this allows us to do is say, "Oh, we've seen 200 instances in the past two days of this new warning that we didn't see before." The difficulty is, as a human, I would see unhandled error blah as one bucket of warning, or I might want to see it that way. I might want to group it on part of the message. So it becomes really hard to find the signal in the noise on these, but at least it was a start. We now have this little graph for both warning and error-level log messages so that we can see: are there any new anomalies that are occurring pretty regularly? But this, again, was just this weird edge case where we were lucky to catch it. But it was very scary that it was just throwing stuff away. So it might have been true that our error log did get a little quiet for a little while, which was nice, but it wasn't 100%. It wasn't like we were at ten an hour and then we went to zero. It was like some, and then we went to a lower number because we were still getting some. We were only filtering out certain ones. But yeah, it's how do you know at runtime that the system is doing the thing? This is increasingly the question that I have in my mind. But yeah, so that was the thing. We fixed it. It's fixed now. I also set up an alert in log entries to say, "If you ever see this particular phrase again, unhandled or unreported," then please tell me about that post-haste. So we've got that now. STEPH: That's perfect. That's what I was about to ask: if there's a way that you could add a filter or add a warning for that anomaly detection. So that sounds great.
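(For anyone wiring up something similar, here's a sketch of making that failure mode louder yourself with the sentry-ruby SDK's before_send hook. The `chrome-extension://` check is illustrative, and `Rails.logger` assumes a Rails app; the idea is to rescue inside the hook so a bug in your filtering falls back to reporting the event rather than silently dropping it:)

```ruby
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  # before_send runs on every event just before it's reported; returning nil
  # drops the event. If the lambda itself raises, the SDK logs quietly rather
  # than failing loudly, so rescue inside it and send the event anyway.
  config.before_send = lambda do |event, hint|
    begin
      exception = hint[:exception]

      # Illustrative filter: drop a known-noisy error we can't fix.
      return nil if exception && exception.message.include?("chrome-extension://")

      event
    rescue StandardError => e
      Rails.logger.error("before_send filter raised: #{e.class}: #{e.message}")
      event
    end
  end
end
```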
STEPH: I feel like you can always judge the level of fun based on how high someone's voice goes. No, it was fun. It was great. It was fun. [laughter] CHRIS: I believe that is an accurate assessment, yes. STEPH: I've caught myself doing that. I'm like, my voice is extra high, so I don't think I really mean that when I'm using the word fun. [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. I do have a small update that I can share regarding the work that we're doing to be able to scale horizontally. So we want to be able to add more machines quickly and easily so we can then process more RSpec tests. And we have discovered with TeamCity that we're pushing forward on that particular path because they have something called a composite build. And with a composite build, it's essentially your parent or your supervisor build. And then, from there, you can create other subsequent builds. So we can then say, all right, let's have multiple builds that then run the RSpec test, and then we can separate in that way. And right now, we're going about it in the hacky way because we just want a proof of concept. So we are saying specifically in this particular step, we want you to run spec models. And in this other process, we want you to run these particular tests just because we want to see how this works. And so far, the aggregation seemed great. So when you look at that composite parent build, it's showing you how each of those builds are doing. It's also reporting back the failures. It's even de-duping them. Because initially, we set it up where we were running the full test suite in parallel on both of these builds, [laughs] not what we wanted, fixed that. But it did highlight that it was de-duping the test failures. So that part was nice. So the UI seems great and seems quite very capable of doing this. Composite build seems to be the way that we can do this with TeamCity. But we're still diving into actually getting the metrics like, okay, how much is this actually going to speed us up? And what does this look like if we want to be able to scale up to say from 5 to 10 where we went from 5 machines to 10 machines? And that part doesn't feel graceful because then you have to go in and change the configuration and copy the configuration to then add a new build that then is going to process RSpec test. So other services like Buildkite make it very easy. I can't remember if it's like literally a slider or if it's a number that you enter. But you can say, "This is how many processes that I want to run," in which it would be a lot nicer for that actual scaling. 
Versus TeamCity, it feels far more manual and intentional where you then have to duplicate and add those settings. But it's a really good first step because, as we'd highlighted before, there's a lot of risk in moving over from an existing infrastructure to something totally new. So if we can have some wins with this approach and help out the team and reduce build time, then that gives us more grace period. So then we can assess, okay, do we really want to move over to Buildkite? What do we want to do next? What does this look like? And have further discussions. So that's a small update there. Next time I should have some more updates around actual data on how things are looking. CHRIS: Oh, cool. Yeah, I appreciate the update and definitely interested to hear how this continues to play out. This is a large project that you're undertaking and all the facets and whatnot, so yeah, super interested to hear the continued journey of the test build time reduction. Let's see, other news in my world. I've been exploring something that I'm intrigued by the idea. Let's go with that. [chuckles] That's going to be my start. I always start with these lead-ins that build things up too much. But I am finding a small tension in trying to just keep up with what the team is doing, which is a wonderful place to be. Our team is growing. We actually have someone new joining tomorrow, very exciting. But I'm trying to find the right version of I don't want to block things. I don't want all code review to have to go through me. But I do want to keep an eye on everything. I want to kind of know what we're doing collectively. And ideally, mostly, that's me being like, yep, that makes sense. We're doing that. I remember that, cool. Wait, what's this? And rarely, occasionally, there'll be a point where I'm like, oh, I want to intervene here. I want to have a conversation. I want to rethink how we're building this. And so it's moving from a place of any sort of blocking synchronous review or the necessity for that to ad hoc post-review sort of thing. And so the way that I'm trying to poke around with this, of course, I'm writing some code to do it because of me. So the two systems that we're using that seem most of interest are GitHub and Trello. And so it turns out GitHub has a wonderful search, and I can create a search that is parameterized like create a URL that jumps into a parameterized search saying, "Show me everything that was merged in the past X amount of time, " so I can say the past two days because I haven't checked it in two days. So I'll see all of the PRs that were merged, and some of them I'll have already reviewed. So I maybe could even filter further there. But for anything that I haven't seen, I'm like, oh, what was this? What was that? What was this other change? Similarly, on Trello, there's a way via the API to get all of the card update actions. And then I can filter down to say whenever a card was moved, which in our system that means...we're doing Kanban-style, so a card being moved from this column to that column that tells me that someone is progressing forward with some work. And then I can further filter down because, again, I don't really want to be blocking on this. I'm most interested in what have we done or completed in the most recent timeframe. And thus far, it's an interesting data set. 
And it's an interesting way to switch the problem around such that I'm not feeling...there was FOMO or organizational FOMO is perhaps how I would describe it of like, I want to try and keep an eye on stuff and make sure I'm responsive. But I'm now blocking, so I have to step away. But now I'm worried that I'm missing things. And so I'm trying to find that good middle spot. And this feels like an interesting exploration of that. STEPH: I'm intrigued when you mentioned the card moving over, so then you can tell things are progressing. And then you're answering the question of what did we do in this particular chunk of time? When you move stuff over, is there a clear sweep of we have finished this sprint, and then you have the date of that sprint at the top, and so then you essentially have a column that represents all the work that was done in that sprint? Is that an approach that you're using? Because that's the one that immediately came to mind for me when you're wondering what was accomplished during this week or two-week period? CHRIS: Interesting question. So we're not really doing sprints, or there are no real iterations. We're doing more of the I think Kanban is the way to describe it. But basically, we have a prioritized next up column. And then every day, I can say continuously, the work has the same shape, which is pick up the next most important thing, work on it, move it through the various columns. I did introduce in Trello just the idea of, like, here's a month, so we can see by month what we're doing, but that's too low granularity in my mind. I want to review it a month at a time. The whole point of this in my mind is to see stuff as it's happening vaguely in real-time but not requiring me to constantly be monitoring everything. So it gives me an opportunity at the end of the day to be like, what happened today? What do we do? But yeah, so there's no real sprint that I would couple this to because we're not really doing sprints. STEPH: Got it. Yeah, that gives me more context. I understand why you're then looking for ways as to how to answer that question of, like, what did we accomplish in this week or a particular time period? CHRIS: And to name it, this is not an intention on my part to be like, I need to control everything. I need to make all the decisions. I very much want to empower the team. And in my mind, this is actually a mechanism to empower the team. I want to give them more freedom and then have the opportunity occasionally to check back in and be like, oh, actually, there was some context that was missing here the way we did this. Let's actually unwind that, do it this other way for these reasons. But it gives me the ability to potentially have that conversation after the fact. We're trying very hard to have the tickets be as representative and complete, and well documented as possible. But that's very difficult to get to. And there are also things that I don't even know to mention. Again, I think the critical bit is this is not an attempt to make sure everything aligns with what I think; it's more I want to empower the team to move without me most of the time. And then, where there are things that potentially should have a small conversation or a redirection, then we have the ability to do that. And so, I'm trying to build that back into my workflow while basically loosening up my connection to the work in progress at any given point in time. STEPH: So you just touched on a topic that's really interesting to me or a particular space. 
You're doing a very kind thing where you want tickets to have lots of context so that people feel confident when they're picking up what's the action item to be done. And for someone that's new, that's incredibly helpful, and I think more important since they are new to that world. But in general, my spicy take of the moment is going to be as developers; that's part of our job. If we notice that context is missing or if we're not clear about the action item, is to think through what is it that I'm missing? Who do I reach out to? Who can I go to for help? How can I scope this work? All of that, to me, is very much part of our role. And the idea that tickets always have to be perfectly curated, which I don't think you're saying, but you're just trying to be extra helpful. But if someone were to have that expectation, I think that expectation is wrong. And I do think it is part of our work that then we help make sure that tickets are well-scoped and well-defined and have those conversations with the people creating the tickets or creating them ourselves. CHRIS: I love the clarification there, and I'm definitely in agreement with you. I don't know how picante of a take it is. I would be intrigued. Listeners, let us know. Are we breaking your mold of what things should be? But I do like the idea that it is a conversation so back and forth. And so the idea that as developers, there should just be this very clear list of things to do and you just kind of pick up a card and heads down, just get it done, I don't think that should be the mold. But I do think; ideally, the why is the most important thing that I think should be in a card. So ideally, a card should have little in terms of technical implementation notes and should have more in terms of here's the goal that we're going for, here's the problem, or here's the thing that we're trying to solve. And then maybe a suggestion of like, I think it could be an X, Y, and Z, but I'm not sure. Or we want to be able to send transactional emails, but I don't know any more than that. Our goal is to engage users. Like that last sentence, that last little bit of our goal is to engage users is a critical, critical data point, versus our goal is to solve for a regulatory and compliance issue. It's like, well, those are different. And they will lead to different solutions and different implementations and all that. So yeah, I definitely share the idea that cards don't need to be perfectly specified. And if anything, I think I'm closer to that than it probably sounded like I was. But for that reason, it's totally possible in my mind, that work will be done in a way that after the fact, I'm like, "Oh, sorry, there was a misunderstanding here. Let's revisit this work." And so, my goal is to try and stay connected and have a feedback mechanism at the end of the process. So when the work is done, be able to spot-check it rather than trying to have to watch it as it's happening or proactively define everything in excruciating detail such that exactly the right things happen all the time. So I'm moving to a place of ask forgiveness, not permission. That's the wrong analogy here. But that idea of like, we can clean it up after the fact, that's fine. And we don't need to try and prevent any sort of things, or at least that's what I'm exploring. STEPH: Yeah, I love that you highlighted having the why. 
I adore that when that's on a card just because I then I want to know the goal because then that's going to help me ask questions and think about scoping versus if it's like a very specific implementation, then I feel so narrowly scoped that I don't feel as confident that I can be like, okay, I know why I'm doing this versus I just feel very directed to do a thing, and that's incredibly helpful. I have also felt the pain that you're mentioning where it does feel like a ticket has all of the work clearly defined, and the goals, and the whys, and it can have everything there, but just something gets lost in the communication. And so someone implements something in a way that is how they interpreted the work versus it's not actually what the ticket or what the goal of the work was to be done. So I appreciate that where you are looking for ways to tweak things to make sure that whoever is picking up that ticket will have the same interpretation that the author intended for them to have. And then if that does happen, and things get misaligned, then you chat and figure out ways to improve it. I think that's the point that I was really thinking about, and my air quotes, "hot take," is that as developers, a big part of our job is communication, and then also sharing the knowledge that we have with other people. And so if someone is expecting that they can just always pick up work and never talk to someone, I don't know, maybe you're in the wrong business. [laughs] That's my hot take. CHRIS: I, for one, like the hot take. It is nice and ever so slightly spicy. STEPH: Thanks. Yeah, I just think communication is incredibly important. Earlier, you mentioned, I don't think we were on mic at the moment, but you mentioned something about a new Git alias. And I am very intrigued on hearing about what you've added, what it does, all the details. CHRIS: All the details, that's probably too many, but some of the details I can certainly provide. So I have two new Git aliases; one is Git gone, which is probably the heart of the whole thing. And so the background of this is I found myself pushing the green merge button on GitHub more. We've introduced some branch protection stuff, which I've talked about in previous episodes. And I dream of the day that one of my good, good friends at GitHub will give me access to the merge queue beta. Please, please, I implore thee. But in the interim, still clicking the green merge button more often than not. STEPH: Wait. I have to ask to help you in this dream. Are you forwarding these episodes to someone? You can just take a clip of you saying, "Please, please, please give me access," [chuckles] and just forwarding that or mentioning someone at GitHub or GitHub in general. CHRIS: Just leaving voicemails for people with a Bike Shed section of me begging for access to the merge queue beta? STEPH: Yeah. [laughs] CHRIS: No, I'm not. But maybe I need to up my game. You're right. [laughs] Someday, I'll get there. And that will only exacerbate this issue that I'm feeling, which is again, I'm clicking the merge button. That's what's happening. And as a result, that means my local branch is now like it's done its job. You've served me well. And in the Marie Kondo sense, I need to hold you up, thank you for your service, and then let you go. But I obviously wanted to automate that. So Git gone does that automation, and it was fun. 
So I found a blog post which we'll include in the show notes, that had most of the pieces here, but it was still fun to play with the shell pipeline in a way that I hadn't in a while. So it does a Git fetch and then git-for-each-ref with a particular structured format that references the upstream of the branch then uses awk to search for the word gone. Because Git, if you print it out in this particular way using this format, it will say the local branch name and then the upstream. But if you've deleted the upstream, it will specifically say (gone) in brackets, so you can actually use that to filter them down. And then I pipe that to git branch-D so..well, xargs of course. I love a little shell pipeline. As an aside, these are fun little things to build up. So that is Git gone. And then the other one that I have is Git down, which is what I use more. And Git down works on top of Git gone, so it's Git checkout main and and Git pull and and Git gone. But that means I get to type Git down into my terminal whenever a branch happens to get merged in the upstream land. [laughs] STEPH: [laughs] Oh, that's adorable. I love it. I like the Git gone, and yeah, I like the Git down just for fun. You are inspiring me where I now really want a Git bless your heart that's like maybe a Git blame or a Git revert. [laughs] CHRIS: I've definitely seen people do Git praise as an alias for Git blame. STEPH: That's nice. CHRIS: But Git bless your heart is...ooh, I love that. STEPH: [laughs] I might have to add that just so I can type it, and then someone can say, "What are you doing?" [laughs] Cool, I love it. CHRIS: Little things, little fun bits to add to your day and to automate and have a little fun while you're at it. So that's where I'm at. STEPH: All about the communication and fun. That's what I'm here for and the singing. Let's not forget the singing. CHRIS: And the singing, of course. STEPH: [singing] On that note, shall we wrap up? CHRIS: Let's shall. Oh. STEPH: [laughs] CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Bye. ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
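[Editor's note: the exact aliases live in the blog post mentioned above, but based on the pipeline Chris describes (fetch, for-each-ref with upstream tracking info, awk matching "gone", xargs into branch -D), a sketch of git gone and git down might look like the .gitconfig entries below. The --prune flag (needed so a deleted upstream actually shows up as "[gone]"), the main branch name, and GNU xargs' -r flag are assumptions.]

    [alias]
      # "git gone": delete local branches whose upstream has been deleted,
      # e.g. after merging via GitHub's green merge button.
      gone = "!git fetch --prune && git for-each-ref --format '%(refname:short) %(upstream:track)' refs/heads | awk '$2 == \"[gone]\" { print $1 }' | xargs -r git branch -D"
      # "git down": check out main, pull, then sweep away merged branches.
      down = "!git checkout main && git pull && git gone"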

All Ruby Podcasts by Devchat.tv
Audit Logging in Rails - 538

All Ruby Podcasts by Devchat.tv

Mar 9, 2022 - 63:17


If you think all audits suck, think again. In this episode, the Rogues sit down with Jeremy Smith, a developer and writer who's ready to show us the RIGHT way to implement audit logs in Rails. “I want to be cautious about how much I bring into a code base. As gems grow, they accumulate more functionality.” - Jeremy Smith

In This Episode

1) Why you NEED audit logs in Rails this year (and how to keep your clients happy with them)
2) Jeremy's recommendation for top-notch audit gems
3) The answer to “Build it or Buy it?” to keep your code (and your business) tidy

Check out Jeremy's work: Audit Logging in Rails

Sponsors

Top End Devs (https://topenddevs.com/)
Coaching | Top End Devs (https://topenddevs.com/coaching)

Picks

Charles - 7 Wonders Duel | Board Game | BoardGameGeek (https://boardgamegeek.com/boardgame/173346/7-wonders-duel)
Charles - Encanto | Disney Movies (https://movies.disney.com/encanto)
Charles - Airmeet (https://www.airmeet.com/)
Jeremy - The Trusted Advisor (https://amzn.to/36bciBy)
Jeremy - Brian Lovin (https://brianlovin.com/)
Jeremy - How I Built a Date Picker - Decoding (https://decoding.io/2022/01/how-i-built-a-date-picker/)
Luke - Clean Agile by Robert C. Martin | Audiobook | Audible.com (https://www.audible.com/pd/Clean-Agile-Audiobook/B08X7D3YFR?source_code=GO1DH13310082090OZ&ds_rl=1262685&ds_rl=1263561&ds_rl=1260658&gclid=CjwKCAiA4KaRBhBdEiwAZi1zzkl2upvhlcZS1ckhZThjcKCZ8M5puFXiMbcekIKA8foM8xkVCqRUxhoChZ8QAvD_BwE&gclsrc=aw.ds)
Luke - Effective Testing with RSpec 3 (https://amzn.to/3Kz8b0P)
Valentino - Github Codespaces is Great, but Mutagen is Better (https://technology.doximity.com/articles/github-codespaces-is-great-but-mutagen-is-better)
Valentino - Light up your WFH meetings with HomeKit (https://technology.doximity.com/articles/light-up-your-wfh-meetings-with-homekit)

Special Guest: Jeremy Smith.

The Bike Shed
329: Fire Mode

The Bike Shed

Mar 8, 2022 - 31:41


Steph is excited to be headed on a retreat with her mom in the mountains, but before that, she details how she helped troubleshoot a production issue with her team and appreciated their process. She's also looking into tooling around spinning up more machines to process more RSpec tests. Chris had a developer start their new job at Sagewell and highlights how they involved the new person in rectifying potentially missing and/or confusing existing documentation. He also has a gripe, and that is accounts. Handling too many accounts. Additionally, he talks about triaging an error and how it was tough initially to understand if something was actually broken. And then it was even harder to understand what was broken. So he paired through it and used the power of putting two heads together. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris, I am going on vacation next week, and I am so excited about that. It's going to be pretty much a week long. It's like a Tuesday through Friday ordeal. And it's a trip that I'm taking with my mom. So over the past year, she's gotten super serious about her health and nutrition and done a phenomenal job of being very focused on a plant-based diet, which is basically healthy vegan food is what that comes down to. So there is a retreat that's taking place in the North Carolina Mountains that she's really excited about. I'm going to go with her. We're going to do lots of cooking, and hiking, and hanging out in the mountains, and it's going to be lovely. CHRIS: Well, that does sound lovely. STEPH: Yeah, it seems like a really perfect time to disconnect just because you're headed into the mountains. So all you should take with you are books and things that are not iPhones, and tablets, and computers, and screens. So I'm looking forward to that, just to be away from screens for the week. On some more technical news, this past week, I helped troubleshoot a production issue, which was a bit novel for me because the work that Joël and I are doing with our current project it's all in the testing realm. And so it was probably around 10:00 o'clock at night my time, and I got a ping on Slack. And it looked like I was getting called in for a production issue. And I was like, I have touched zero production code. [laughs] So I'm very intrigued how I could have broken production at this point. And so I looked into it, and it turned out that it wasn't necessarily related to a commit that I had authored, but it was for a commit that I had reviewed and then approved. And so their strategy is they create a new channel. They'd gotten a ticket that an error was occurring. And then the site reliability team created a new Slack channel, and then they pinged everybody who either authored, reviewed, and approved that change to be like, hey, we think the issue is related to this commit. Our plan is we'd like to roll it back. 
But before we do, we just want to check in with folks who have more knowledge to help us confirm that, yes, this error message seems related. And I really liked that approach. I really like the idea that it's not just the person who merged the commit that then gets pinged on it, but it's like everybody else who happened to look at this and review it come help us too. So we spent some time looking into it, confirmed that yes, indeed, it was related to that particular commit. And then their team did the wonderful thing of then rolling it back. So then, it was no longer an escalated issue. And so then I asked, "What else can I do to help?" And they said, "Well, from here, it's no longer a production issue. So tomorrow, just follow up with the author and let them know and issue a fix for the bug, and then merge it like normal." So we're back in that normal pull-request flow, very calm. And overall, I just appreciated their process. I like very much how they pulled more people in because I think some of the other people that were involved weren't online, which makes sense because it was really late. So that way, you just spread in case some other people really aren't available that then hopefully you'll get lucky and one of those three or four people are available to help you troubleshoot. CHRIS: That does sound like a really nice and thoughtful and intentional bug response, communication, procedure, rollback, et cetera. All of that sounds like it worked very well and is nice to have. And it's the sort of thing that a larger organization ideally gets to, having these sorts of processes. Spoiler alert, later in the episode, I will talk about the other side of it of being a very young organization and trying to be like, wait, is this a bug? Is this not a bug? Should we roll back? What do we do? That's actually my topic de jour. But what you're describing sounds like the calm even in the case that there is a fire sort of like, yep, we've got procedures. We have workflows. We have communication channels and ways that even the exceptional things can be handled in an ideally as calm as possible way. So that's awesome that that's what you got to experience there. STEPH: Yeah, getting called in at 10:00 o'clock is never fun for anybody. But when it happens, because it's going to happen, then I appreciate the thoughtfulness and that process that they put behind it. So it all went fairly smoothly. And it was also one of those fun things where I haven't met...like this is a very big organization, so I hadn't met any of those people. So when I got pinged on it, and then I hopped in, I was like, hi, I don't know anything about this process and what y'all are doing, but I am here. I'm here to help. Where can I look? What can I do? So it was also a fun endeavor in that regard to just be like, I don't know what I'm doing, but I am here to help. Please let me know how I can help. And it ended up working pretty well. So yeah, that's been a fun adventure for this week. How about you? What's new in your world? CHRIS: What is new in my world? Well, we had a developer start this week, which has been really wonderful. Unfortunately, we had scheduled their first day to be Monday, which was Presidents' Day, and that's a holiday. So we got out in front of that one and figured it out. We're like, no, no, actually, feel free to start on Tuesday. We'll not be around on Monday, so you shouldn't be around on Monday. But then, on Tuesday, they started. 
And we intentionally structured things such that we have a contractor that has been working with us for like seven or eight months now. So it's been a long time and been very formative as well the work with that contractor. So this is their last week, and thus, we very purposefully brought the new person on the team and that contractor together to maximize the amount of pairing and overlap that we have there just to try and as intentionally as possible grab whatever is in their head, get another point of view. Because this new individual on the team will be able to work with myself and the other full-time developer on the team a bunch moving forward, so we want to maximize their overlap with the person who is on their way out. But otherwise, it's been great. We're a young organization, so the version of onboarding it's me running around setting up a lot of accounts, forgetting to set up other ones, getting pings in Slack, and then following up and setting up another account. Eventually, I hope that there are checklists and formalizations and, ideally, one-click SSO magic that makes all of that work. But for now, I'm happy to chase it down. But really, we're just leveraging pairing as much as possible as the onboarding tool to make sure that where we don't have formalization, procedures, documentation, et cetera, as thoroughly built out as I would love to be at, we can shore that up with some time with other humans. STEPH: That's awesome. It's always fun having someone new to join to highlight all the things you need to automate or at least have a checklist for to then help them onboard. But that's really exciting that you've got a new teammate. CHRIS: Yeah, definitely very exciting. And they've been great. They've hit the ground running and a couple of pull requests already and just contributing very effectively within their first couple of days. So that's always wonderful to see. We are definitely taking this moment to document what is undocumented or update the README where it needs to be and start to make that checklist. We have another person who will be starting in about two weeks' time. And so, ideally, that will be even a little bit more fleshed out of a process. So slowly, incrementally get a little bit better with each we add that we get there. STEPH: How much do you involve the new person in creating that documentation? Is that something that you ask them to help build, or is it something you take ownership of? What's that balance? CHRIS: It's interesting. So definitely some I want to be with that person because I think it can often be the easy first PR is an update to the README for like, oh, I tried to set up the app, and it did not work. For this reason, I have now updated the README, and now there's a pull request. And we get to experience that flow via the very low-stakes change of updating the README. So that's a definite one that I like to have. The other is I'll typically ask for the individual to capture as much as possible. There's a very delicate line in my mind between empowering them and being like, yes, absolutely. We're young. We don't have everything documented. So feel free to make changes where that makes sense to you. But at the same time, I know that joining a new team can be complicated, can be intimidating in certain ways. You're not sure what's okay to change? What's not okay to change? That sort of thing. So I simultaneously don't want to put the pressure on someone to be like, "Yeah, no, change anything you want. Literally, nothing is stable here. 
Nothing's glued to the ground. So feel free to pick up anything and throw it out the window." That feels too far in my mind. So I don't have an actual answer like, I'm ideally calibrated at this point. But it's sort of those two tensions that I'm holding in mind as I think about that. STEPH: Well, I really like your answer. I like that balance because I think it's really nice to include the person in those changes and also just because they're going through it. So they happen to have that insight, and it's fresh. But I agree, when you're joining a job, you want some stability and confidence that the people that you are joining that team with are also working hard to make it a very positive onboarding experience. And if you just were to push all of that responsibility on to them to be like, "Yeah, we know. We don't have this organized yet. So you tell us everything that we need to do," that would feel unkind to that new person. I think as a new person that I wouldn't fully enjoy that. I don't mind some of it, but I wouldn't want all of it. I'd have nervousness around ownership, around improving processes, and who that belongs with. CHRIS: Sort of a classic case of it depends, or it's a little from Column A, a little from Column B, but definitely some, just hopefully not too much. STEPH: The Goldilocks of onboarding, some onboarding responsibilities, but not all of them, just the right amount. [laughs] CHRIS: Shifting gears slightly, though, I just want to gripe for a minute. I'm just going to gripe. This is not my normal mode, but I'm going to lean into it. STEPH: Do it. CHRIS: Accounts, just accounts. I have so many accounts now. There are so many across different systems, and I'm trying to do the good thing, which is let's stop using personal accounts for anything and only use organizational accounts for the things that are for work. And some organizations do a great job with this. GitHub, I'm looking at you; really well done, super happy with the way that you folks have implemented accounts. You get that I am one human being that contains multitudes. I am my personal self; I am my work self. I am maybe even another version of work, and you get that. And you usually let me exist as all of those versions of myself and, man, do I appreciate that. Heroku, you're okay. Like, it's all right. You treat the different facets of me as different accounts, but that's okay. You make it relatively easy to switch between. Although you do make me two-factor auth and re-login every single day, and I don't love that. So I don't know what's going on there, but fine. Trello, aka Atlassian, I guess at this point, come on, what are we doing? What's going on here? So originally, I had started, and I had the one Trello account, and I had my personal boards. And then there was the Sagewell organizational account. And within that, there were some boards, and I would just bounce back and forth. But I realized, no, I need to do the right thing. So I created a new Trello account. And now Atlassian just forces me to switch between them, and it loses the link that I'm going to often. It's a different login interstitial screen. And it constantly shows me that like, hey, you don't have access to this. Do you want to switch accounts? And I say yes. And then they take me to a screen where I can pick between two options, the one that I was that didn't have the ability to do it and another. And as a developer, I know that the thing I'm about to say is not fair. But come on, folks, you could know the answer to this question. 
There are two, and one is the wrong answer, so the other one is probably the right answer. You don't need to autolog me into that; I get it. Just emphasize it because they almost look identical on the list. I have now accidentally tried to request access with my secondary account to my other account, and I can't get out of that state. So now, one of the ways that I try and do this it shows me a list of them to pick. The other it says, "You have requested access. We're waiting to hear back." And I'm like, no. So anyway, that's a thing. STEPH: So I know people can't see me. [laughs] So I'll narrate that I'm dying over here because I very much appreciate that we are positive people. We are very focused on bringing positive energy, but the descent into the amount of shade that you're throwing at different applications [laughter] just really made my day, and I feel that pain. I have felt that pain with Atlassian and can relate. And we should have some gripe sessions. This feels healthy. This feels very...okay, well, I don't know for you. I'm the one that's laughing and getting joy out of this. I don't know if it's helpful for you, but it feels very cathartic to me. [laughs] CHRIS: It is definitely somewhat cathartic. I think there's utility in having these sorts of conversations. And throwing shade at Atlassian, whatever, they're doing fine, so I'm not super worried about it. But generally, we try and keep things positive because I think that's, frankly, a more effective way to communicate. But occasionally, it is useful to look at the things where I'm like; that is a pattern that I do not want to repeat. And I'm sure that there are complex organizational enterprise-y reasons that it has to be this way. But I can look at that and say never that. That experience as a user is like, wow, yeah, I just tripped over nine layers of your enterprise there just trying to do very simple day-to-day things for myself. So I want to avoid that. I've griped about that one login, not the company OneLogin. But that one login page that I've experienced where I start to interact with the form, and suddenly some JWT handshake in the background happens, and I'm now logged in. And it just rips the page out from underneath me. That is unacceptable. That is not okay. And I really do think there's something worth occasionally looking at those and being like, well, not that. But anyway, I should probably stop my gripe session now. STEPH: [laughs] Well, if I may join in, I have one that I'd like to share. Since we're on this -- CHRIS: Throw it on the pile. What else we got? [laughs] STEPH: [laughs] So there was some code. There was a piece of code that I was looking at that was very not friendly. It was difficult to understand. It took a while to parse through what are they actually doing? What records are they creating? Why did they choose this manner? Why are we iterating over these particular numbers? What's the outcome here? And I was pairing with Joël and was going back and forth having a conversation trying to be the detectives of why this code exists, and we finally got there. And we finally understood what it's doing and why. And I just lost it for a minute once we finally got there. [laughs] I just thought the way this code is written, it does not improve readability, and it doesn't improve performance. All it did was make my life harder because it was very difficult to read. 
So all they did was become really clever with the code that they were writing and essentially drying it up, which I have such a beef with DRY because it has caused me pain. And so they essentially were drying up their code or introducing a way to make it just take up fewer lines that took up less vertical space. But overall, I was very grumpy about it. And Joël was very kind about it and was like, "Well, this is the type of code I could see maybe why they did this." But you're right; it doesn't help with readability and performance. And he was helping balance out my grumpy goose moment. I've been having a lot this week; maybe it's just the week I'm in. I'm in more of a fiery mode this week [laughs] with some of the code that I'm seeing, and that was one of them. That was the please, please, please don't DRY up your code. If it doesn't improve readability or performance, there's just no need. There is no benefit. CHRIS: Well, I definitely know that feeling. And I think I've probably, as a developer, gone through that arc where early on I was just trying to make stuff work, and then I learned how to be clever. And suddenly, being clever became a game that I could play. And then, pretty early on, I realized I would come back to my own code from two weeks ago and be like, what the heck does this do? I have no idea. And that's when I was drawn to Ruby. That was one of the things. I'm like, oh, I can write code that looks so much like the clear words that I have in my head about the thing. I like that. And so much of my career has been spent in the let's make it obvious and revisitable. I actually remember very clearly early on in my time at thoughtbot, I was working on something and was working on it with Joe Ferris, who is the CTO of thoughtbot and a very clever individual, and I mean that in the truly positive sense of the term, one of the most capable engineers I've ever worked with. He was describing an anecdote, but it was basically he'd put up a pull request. And someone replied, "Oh, that's clever." And Joe's reaction was, "Oh, crap." Just taking that as not an insult but as someone saying, oh, that's clever in a positive way, and Joe hearing that in the negative form of I went too far here, or this is not obvious in its initial interpretation. That really stuck in my head from there, just his reaction to it immediately of that being not a good thing. And I was like, that is interesting. And all the more so over time, I've come to believe that clever is probably something to avoid in code. STEPH: Yeah, agreed. I'm at the point that if I do see someone who's done something that I do think is clever in a positive way, I will still abstain from using that word clever because I do want to make sure they don't think that I'm saying in a bad way that this is clever, that it's not readable, and it's not friendly. So I totally avoid that word when I'm complimenting someone's code just to make sure there's no confusion. CHRIS: It's one of those words that got away from us that we lost the definition of, and then we came back, yeah. Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. 
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Let's see. In other news, you had mentioned this earlier, and then I had mentioned my side of it but errors in alerting and all of those sorts of things. They're an interesting question. We had a small situation over the weekend that turned out to be kind of real, kind of not real. But I happened to be away on vacation. I did have my computer with me because, at this point, we're early enough. And I'm like, I'm going to take my computer everywhere and just be ready in case it's necessary. And in this case, I did get a ping. I looked into it and what was unfortunate is it wasn't immediately obvious if something was broken or not. And to a certain degree, that's always going to be kind of true. There's so much noise, so many requests hitting a web application. And how do you tell the good ones from the bad ones? And ideally, I could threshold around certain volumes of traffic, but even that's going to have spikes, and ebbs and flows and things like that. So it was very hard initially to understand is something actually broken? And then all the more so to understand what was broken. Thankfully, it was tractable. It was solvable. And we've done, I think, some good work especially considering how early on we are and how we've instrumented things in Sentry, in particular, our usage of Sentry and also somewhat in the logs. But again, I think I've talked about this before, but I'm feeling this tension around there's data. There's data just kind of like, what happened? And right now, we've got logs. That's one of the places that goes to Sentry if it gets escalated up to that level. And we sort of have a weird Venn diagram between logs and Sentry. And then we also have analytics as another thing and then eventually data science, and what do we want to try and learn? And all of these kinds of want different facets of it's not the same data set. But I wonder, is there a superset of data that then we could filter and slice and cut up, do all those sorts of things? I think this is the dream of Honeycomb and platforms like that, but I'm not even certain if that's true. And so I'm in that awkward middle space is how I would describe it. But in that particular case, I was able to resolve it. I did take away as an action it's probably time to start thinking about PagerDuty anomaly detection, that sort of thing. When does alerting happen? When do engineers actually get calls when not just during the normal nine-to-five of the workday? So I'll be investigating that in the coming weeks and see where we get to. But it's sort of the first thing that really pushed us in that direction. The other thing I'll say is we have the idea of the point dev, which I've talked about on a couple of episodes. But the idea is for each week, one individual on the engineering team is in charge of the noise, for lack of a better term. They're looking at the error stream in Sentry. They're looking at any ad hoc requests that are coming from our admin team, et cetera, et cetera. And that's been really great. 
But one thing that I've noticed is that dealing with the errors is particularly tricky and what we did in this particular case was just to pair on that. As an individual, it is really hard to sometimes to reproduce, sometimes to just understand these are the things you didn't expect in your code, and therefore they are, by definition, harder to understand, harder to think about. And then sometimes you get to an understanding. You're like, ah, what do we do about that? Do we care? Do we not care? Is this just noise? Is this something we should solve? Is it something we should solve soon? Or is this something we can solve whenever we get to it in the backlog? And making that sort of determination is all the harder. And so I'm increasingly of the mind that there should be some amount of time that is pairing on that error backlog to bring two heads together. I hadn't been thinking of it this way, but I've now come around to thinking this is a really great place for pairing because it's so hard for one individual to deal with that complexity to make the hard value judgments. And to do that, if each individual does that in a vacuum, then we have n different value systems at play that are hopefully very similar. But if we start to pair up, then there's osmosis between those groupings. And ideally, we sort of coalesce towards a shared value structure around, like, what can we ignore? What should we snooze for a week? What should we put in the backlog? What should we prioritize and fix immediately? Because I think those are really hard things to otherwise...that's really hard to document, I would say. I would love to write up a page in the Wiki that says, "This is how you treat errors," except each error is a unique snowflake, and you just have to follow your values. STEPH: I have been on teams where we've written up documentation that helps you triage an error because you're right; you can't write documentation around a specific error. But that I always found really helpful where it was like, here's all the links that you can look at, here are some recommendations. When we were working on an application that was falling over more often, there were some specific outlines around if you see this problem, then this is typically how you can solve it. And then we had to fix that at a larger scale, but it was a nice band-aid to get us through at that point. I like the idea of pairing, especially as you mentioned; it's tricky. It's funny when you mentioned capturing those errors and putting them into the backlog because I like that idea that then you can prioritize and bring those into the sprint. It just made me feel a bit hesitant. If we don't work on it now, we're never going to work on it. But then that feels unfair to say because it really comes down to the team. If you have a team that's going to be able to look at those errors and say, "Yes, we're going to bring them in and prioritize them," then that feels really good to then be able to say, "This is an error. Let's capture it. Let's provide some content around it. But it doesn't need to be addressed at this moment. It's still pretty low in terms of risk for users or at least low in impact for users." So yeah, I guess it just depends as long as the team feels good about being able to prioritize errors, which I feel confident that your team would be able to do. And if you can't, then y'all could reassess that plan. CHRIS: That's why we definitely have that. We're revisiting the errors. They're part of the same backlog as everything else. 
So they're coming up in relative priority and getting worked on and getting resolved. But we're also shifting our thinking just a little bit to say, "We should take a little bit more time in the moment to try and resolve some of these where we can." I have the dream of there are just zero bugs ever. But that's hard, especially in different platforms. And we're seeing a lot of mobile traffic and from different older Android versions and so weird JavaScript edge cases and things like that. Like, why does your runtime not have object? That feels like a thing every JavaScript runtime should have. But that's a joke. Every JavaScript runtime, I'm pretty sure, does have object but that sort of thing. It's like, whoa, this is weird and specific to this one device. Cool, those are fun. So yeah, giving a little bit more time to do those. And again, so we definitely do have the document that describes here are the places to look and how to think about this category of error and this category of error. But at the end of the day, you get one that's just like, there's not a ton of detail in the error. It's hard to reproduce. It might be device-specific, et cetera. And so what do you do in that moment? And that's where we're trying to...I think pairing is a great way to share that thinking around the team. So overall, it's been great, though. I think everyone who has been involved has been like, "This was better than when I did it on my own," so cool. STEPH: Awesome. That sounds great. CHRIS: Yeah, I think so. This is one of those ever-evolving facets of how we work as a team and how we build the platform. So I will certainly report more in future episodes, but for now, happy with that. And yeah, what else is up in your world? STEPH: Yeah. So we've been looking specifically into tooling around how we're going to spin up more machines to process more RSpec tests. So specifically, we have around 80,000 RSpec tests that we are processing, and we have one machine that is parallelizing those and takes around just for that portion of the build because then there are other tests and things that get run that brings it up to about a total of 30 minutes. But for the RSpec portion, I think it's probably around 20-ish minutes to process those 80,000 tests. So we split that across four different containers, and then we run those tests. And so we'd really like to spin up more machines to then process because we've reached the point that we have given as much power to that one machine as possible. So now we're looking to add more machines. And one of those solutions that we're looking at is using Buildkite, which is built with the idea that you can add these build steps so then you can more easily say, "All right, once we get to this particular build step, hey Buildkite, we'd like to run n number of machines to process all these tests." And that seems really nice. And it is something that we are interested in. It is actually what Shopify uses. They use Buildkite ci-queue, which is built for mini-tests, which is what they use, and Redis to then run all of their tests. But we are using TeamCity, so we're not using Buildkite. And we would like to see if we can grow with our current CI infrastructure versus having to move to a new one. There's a lot of just risk involved in moving to a new one. And so we've been studying hard if TeamCity will let us do this. And so far, the answer has been no. 
But just recently, we found somewhere in the docs that it looks like there is a chance that with TeamCity, we can inform TeamCity that, hey, even though we have just this one build step, instead of only giving us one agent or one provisioned machine to then run these tests, instead that we actually want to spin up a couple of machines to then process these and then aggregate the results back to this one step. So we're looking into that. But I wanted to throw this out there in case anybody else is also using TeamCity and has already invested in this particular approach. I would love to hear about it because we are currently figuring out the capabilities and if this is something that we can stay with our current infrastructure or if we're really going to have to look for a new solution. CHRIS: Well, I'm hopeful that someone out there can give you some input. I definitely get the idea that you're stuck, and stuck is maybe too strong of a word. But if TeamCity is not ideal, the idea of moving off it does feel exceedingly heavy and the riskiness that you talked about. That's, I think, a critical word here because I think it's easy to think of CI as like it's a very important thing. But that's absolutely critical as part of your deploy pipeline, I assume. This is speaking generically about CI, and so it is, in fact, a critical piece of the infrastructure. If you've got a bug on production and suddenly CI is down, what do you do? I guess you can test locally and decide you're going to push past it, but then you have to circumvent it. And so I understand the intentional way that you're thinking about that and the risk associated. I do wonder, though, if TeamCity has felt like not the right platform for a while and if there are considerations. Is there the possibility of both trying to improve the world that you have now, so it's not the big move off of it but then also in parallel start to work on an alternative implementation? This is perhaps not entirely fair, but it feels like a Rails application is this repository of code. And typically, CI is configured via a file. And that's like, if you've got your teamcity.yaml or whatever it happens to be, could there also be a buildkite.yaml that is not on the critical path for deploying or anything like that? But it is a way to, frankly, somewhat inefficiently test on two different platforms but start to see if you can get the code moving on a different platform and be able to gradually build out and make that transition possible without it being one big swap over sort of thing, which eventually it would need to be. But just wondering, is that happening in parallel? Is that a possibility? STEPH: I think the short answer is, I'm sure there is. There's a way to look at the existing system and then find ways that we can tweak it. But I also know that the team has already invested a lot into working with the current system and making it as efficient as possible. So I don't know if there's any true big impact but intermediary steps that we can take. We are definitely in that proof of concept world. So we're not going to move anything over for the rest of the team until we can really prove that something is working for a small subset and then start to expand from there. But currently, our idea is to dig further in TeamCity, which I think also includes just a call to their team and say, "Hey, we'd love to talk to one of your engineers and see if the thing that we're trying to do if it's possible. 
Let us know if it's not and if we need to look elsewhere," which is intriguing to me because having a lot of tests isn't new. There are tons of companies that have lots of tests, and they want their CI test suite to be fast. So a company that then has built software that helps Team execute these steps that then the ability to say, "Hey, I want more machines to process. I want to give you more money and to give us more machines, and we can process more things." I feel like that should be a thing. And I'm getting at the edges of my knowledge. This is why we're exploring all of this. But it has been surprising to me to realize that that doesn't seem as easy of a thing as I would have expected it to be. There are also some other concerns around here where the client that we're working with if we're going to work with third-party vendors, then we have to get special approval to work with them. It's not just a hey, we can just go try it out. It's a lengthy contract process that we'd have to go through. So there are also some constraints that we have to keep in mind where we can't just work with anyone. We need to be careful to make sure that they're certified in a particular way. So yes, I like your idea. I will definitely keep it in mind. But I don't know if there are any true intermediary steps yet other than the building out a proof of concept and then finding small ways that we could move over. Then I think that would be ideal for sure. And then hopefully, if there's anybody that's listening that has experience with TeamCity or Buildkite, that's the other tool that we're looking at using, let me know. I would love to chat about it and find out your experience. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
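[Editor's note: the splitting mechanics Steph describes are platform-specific, and the TeamCity details were still an open question as of this episode. Purely as an illustration of the general idea, a naive file-based split of an RSpec suite across N machines could look like the shell sketch below; WORKER_INDEX and WORKER_TOTAL are hypothetical variables a CI platform would supply, and real-world setups usually balance by recorded test timings rather than file counts.]

    #!/bin/sh
    # Naive split: give each of WORKER_TOTAL machines every Nth spec file.
    WORKER_INDEX="${WORKER_INDEX:-0}"   # 0-based index of this machine
    WORKER_TOTAL="${WORKER_TOTAL:-4}"   # number of machines in the pool

    FILES=$(find spec -name '*_spec.rb' | sort |
      awk -v i="$WORKER_INDEX" -v n="$WORKER_TOTAL" 'NR % n == i')

    # Guard against an empty slice: a bare `rspec` would run the whole suite.
    [ -n "$FILES" ] || exit 0

    # Each machine runs only its slice; the CI server aggregates the results.
    bundle exec rspec $FILES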

The Bike Shed
328: Terrible Simplicity

The Bike Shed

Play Episode Listen Later Mar 1, 2022 52:36


Chris is helping with efforts to introduce security practices and policies at Sagewell. Right now, they are refining the usage of 1Password to standardize passwords and secure information. He also shares what he believes is a terrible idea for fixing inconsistencies between symbols and strings. Steph shares an update around factories. Also, at Sagewell, Chris is helping to build mobile apps, one for iOS and one for Android, and is considering whether to have them be fully native. Good idea? Terrible idea? Chris and Steph riff on that a bit. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Services down? New Relic (https://newrelic.com/bikeshed) offers full stack visibility with 16 different monitoring products in a single platform. GitHub - alassek/activerecord-pg_enum (https://github.com/alassek/activerecord-pg_enum): Integrate PostgreSQL's enumerated types with the Rails enum feature Feature #7792: Make symbols and strings the same thing (https://bugs.ruby-lang.org/issues/7792) - Ruby master - Ruby Issue Tracking System RailsConf 2016 - Turbolinks 5: I Can't Believe It's Not Native! by Sam Stephenson (https://www.youtube.com/watch?v=SWEts0rlezA) GitHub - hotwired/turbo-ios (https://github.com/hotwired/turbo-ios): iOS framework for making Turbo native apps Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Weird stuff happens when we sing, Steph. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: Hello, Steph. What is new in my world? We are continuing with some of the efforts that we're doing to introduce security, and practices, and policies, and all those fun sorts of things at the organization. One of the things that this is pushing on is we are further refining our usage of 1Password at the company as a way to standardize passwords and secure information and how we store that, how we move it around, as well as integrating SSO and all those other fun, fancier things. But I'm personally historically a LastPass user, and now I'm getting to experience 1Password. So now I'm a child of two worlds, and it's terrible, and I hate it. I hate every moment of this existence. So what I need to do is move over to 1Password, but now I'm in that space where I'm like, I can see the flaws of both systems. This is terrible. I don't like it. 1Password does seem to be great; I will say that. There's one really interesting thing about 1Password. I'm interested...you're a 1Password user, right?
The one thing that I find really interesting though is 1Password has a feature where it will do two-factor authentication. It will do that for you. Specifically, it's doing, as far as I can tell, the TOTP. I don't know what that acronym stands for, but it's the fancy type of two-factor, so not SMS, not text message-based. And then WebAuthn is a thing that I've heard of, which I don't know if that is distinct from YubiKey or hardware keys. So there's a bunch I'm trying to learn about in this space. I'm very interested in the hardware keys because those seem cool. WebAuthn seems like a new standard. That sounds cool. Don't know anything about it, though. So mostly, I know about SMS, and I do not like that one. I do not want to use text messages because, as far as I understand it, they're not super secure. So that's not the space I want to be in. But the TOTP, the Google Authenticator, or Authy, or that space of two-factor code generation tools, those seem good. And 1Password has a feature where they're like, hey, yeah, sure, we'll have your password and your two-factor. And so they grab the QR code, which, as far as I understand it, is typically the way to share the seed. And then, that seed is used by an algorithm to generate the current code value for a given point in time. So it's like, given that seed and the current timestamp, we will generate the relevant code, which can then be verified on the far side. But that code only exists for one moment in time, et cetera, et cetera. But I've always thought of it as this separate thing. The idea of having that all in one system is interesting and kind of scary to me. But as I think about it, I'm like, if 1Password or LastPass, in either case, gets compromised, we're all done. Like, this is over. We should throw in our cards, give away the internet. This whole experiment has failed is my sense. But it was very interesting because I had not seen this. I've always had these as separate systems. So for me, I have had LastPass, and I have Authy on my phone for the two-factor. But it's frankly very clunky, and I don't like it. And the 1Password thing is fantastic where I say like, yeah, 1Password, fill in my password and username, and then also fill in my two-factor because you have it. This is great. But, and this is where I hesitate, and I don't know, I will say this: I trust that 1Password has thought about this deeply, way more than I can, and has come to a place of deep confidence that this is a fine and okay thing to do. But I'm still intrigued. What's going on here? STEPH: That was a lot. I have so many thoughts. [laughs] CHRIS: Sorry, that was a lot of words, a lot of ideas, a lot of space there. It's just where I'm at. STEPH: People couldn't hear me, but I was laughing when you were talking about LastPass or if these accounts get hacked into. And I'm imagining someone who uses the combination of their cat's name and their birthday as their password and then like, aha, I win. [laughs] It's like, no, we just all lose. [laughs] But that amused me. Going back, you talked about having it all in one place. And that actually doesn't surprise me that we're different in this area. Because you also like all of your email...you like one source of everything, which makes so much sense, but I'm different. 
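For reference, TOTP stands for time-based one-time password, and the scheme Chris describes can be sketched in a few lines of Ruby using the rotp gem. The secret here is a placeholder for the base32 seed a provider would normally share via that QR code:

```ruby
require "rotp"

# Placeholder seed -- in practice this is the base32 value encoded in
# the provider's QR code.
secret = "JBSWY3DPEHPK3PXP"

totp = ROTP::TOTP.new(secret)
code = totp.now    # e.g. "492039", valid for the current 30-second window
totp.verify(code)  # the server runs the same algorithm to check the code
```

Note that it's the code that is ephemeral; the seed itself persists, which is why storing it in a password manager carries roughly the same risk profile as storing the password itself.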
And with these accounts, I like that I have the distinction between all of thoughtbot is in 1Password while all of mine is in LastPass because it's just a very clear delineation between those two accounts. And I'm sure both of these platforms have figured out a really good way to then separate those two. But I just remembered there was someone at thoughtbot that accidentally...because they have everything in 1Password, they accidentally shared their personal vault with a client. And so they were just typing in Slack. They're like, "Oh, shit, oh shit, like, how do I undo this?" And we're all just watching like, "We don't know. But please let us know how it turns out." [laughs] It turned out fine. I think they actually realized they hadn't fully shared it but based on the UI they thought that they had. So it all turned out okay. So that just lives with me. I'm a little scared of that now now that I know that story. So watch out, friends. CHRIS: Oh, wow. Well, now, yeah, I'm also now scared of that. I wasn't, but now I am. STEPH: And I forgot the other thoughts now. Those were my two main thoughts based on the journey that you've shared. CHRIS: Particular to the thing you were sharing there, yes, now I will have nightmares about it. But also, it feels manageable because they're both entirely different accounts, and then also within that, there are different vaults. So as I'm building up the password infrastructure at Sagewell, there's going to be different...like, the dev team will probably have one vault and then a shared vault for the dev team. And then other teams within the organization will have that. And so it feels like there are at least structures within the tool to manage that. But mostly, my consideration is around the two-factor thing. And like, is this reasonable to do? And again, I'm sure 1Password has thought way harder than I have about it. And I trust that they're like, yeah, this seems fine that they're not just like, I don't know, it doesn't seem bad. They're like, no, no, definitively for information-theoretic reasons, this is fine. But it was surprising. STEPH: That was it. The other comment that you made about two-factor auth that resonated with me because there was a point not that long ago where we have one of those, either New Relic or I forget which account it was, but it was with the systems. We really only needed one person to have access, but every now and then, someone else may need to access that account. And so we wanted to be able to store it in 1Password or LastPass somewhere like that. But then the two-factor auth was a problem because then you had to coordinate with that other person to say, "Hey, I just need to check something. Would you let me in?" And because we could then leverage that feature, then we could just store all of it. And then that person could just go to 1Password or LastPass and then have access to all of it, and that was really nice. That was a very nice solution to I want to say it was a small problem but yet also very important for team happiness. So that was really nice. CHRIS: The amount of times that I've been like, "I just tried to sign in to the shared account, and it says that it sent a two-factor request to somebody's phone, but it didn't tell me whose phone. And I'm not sure if we know who that person is or if that person's still around," that version of the story feels true. And so, the idea of being able to centralize two-factor seems great. It almost feels too good to be true, is perhaps where I'm at. 
I am putting on my tinfoil hat, and I'm saying, yeah, but oh man, security, though. And again, I will 100% defer to 1Password on this. They've thought about it. But mostly, I want to get to the place where I understand the thought process that they went through to decide that this is perfectly fine because they definitely did that work. I'm certain of that. I just want to read a white paper or something, and I haven't found it yet. [laughs] I'm like, let me get to that deep place of trust because that's where I want to be with security tooling and those sorts of things. STEPH: Yeah, I haven't looked for something like that, but that sounds...I'm kind of surprised that doesn't exist. CHRIS: Oh, it quite possibly exists. I haven't done much of a search, frankly, at all. Mostly, I'm in the space of like, huh, that's weird, and then moving on with my day. Because there's not a lot of free time to go search for the white papers on the internet. But yeah, so I'm moving from LastPass to 1Password, or maybe I'll just end up with both for a while. I really hope I don't end up in that space, although you're describing it as a positive, so maybe I will. STEPH: I have found it helpful for me. When you find that white paper, because you are more likely the type of person to read that white paper than I am the type of person to read it, then I would love a summary. That would be much appreciated. CHRIS: I'm so intrigued by the persona that you're describing of me; like, you're the kind of person who would read a white paper. I'm like, well, I don't know if that feels true or if it's definitely true or definitely not true. But if I do happen to find it, and especially if I happen to read it, [laughs], I will share it with you and perhaps with the listeners as well. Let's see, one other small thing. I have a bad idea. I don't want to share the bad idea with you. I want to share it more with the audience, and then I want the audience to tell me exactly how bad of an idea it is. STEPH: [laughs] CHRIS: Because I'm sure it's a bad idea. I'm just not sure how bad. STEPH: I love that there's not even a scale of goodness here. It's just nope, this is terrible, but I don't know how terrible it is. [laughs] CHRIS: What's fun is in the later parts of this episode, we're going to go into a segment of good idea, bad idea, sorry, good idea, terrible idea because I like that framing. No, this one is firmly bad idea, but how bad is the question. So we're working on the app, and we keep running into inconsistencies around symbols and strings. Any Rubyist who has worked in the language for any amount of time, especially in a Rails app, has experienced this unpleasantness. There are strings; there are symbols. They're often used somewhat interchangeably, and yet they're different. You'll hit bugs. You'll hit edge cases. You'll hit nils that you didn't expect to be there because you tried to fetch a symbol when it, in fact, was a string, et cetera. So, what if we just applied HashWithIndifferentAccess everywhere, just deep in the internals of the app or in the Ruby runtime? What if we were to just turn this on? My sense is this would be terrible for performance reasons. My understanding is that's why symbols exist: they are a more performant mechanism. Strings are complicated within the object model of Ruby because they're mutable. These are things that I understand very loosely, as you can tell by the tone of voice that I'm using. But symbols and strings, they're separate. 
They're separate for reasons, performance I believe to be the main reason. But what if we were to just say, well, what if it could be easy, though? That's what I want. Like, this is the promise of Ruby: that I can express my code in a way that feels like the words I would use to describe it to another human. That's the way I always think of Ruby: it's as close to the words I would use to describe the business logic as possible. And yet this symbols-versus-strings thing is just annoying, frankly. And again, I think there are very good reasons for it, I'm sure. But what if we were to just do the silly thing and turn on HashWithIndifferentAccess for everything? I don't even know that that's fundamentally possible. I don't know that there's the relevant hook or the way to do that. But I would love that because we're using it somewhat regularly throughout our app right now, where we're getting data from one API. And in our test suite, it's one way, and in our code, it's the other way. And granted, that speaks to us being inconsistent in our usage. But overall, I would just love for this to not be a thing. And so, how bad of an idea would it be? How much of a performance hit? That's my guess as to what the cost would be. Maybe there's actual fundamental correctness that would go wrong here. But my sense is that by collapsing the space together, we would actually get more correct. I don't know. Anyway, how bad of an idea do you think this is? STEPH: I was thinking through some of the bugs that you're running into. And I think you provided some nice insight around that: it's the fact that you're fetching data from an API. So typically you're parsing. That's how you're getting the string and symbol differences: when you're parsing JSON, you have a mixed case where maybe you have a symbol, maybe you have a string, or maybe you're parsing it differently. Are there other places in the application where that's a concern? CHRIS: I want to say one other place that we're running into it specifically is we're using a lot of enums, particularly ActiveRecord::PGEnum backed enums. So these are Postgres enums at the database level. And then, within our Rails models, we define them as enums. And the enum is typically defined within the model as a mapping of symbol to string. It could be symbol to symbol. I'm not even sure. I think this might be in terms of our implementation. But you say like, it's an enum. The key is foobar with an underscore, and it's a symbol, and then the value is foobar, but it's a string. And maybe both the key and the value could be symbols; maybe that's a thing, maybe this is our fault. But certain times, when you're interacting with the value, it's a symbol. Certain times I find it to be a string. I feel like that's true. I don't think I'm making that up. [laughs] It's possible I'm making it up. But that's another place where I feel that inconsistency. Or there are other values within the system that, as they go through certain type coercion layers, will start as a symbol, and then they get saved to the database, and then they get reflected back, and they come back as a string. And it's like, well, that's unfortunate. It was a symbol a minute ago, and now it's a string. And so our tests suddenly break in this way, or our code is inconsistent. And it's enough of a nuisance that I had the bad idea the other day. And so, I wanted to bring the bad idea to this space. STEPH: I think you're right. 
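The inconsistency Chris is describing looks roughly like this in plain Ruby, and ActiveSupport's HashWithIndifferentAccess is the existing per-hash version of his idea (the payload here is invented for illustration):

```ruby
require "active_support/hash_with_indifferent_access"

payload = { "status" => "active" } # e.g. parsed JSON from an API
payload[:status]                   # => nil -- the key is a string, not a symbol

indifferent = ActiveSupport::HashWithIndifferentAccess.new(payload)
indifferent[:status]  # => "active"
indifferent["status"] # => "active"
```

Keys are normalized to strings on the way in, which is exactly the conversion cost Chris is worried about paying everywhere.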
I think the main reasoning for not having everything just be strings is that performance benefit. And so with HashWithIndifferentAccess, you'd have to loop over everything and convert it. So I imagine, like you said, there would be a performance hit there. I don't know how bad of an idea it would be. But when you said this, it brought up a memory because I remember someone proposing, or the Ruby community talking about, like, what if we didn't have strings? What if everything was just a symbol? Or can we just have one over the other? And there is a ruby-lang issue; it is 7792. And we shall also put it in the show notes and send it to you. [chuckles] And this person is proposing to make symbols and strings the same thing. And then some people call out specifically the idea of using HashWithIndifferentAccess and saying, yes, that works wonderfully, but then you are going to have a performance hit for it. So it sounds exactly like everything you're saying. I don't know the outcome. I mean, clearly, the outcome is we're not there. But it seems like a really good place to see the reasoning or different approaches that maybe people have tried in this space. CHRIS: Ooh, I love that. I definitely want to read that and see what sort of deeper thinking folks have done on this. Because again, this feels like another one where definitely folks have thought about this, folks who know more about it and have chosen the current path that we're on for reasons. But I would be really intrigued if I could be like, yeah, I would just like it to be easy to start, and then have the performance optimization be something that I could opt into. Again, that's probably not tractable within the language. Like, oh, we have a hot code path here where we want to actually have immutable symbols only. And that's the sort of thing where, if we've done this HashWithIndifferentAccess everywhere, you can't back out of it. And so, therefore, you're stuck in a performance low point. That feels like a bad case. And so maybe that's the reason: you will definitely shoot yourself in the foot with this. But yeah, I'm intrigued. So I will definitely read what you're sharing here. And we'll include it in the show notes, of course. I'm probably not going to do this, just saying that out loud because it seems like a bad idea. I just want to know how bad of an idea it is. STEPH: I do love it. When I'm building a class that's working closely with an API, I do reach for HashWithIndifferentAccess frequently. Because like you said, I just don't want to worry about it. I want to set it up top. It's one of the rare times that I actually will use something in an initializer where I'm like, hey, pass in the data. I'm just going to run it through this method. And then all the data from there on forward can be accessed either way. So the class doesn't have to care; a tester doesn't have to care. So I do feel your pain, or I at least will always reach for it whenever I'm building a class specifically around interactions with JSON. CHRIS: So for a segment that I framed as how terrible of an idea this is, you're like, hmm, I don't know how terrible. That seems to be your take, which is interesting. STEPH: Good point. Let me assess for a moment. 
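The initializer pattern Steph mentioned for API-wrapping classes might look something like this; the class and attribute names are invented:

```ruby
require "json"
require "active_support/hash_with_indifferent_access"

class WeatherReport
  def initialize(data)
    # Normalize once at the boundary so everything downstream can use
    # string or symbol keys interchangeably.
    @data = ActiveSupport::HashWithIndifferentAccess.new(data)
  end

  def temperature
    @data[:temperature]
  end
end

report = WeatherReport.new(JSON.parse('{"temperature": 72}'))
report.temperature # => 72
```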
I'm going to go just from skimming this issue, although I think partially this issue is talking about the fact that if you merge symbols and strings, it's like, hey, friend, you're going to break a ton of stuff and break a bunch of libraries, and these two things do serve a purpose. So this may not be exactly what you're looking for, but it has some interesting conversation on there. But embedding it deep down in the app so that it just happens naturally sounds like it's just a performance concern. So yeah, it comes down to the question: how big is the performance hit? So I feel like I can't say it's a terrible idea until I actually know what the performance hit is. CHRIS: So, a plausible question. That's the category that we're going to put this in. [laughter] STEPH: Plausibly terrible, but still worth researching. CHRIS: Not obviously not terrible. But anyway, these are some of the ideas at the top of my head right now. That's a rough summary of my week. Mid-roll Ad Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem. So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy. That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software. So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's newrelic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed. STEPH: I have an update that I can share around factories because the last time we were chatting, I was sharing that strategy that we're pursuing where we're trying to minimize factories and then speed up the CI time by reducing the work that those factories are doing. So Joël Quenneville has done some phenomenal work this past week, specifically improving factories. And he found one particular factory that he was digging into. So some stats before the change: the factory was taking around two seconds, which I know on paper doesn't sound so bad, but it gets more interesting. Total database time was around 1,000 milliseconds. And 833 total database queries were being made, which includes reads, creates, and updates. So then Joël was diving into this, looking mainly to reduce the number of database queries because that's such a big number. So after the change, which took a lot of research on Joël's part, the factory is now taking around one second, so half of that time. 
The total database time is around 666 milliseconds. And the total database queries went from 833 down to 647, so a nice improvement there. But the real wonderful outcome of the story is not just those stats, but okay, so how did we impact CI? So we spent time working on this factory. And we have reduced, and we can see some of that in the stats. But how does that apply to the bigger picture? And so Joël took the time of the last 20 successful builds, and based on those builds, we averaged 27 minutes and 37 seconds for each build. With the factory change that he made, that same test suite was now averaging 21 minutes and 33 seconds. So that shaved six minutes off the build time, which is about a 22% decrease, which is just fabulous. So that was a really nice win from all the work that had been invested in improving that one factory. CHRIS: That's a heck of a haircut there; so glad to see that the efforts are paying off. STEPH: Yeah, it was a really nice win to see that we had researched which factories we should pursue, and then we were methodical about that. And then Joël worked hard to improve this factory and saw such a large payoff. It's one of those areas where the team has already invested a lot of effort and hours into improving the test suite. And it's challenging when you have so many areas that you'd like to improve and 100-plus engineers also contributing to that same codebase. So how do you improve and keep up with it all at once? They had spent about a year, so I think they were recognizing that yes, there are still a lot of areas to improve, but also feeling like small efforts wouldn't move the needle. So it was a nice data point to remind ourselves that we can still reduce the CI build time in a significant way. We just need to be very strategic about where we invest our time in those improvements. There is also an interesting conversation that Joël and I were having because we have a daily sync with each other. We've now been embedded with a team at a client, which is wonderful, but before then, we were also chatting with each other. And we like to chat about code, so we've had lots of fun conversations around code. And one in particular this week came up about how people view code differently. And there's even a tweet that Joël shared that I can link to in the show notes. And there's one view that code is a liability, and if a line can't justify its existence, then it should be deleted. And then there's another view that code is an asset. If a line isn't causing any immediate issues, then why not keep it? And part of the reason that came up was while I was going through and reading pull requests, there was a particular change where someone was memoizing an expensive call, which was great, something that we wanted to do. But then they were also memoizing a very fast operation in two other places where it was just like parsing some params, something that, you know, is super fast and only getting called in maybe two places. And it was one of those that just caught my attention to be like, hey, I love that you memoized this other call, but this one, I don't think we need the additional overhead or complexity of adding memoization. And I found myself when I was writing that suggestion for the author that I was already looking for more than just to say, like, hey, this is more than we need. Because I've realized that often I take that stance of code is a liability. So if we don't need it, let's just get rid of it. 
But I've definitely run into other people where they're like, well, it's not hurting anything, so why can't I just leave it? And getting that kind of pushback on suggestions about removing code. So it was a fun opportunity to think through okay, well, why is this memoization not just unnecessary, but how could it actually cause us problems? And what's the cost of keeping it in, not just the cost of removing it but also the cost of keeping it in? And that was fun to talk about. CHRIS: I'm so glad you're bringing this particular conversation up because if we're being honest, I saw Joël tweeted about this. I saw it. I sent an email to myself linking to the tweet with the subject of the email being ahhhh, just A-H-H-H-H, which I believe was me being like, oh my God, we got to talk about this. I apparently didn't want to write all of those words, so I just wrote ahhhh. But as a handful of asides, one, if you're not following Joël Quenneville on Twitter, @joelquen, that is a mistake, because Joël is one of the clearest, most concise, and effective thinkers about code that I've ever seen. The writing that Joël produces is absolutely fantastic. And having worked with Joël for forever, I still will look at his Twitter feed and be like, well, this is fantastic. You're saying amazing things that I have not heard you say. So, again, strongest recommendation I can make; please follow Joël on Twitter and also via the Giant Robots blog and all of those other places. But in particular, I saw this one come through, and I was like, oh, man, we have to talk about this. So I actually have it up in my email app right now behind the scenes. [laughs] I was like, oh, I want to mention this to you, Steph. So I'm very excited that you're bringing it up in this moment. It is such an interesting thing. It's such an interesting case of like; I deeply believe both of these truths, and yet they do seem to be in contradiction. And so what do we do with that? More generally, I feel like that's true of a lot of stuff in life, like, the ability to hold two competing ideas in your head and be able to know where one applies and where one doesn't. That is a critical thing to get to in life and to figure out how to do, and that's some of the hard work of thinking. But in particular, this one, the idea that code is a liability. You have a line of code...I'm going to read it precisely as Joël wrote it, "Code is a liability. If a line can't justify its existence, it should be deleted. Code is an asset. If a line isn't causing any immediate issues, why not keep it?" And I think for me, if I were to try and interpret this, because I do believe both of those sides, I would apply one during code review. When code is coming into the application or when I'm writing code, do I need this? Do we need this? Is this necessary? Because it really should be necessary to come into the app. But then once something has made it in, especially the longer something's been in there, I think code sort of ages and matures. And so, the longer it's been part of the app and not causing an issue, the more I am liable to just leave it at rest. Just say, sure, or not at rest but as part of the runtime production code. But these are two competing ideas, but I think they apply at different times in the conversation. And so I'm definitely on memoization. In particular, memoization is a form of caching. Caching I have run into a handful of caching bugs in my life, let me tell you. I'll probably run into a few more. So if we can avoid caching, let's do that. 
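The pull request itself isn't shown, but the contrast Steph and Chris are drawing looks something like this; the method names are hypothetical:

```ruby
# Worth memoizing: an expensive call that's hit repeatedly.
def exchange_rates
  @exchange_rates ||= RatesClient.fetch_all # hypothetical slow API call
end

# Not worth memoizing: parsing params is already fast, it's only called
# in a couple of places, and the cached value is one more piece of state
# that can go stale -- the caching-bug risk Chris mentions.
def requested_page
  @requested_page ||= Integer(params.fetch(:page, 1))
end
```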
So that's a particular question around that thing. But again, that idea of like the point in time to have that conversation is during code review or initial authoring or when it's about to come into the app. But if we've had some memoization in the app for forever and you're like, do we need this memoization? I don't know, but don't remove it because maybe it's very important at this point. Maybe it's one of the cornerstones holding up our application. So that's a bunch of thoughts about that. But also super glad that you brought this up because I was very excited about this particular tweet. STEPH: Yeah, there's someone that said something very similar to what you just said around they agree with number one for all new code. And they agree with number two, where code is an asset, for refactoring. And I thought, yep, that's a great way to look at it. And I hadn't really thought about that specific perspective. And so it was one of those moments. Because I do like when people will push back on something that I so firmly believe in, not that this person did. I was, frankly, having a conversation with myself based on previous conversations with other pull request authors that I've had; it's not related to this particular pull request. But in general, when people do push back on something that I do have such a firm belief in...and early eager optimization around memoization is something that I'm just like, I don't want to do it, especially for something that's so cheap and fast to execute and something that we're only calling twice. There's no benefit to it at that point. But then when someone says, "Well, but it's not hurting anything," then I appreciate that question because then it's more than just pushback; it's sort of, well, tell me more. What is the pain that I'm introducing by keeping this in? And then that can be a really nice conversation to have with someone around how, like you just said, I've seen caching bugs, and this could be a caching bug, and they are painful to then triage. And so we've introduced this optimization, but it's actually just going to cause us debugging pain later. And we really didn't even get the reward from it in the first place. So I really like those conversations when I feel like there's a little bit of a challenge, where I'm like, oh, I hold this as a deep truth, and somebody doesn't, and I would like to have that conversation with them. There are also some other fun conversations; one was around introducing a query object, which, as you know, we're both really big fans of. And then there was another great question because not everybody who works on this team is really familiar with Ruby and RSpec. They work in Scala, but then sometimes they hop over to the Ruby side. And so then they hop into the Ruby channels, and they're asking questions. And one of them was around the idea of introducing an RSpec Matcher. And they're like, "Am I doing this right? Is this how you would extract something to then improve your test?" And so that was a really fun conversation around like, yes, you did it right. This is exactly how you write a Matcher. But let's talk about use cases because extracting something to an RSpec Matcher to me means it meets the most generalized sense of usefulness: you want the whole team to use it, and you're willing to put in the extra overhead to introduce what is essentially a new RSpec DSL for the rest of the team to use and then maintain. 
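For anyone following along from the Scala side, a custom matcher like the one under review is built with RSpec's matcher DSL. This one is invented for illustration:

```ruby
# A hypothetical domain-specific matcher.
RSpec::Matchers.define :be_settled do
  match do |payment|
    payment.status == "settled" && !payment.settled_at.nil?
  end

  failure_message do |payment|
    "expected payment to be settled, but status was #{payment.status.inspect}"
  end
end

# Usage in a spec:
#   expect(payment).to be_settled
```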
So it is the most aggressive step that I take when I'm trying to introduce a helpful tool. So then I shared my progression for when I'm extracting something for a test. And first, I will start with just a local method to that test because then it's scoped to just that test. And from there, then I will think about extracting to a shared helper. So maybe it's a module that can get included. But then its scope can still be confined to a couple of tests, but then we've also increased some of its observability. So then other developers will notice it and be able to share with it. And then from there, if I'm like, oh, this is super generic, it is testing time, and it's something that everybody is going to benefit from, then I reach for something like an RSpec Matcher or introducing a custom RSpec Matcher. So lots of fun testing conversations this week. CHRIS: That was a wonderful hierarchy. I like that a lot. I feel like that would make a good blog post. STEPH: There are some things that I realize that I just think of inherently about that I realize that would be fun to share. I'm much better at podcasting than I am at blog posting. [laughs] CHRIS: There's this friend I know, Joël Quenneville, very good at the blogging. He could probably help talk you through writing this up as a quick blog post. But you just described this heuristic hierarchy that you have. And you could probably provide quick examples of each, and I think encapsulate that knowledge. I, too, default to podcasting because it's easy for me to just say stuff here, and then it's there it is. But what you just said also mirrors exactly what I would think of as sort of the hierarchy and the reasons you're like, I'm not sure I'd go all the way to an RSpec Matcher. That hesitation is meaningful and comes from experience that you've had. And again, that seems sort of a trade-off of like, well, why not? Is it hurting anyone? What's the cost here? You know that cost. You have that in your head. And so now if you can capture...I don't want to put work on your plate. But I think that would be a great blog post. I would be happy to read that blog post and share it with other folks. STEPH: Cool, cool. Cool. So I totally hear you. So here's my hierarchy. Typically, I start with a podcast, and then I share it there. And then maybe it'll go to a tweet. And then once I'm like, okay, this is super generic, it can help everybody, then we've reached blog post status. CHRIS: I love how tweet is higher in the hierarchy than a podcast for you. That somehow the throw away let me just have 140 characters or 280, or whatever we're at these days, that somehow that's next in your hierarchy. But I agree; I share that place in the world. STEPH: Yeah, just writing is hard. Here I get to show up, and I say things. And then we have wonderful Mandy, who is then editing all of our words, so there's a safety net here. If it's just me and a keyboard, who knows what's going to happen? CHRIS: Then you'll probably think about the switches that you're using on the keyboard. And do you need a new keyboard? Should it be silent? What do we do? STEPH: I was thinking more how many exclamation marks do you use? That's always a question. CHRIS: Not too many, not too few. It's a difficult question. STEPH: [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. 
With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Pivoting just a bit, [laughs] what else is going on in your world? CHRIS: What else is going on in my world? So we are building out a whole platform over here at Sagewell, and one of the things that we need to build is a mobile app or, frankly, two mobile apps, one for iOS and one for Android. And I'll be honest; I resisted this for a while. I am a big, big believer in the web as a platform like deeply in my heart of hearts. That's the place that I want to spend my time. That's the thing that I believe in. And there are absolutely cases where truly native mobile apps shine, completely outshine what we can do on the web platform sometimes for reasons that are, I think, not great, limitations of the available mobile web platforms, et cetera, reasons that I'll slam my fist on the table or whatever it is. But there are plenty of really great mobile experiences, offline, et cetera, that we just can't...offline is not even a great example. See, I can't even find a great example. There are definitely things, though, where truly native mobile apps are 100% superior. But again, I'm such a big fan of the web platform that that's what I wanted to do. I wanted to hold on to this dream of, like, what if we just make a really great web app and it's just great? And then consistently, our backend is one singular thing. Our frontend is kind of one singular thing. And yeah, we got to deal with responsive design. But that's to me a much more tractable problem than fracturing our entire application architecture across a bunch of different platforms and having all of the logic of our domain splintered and especially depending on how you implement it. That's sort of a big question. I've talked a ton about Inertia.js on this podcast, and that's because I believe it's a really great example as to how to pull some of the logic back to the server-side, which, in my experience, that's where I want the logic to be implemented, our deep domain logic. I just want that to be on my server in a Rails controller, or a Rails model, or a command object, or any of those sorts of things, query objects, all of these wonderful things but server-side that's centralized in one space. Nonetheless, though, we had to build a mobile app. These are the truths of the world. Sometimes it just comes down to the expectation of your user base. And there are certain things that by building a mobile app we will get so, for instance, in our case, having biometric login, so fingerprint, or facial ID, or any of those sorts of things. Those are actually material security differences. They are actually, as far as I can tell, available on the web but not consistently on every browser, et cetera. So that's something that we can get by having our app as a native app. 
Push notifications are another one that certain platforms, certain web platforms, have dragged their feet on; Apple Safari, iOS Safari specifically, I'm looking at you. But that's an example of something that we'll get by going the truly native route. Similarly, access to some of the lower-level things, cameras, et cetera, that is something that we'll get a better experience of. And again, you can hear in my voice I don't want to really cede it to the native platform, but it is true right now, at a minimum. So we had a decision to make as to how we would implement these applications, and we went with an interesting route. So this will be familiar to anyone that knows Turbolinks native, or I believe Turbo iOS is pretty similar. But I'm more familiar with Turbolinks native, as there was a talk, I Can't Believe It's Not Native, I think is the name of the talk, that was given a while back talking about the Turbolinks native architecture. So basically, what's happening under the hood is let's still render these things server-side. Let's send down some HTML. In our case, it's a weird sort of hybrid of HTML and not HTML. But broadly, let's say that the server is rendering things. And our native application is going to then be a native shell that wraps around WebViews. But it does so in not just a single WebView sort of way. It's instead trying to find that optimum hybrid spot where let's do native things where they make sense. So, for instance, we have introduced a tab bar at the bottom of our application that is a truly native UI. We similarly have push notifications, biometric login, et cetera. Those are features of the native platform that we're using. But then, for most of the screens, most of the screens that are just some text, maybe a button, maybe a form, et cetera, we are using the server-rendered code that we have. And so server-rendered, in our case, because we're using Inertia, is sort of a misnomer because technically it is being rendered on the client-side in the WebView. But, I don't know; we're now getting too nuanced and in the weeds here. But what we've opted for is to reuse the same views, controllers, et cetera. All of that is still being reused. Our iOS and our Android codebases at this point are wrappers around those WebView stacks. So it's not just a singular WebView; it's a stack of WebViews. So if you're doing the swipe-to-navigate thing on iOS, that'll work...or Android. I think Android has an actual back button, though, within the applications. But most importantly, we've introduced a tiny little bridge layer. So from our WebViews, we can communicate to the wrapping native context. And similarly, from our native context, we can send messages into our WebView. So we can have a button in our native UI. And when a user clicks that button, it will send a message to the WebView that it's wrapping around and vice versa. We can do push notifications. We can do all that sort of stuff. For any given view, like, say, the login view, we can say, "Hey, don't render the normal server-side thing. Instead, render this truly native, local Swift or Kotlin view that we want to use there." So it's an interesting choice. I've certainly seen applications that are just like, let's take some HTML and wrap it in a WebView, and it'll be fine. And they don't make great apps. But I think this time it might just be a good idea. 
I actually do think that the approach that we're taking, at a minimum, is buying us a ton of simplicity in terms of not having to duplicate what are somewhat nascent domain concepts across multiple platforms. We're not entirely certain as to what our platform and what our business is going to be. So we'd love to not enshrine that across three different platforms that are hard to update. Like the web, I can kind of change that every day. But with iOS and Android, because I have to go through review cycles, because I have to get them out to devices, because there are slow update cycles that individuals will use, I'm going to be stuck supporting whatever versions of these applications are out there. And so if more of that is the dynamic content that's driven by the server, frankly, I just feel way better about that, at least for now, at least for the point in time that we're at. But I kind of believe that this may be a really useful architecture for us long term. That was a bunch of me rambling about the architecture. Let me pause there, thoughts, questions, comments, concerns? STEPH: First, I really appreciate the thoughtful approach and explanation. Also, you highlighted the reasons that y'all are pursuing having a native app, and all of that makes a lot of sense. Because there is that user expectation of, you told me about a service, so there must be an app that I can download, because that's what I'm accustomed to using versus having to go to a browser and then having to remember the URL of the site that I'm supposed to go to. So there's that convenience factor. There's also the idea that some people go to the App Store and search for their solutions instead of going to a browser and searching for a service. So having that presence in the App Store can seem like a really huge win because then, even if it maybe slowly pushes them back to use the website, as long as they get a decent experience, they've now at least been exposed to the idea of the service and that it's out there. But then, as you pointed out, building a mobile native application is a lot of work. And then it becomes a question of like, well, are you going to hire people to work specifically on these platforms? And then, is it really worth that investment at this point? Or is it worth the approach that you're taking where you're going the more hybrid route? I am curious; maybe this is something that you'll know. So as you are investing in this hybrid approach and you are starting to collect more users that are then using the app versus going to the browser, then what does that pivot look like, or what does that further investment look like? If you realize that the UI isn't quite delivering on the expectations you'd have if you'd actually built a native iOS or Android application, then what does that investment look like? Can you still reuse some of the work that you've done? Is it totally scrapping that work? I think that would be my biggest question around taking this first approach. Is it an all-in bet that we are now stuck with? Or are there some salvageable pieces to then move this forward into native apps should we need to do that? 
So again, one of the options that we have here is the ability to say, no; this facet of the application is entirely native. We're going to opt-in. And so it actually happens at the navigation layer. So we can say, if a person transitions to the /user/signin route, instead of just rendering that WebView right in place, push a native Swift or Kotlin. Depending on the context that we're in or the platform that we're in, push the native view onto the stack and use that. And so we're able to, on a screen-by-screen basis, make a decision of no, we'd like to opt into native behavior here. And so, if we did eventually see that the vast majority of the users of the platform are using it via the native app, we should probably continue to invest in that and push in that direction. I think we could do it in sort of a gradual style, and that is critically important to me. I don't want to make a big bet and then be like, oh no, we got to rewrite from the ground up. And there's no way to do that incrementally. It's going to be a whiz-bang Friday launch that everyone's going to hate. That's the thing I want to avoid most in the world. And so I think what we found now is this seems great for right now because it allows us to avoid this complexity explosion of three different platforms and trying to keep them in sync and trying to keep them up to date. But it does, I think, give us an opportunity as we move forward to slowly sort of transition things over. We are, to state it, this isn't just like wrapping a WebView around things. We are building essentially a mini framework on both iOS and Android, or roughly Swift and Kotlin is what the actual languages are, to work with Inertia because inertia is the core technology that we're using. Inertia, thankfully, has a nice little event system in there, so we can say, Inertia on navigate. And when a navigate event happens, we can hook into that and then connect it to whatever Swift or Kotlin runtime that we're building here. And there are a couple of different events that we can opt into. And so that's giving us the hooks that we need in the current architecture. But longer-term, if we needed to, we could just, I think, slowly transition everything over to be truly native mobile, and then that would probably be backed by more traditional API endpoints and that sort of thing. I want to avoid that. That's my dream is to stay in this happy place where we're always going to need some web presence. And I would hate for those to be fractured distinct things. I've worked with enough mobile apps that are wonderful native experiences, and yet I'm like, could you just give me the desktop view? Just scaled to...like, I'll even pinch and zoom because you're hiding data from me, and that makes me very, very sad. Please give me the buttons, and the text, and the content that you would give me on the web. And the fact that you're not is just breaking my heart right now. And, frankly, for our user base, consistency of experience is something that I think is really important. So that's another facet of the conversation that is really interesting to me of like; I don't want it to be different on each platform. Certainly, a three-column layout doesn't work on an iOS app that is zoomed in 150%. But we can turn that into each column is just floated down and then otherwise have all the content in there. And I believe in that as sort of a fundamental truth of let's reshape the content but not fundamentally rethink it. I say that as something that I believed deeply. 
But as I said it out loud, I was like, yeah, but also, I don't know, make it work on the platform it's on. So I can see both sides. But I have had enough experiences personally where I'm sad about the app that I'm using. STEPH: Yeah, I could also see an argument for both ways where you don't want it to be fundamentally different, but then also, you want it to fit the platform. And then there may be some advantages to the fact that there is a different platform, and you want to utilize that. I also agree with the not hiding of the data. I have felt that pain where I have an app, but I really want to go to my desktop, and I really want to use it there. But then on mobile, it's then hiding, and I realize it's hiding. And that inconsistency really frustrates the heck out of me. So I can understand that as well. Overall, I really like this. You're taking a bet in a direction of we should have a mobile presence, and we should start attracting people through this new marketplace. But we want to reuse a lot of the logic that we already have before we go so far as then we're going to have to start building for each different platform. Because while I don't have a lot of experience in that area, the times that I have been part of teams that are building native apps, it's a big investment. I mean, they hire people very focused on that; designers have to design for browser, for mobile, and then for native, and then everything has to stay in sync across. You have to think about how a feature is going to work across all three of those different views. And so it is certainly not something to go into lightly, which I think is exactly what you're describing is that you're looking for that in-between to how can we start working our way in this direction but yet also do it in a way that we're reusing a lot of the work that we have versus having to invest full sail into then building out these different platforms? So I'm going to go with this is not a terrible idea. [chuckles] I'm excited to see how it feels once I can download this and check it out. I'm excited to then see how that feels from a UX perspective. But overall, everything you're saying really jives with me. It makes a lot of sense. I am curious, what about React Native? Is that something that you considered using? CHRIS: Oh yeah, great question and definitely something that we considered. We're not using React on the backend, so that was actually a consideration when I was thinking about Svelte initially is I assumed we'd be building a React Native app eventually for the native platforms. But I talked myself into Svelte for the web, and that is not the reason that we're not using React Native for the native apps. But it is an interesting sort of constellation of technologies that we have now. We're not using React Native because I'm clinging to this idea of what if we could have a singular experience? So React Native fundamentally you're building a native app that this is this bundle that you download that's got all of the UI and that front-end logic in that bundle that you download. And then when it wakes up, it makes some calls back to some APIs to get some data or to decide if I can do an action or to actually do an action, all those sorts of things. But you're building out a Rest or GraphQL or one of those APIs. And with my explorations of Inertia, I found that what if I didn't need to do that? 
What if I could do a more traditional Rails CRUD-like experience but CRUD in a good way (I mean it in the very positive sense of the familiar architecture) and still give users a delightful experience but not have to build a distinct API where all of the or majority of the logic was on our client-side? So if I did that, then my web client would need to be that much smarter. And each of the iOS and Android clients would need to be that much smarter because that's fundamentally how these technologies work. UI components they can give a higher fidelity experience, more native-like experience, but they tend to own a lot more of the smarts. And one of my core beliefs is however long I can get away with this, I want to keep as much intelligence on the server as possible and have my view layer be as minimal and as simple as possible. So I think React Native is a really fantastic technology for that sort of work. But my goal was to avoid that sort of work entirely. What if we had a singular way that we had the logic exist on the server-side, and then we rendered pretty minimal view layers? Or, from a user experience, the view should do all this stuff and show all of the things that they want. But I want that view layer to be as naive as possible. And by naive, I mean in the positive sense of like, I want to be able to change this very rapidly. I want to be able to evolve it and iterate it. And so this is more of a buy into I think the thing that Inertia gave me is valuable enough and if I can keep using that and reuse it, especially on these mobile platforms...now if we add a new fundamental part of our Sagewell platform, if we have that, it just exists on each of the iOS, the Android, and the web, and that's fantastic. And we're going to keep a really close eye on what experience that gives to the user. And is it still great? But presuming it is, the complexity savings there are so huge. Our team is a team of web developers that is able to think about things holistically and singularly. We implement it once within our stack, and it just works. And if we can do that, that is worth a ton. We may not be able to do that forever. But for now, especially while we're figuring things out, while we're super early on as a company, I think that savings and complexity is worth a lot. So it'll be interesting to see how it plays out, and will certainly report back. But I'm a big believer in this little adventure we're on. STEPH: Yeah, you said it perfectly there at the end; you're a team of web developers. And so as long as you can stick to that, then that's what's best for y'all and the team and the product. So that's wonderful. I have a short segue because I had a little bit of inspiration when we were talking about terrible ideas. I want to circle back to your other terrible idea because I have a terrible idea for your terrible idea about strings and symbols. Okay, so my terrible idea is you're talking about using HashWithIndifferentAccess for everything. What if you had a class or method that then will first try to access via string and if that fails, access via symbol, and then if that fails, then it fails loudly? So you now have this let's try this, and then let's try the next thing. I have strong feelings about this as I'm saying it. CHRIS: [laughs] STEPH: But we're in the terrible idea segment, so I'm going to embrace it. This is my terrible idea. CHRIS: HashWithIndifferentAccess with runtime exceptions. 
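Steph's idea as a sketch; the helper here is hypothetical, not an existing API:

```ruby
def indifferent_fetch(hash, key)
  hash.fetch(key.to_s) do
    hash.fetch(key.to_sym) # raises KeyError if both lookups miss
  end
end

indifferent_fetch({ "name" => "Steph" }, :name) # => "Steph"
indifferent_fetch({ name: "Chris" }, "name")    # => "Chris"
indifferent_fetch({}, :name)                    # raises KeyError, loudly
```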
I think HashWithIndifferentAccess under the hood probably does what you're describing of, like checks one and then checks the other or checks has_key is probably the underlying implementation. I haven't actually looked at it. But some version of that makes sense. Falling back to the key error gets interesting. I did see a different thing recently of a deep fetch, which is something that I want, to stop trying to make fetch happen, except I'm going to try and make fetch happen. We thought about this a bunch where we have these objects that we need to traverse into. So we use dig to get into the third layer of the object, but dig doesn't care. And it's just going to happily nil out whatever. So I'm like, no, dig but then right at the end, fetch, deep fetch. I saw somebody post this recently. So deep fetch is something I want to make happen. HashWithIndifferentAccess, which raises at the end also intriguing. STEPH: So yes, but this will be a little different because this one, you don't have to do the transformation process upfront with HashWithIndifferentAccess where you have to pass the data first, and then it transforms it so then it can do these two different lookups or the fallback. This one, you're skipping the transformation process, and you're using your own custom method that then does that first check for a string or first check for a symbol and then default back to the other one and then fail loudly, yeah, if both of those fail. CHRIS: Interesting, and I have to see what it looks like in practice. But I mean, broadly, I'm into something in this space. Let us find some simplicity. That is what I want. STEPH: Let's find some terribleness and see which one feels not so terrible. [laughs] CHRIS: Some terrible simplicity. Well, I like that idea. We'll see where we get to with it. But I think on that note, and we've said a bunch of stuff today, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
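For reference, a minimal sketch of the "deep fetch" idea Chris mentions above: Hash#dig-style traversal that raises a KeyError at the first missing level instead of happily returning nil. The method name deep_fetch is hypothetical; it is not built into Ruby or Rails:

```ruby
# Hypothetical deep_fetch: traverse nested hashes like Hash#dig, but use
# fetch at each level so a missing key raises immediately.
def deep_fetch(hash, *keys)
  keys.reduce(hash) { |nested, key| nested.fetch(key) }
end

config = { database: { primary: { host: "localhost" } } }
deep_fetch(config, :database, :primary, :host) # => "localhost"
deep_fetch(config, :database, :replica, :host) # raises KeyError for :replica
```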

The Bike Shed
327: Estimate Crafting

The Bike Shed

Play Episode Listen Later Feb 22, 2022 42:33


Steph joins Chris in trying new things! For her, it's a new email client – the Newton email client – because she really wants to love her inbox. She also talks about implementing a suggestion from Chris on improving CI speed. Chris continues his search for the perfect to-do list app. (It's not going great.) But he has made hiring progress and is excited to move on to the next step: onboarding. Together they answer a question from a listener who asked for advice on crafting project estimates for clients. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Services down? New Relic (https://newrelic.com/bikeshed) offers full stack visibility with 16 different monitoring products in a single platform. Newton (https://newtonhq.com/) Subscribe to Email Newsletters in Feedbin (https://feedbin.com/blog/2016/02/03/subscribe-to-email-newsletters-in-feedbin/) GitHub - Shopify/packwerk (https://github.com/Shopify/packwerk) Sunsama (https://sunsama.com/) TickTick: To-do List, Tasks, Calendar, Reminder (https://ticktick.com/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: I am now recording. STEPH: Me too. CHRIS: [laughs] That's my recording voice. STEPH: [laughs] CHRIS: That's how you can tell. STEPH: I just like how it sounds suspicious where we're like, I'm now recording, so be careful. [laughs] CHRIS: This is now on the record. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hello. Happy, happy Friday. Oh, I have something that I'm excited or intrigued about. I don't know. Okay, I'm hyping it up. [laughs] But I'm realizing I'm also very skeptical of it. CHRIS: This is the best sales pitch I've ever heard. I'm so excited to hear what this is. [laughs] STEPH: I am trying a new email client; it is the Newton email client. And I so want to love my inbox. I want to check on it. I want to help it grow. Okay, that's the opposite. I want to help get through all the emails that come through, but I just want to love it. I want it to be a good space that I want to go to. And I just hate email so much. And it always feels like this chore that it's really hard for me to bring myself to do, but yet it's really important because a lot of good things come through email. So this is my rambly way of saying I'm trying the Newton email client because I saw on Twitter from Andrew Mason, who has very similar feelings that I do about email, where we are just not fans of it. And we rarely check it and have declared email bankruptcy at several points in our lives. And he's also one of the co-hosts for Remote Ruby. But I saw on Twitter that Andrew was talking about the Newton email client and how it actually made him feel that he enjoyed writing and looking through his inbox. And I was like, yeah, that's the sales pitch I need. So I'm giving it a go. It's been only a couple of days. But one of the nice things I have noticed about it is it's very focused, and there's not much noise, and it actually feels like a very minimal design where if you open up a new email, so you're opening up a new draft, there's not much noise.
You get to just focus, almost like you're writing a little blog post or journal post or something. It takes away a lot of the noise. While in Gmail, it's going to open up a small window in the right, but then you still have the rest of the noise that feels distracting. So I like that very intentional like, hey, you're just doing one thing, just focus on this. And then also you can integrate other email accounts as well. So you can have one-stop shopping versus Gmail, then you have to click around and sign in, sign out, or visit different email accounts. So we'll see if it helps improve my email life, but that's something new I'm trying. CHRIS: Very interesting. So you're fully on inbox zero life now. That's what I'm hearing. [laughs] STEPH: Ah, hmm. I don't want to lie to you. [laughs] We have a good friendship. I won't start lying now. CHRIS: I appreciate that. So you're halfway to inbox zero. You're not even entertaining that idea, right? This is just you want a better tool to do email. STEPH: Exactly. Inbox zero is not incredibly important to me. But I do want to make sure that I know that I've seen everything important, and I know where to find things. And then making sure that I am responding to people in a timely manner. Those are more my goals. Inbox zero, if that supports it, then great, I'll work on it. But not necessarily that has to be the goal that I reach. CHRIS: Gotcha. I'm not seeing Newton, but I'm intrigued. Particularly on mobile, I have the Gmail mobile app, and that has unified inbox, which I appreciate. But Gmail on the web does not, and I find that odd. And I've never found a mail app that I enjoy because I want some of the features of Gmail. I want to do Gmail snoozing because I still want that to be consistent and whatnot. And to be honest, that's the main way that I get to inbox zero. I just say future me will have more time. I actually tweeted recently. It was a screenshot from my Saturday inbox, which I think was 15 emails that I'd snoozed from the previous week into Saturday morning. Because I'm like, Saturday morning me will have so much time, and energy, and coffee, and it'll be great. And then it became Saturday morning and, ooph, what a view. STEPH: [laughs] Yeah, your snoozing tip has been life-changing for me because that's not something that I was using all that much. The two things are, one, schedule send so that way if I do have a sudden burst of energy and I want to write an email, but I want to make sure that person doesn't get notified until a decent time. Being able to schedule an email and snoozing is amazing. I think Newton and Gmail have pretty much similar features. I was trying to do a comparison. I was like, is there something really snazzy that Newton does that Gmail doesn't already give me? But it looks like they all do about the same, having those important features like snoozing and then also being able to schedule emails. So I think it really just comes down to a lot of the UI, and there may be some other stuff I'm missing since I'm new to it. But that's the main appeal for me right now is the focus and the look and feel of it. So then maybe I will find looking through my inbox a more zenful experience, I think is how I saw them advertise it. CHRIS: Well, I definitely look forward to hearing more as you explore this space. I will say looping back to what you were just commenting on around deferred send, which is definitely something that I use, but you described one of the reasons that I use it. 
So the idea of wanting to be respectful of someone else and not send them an email on Sunday night because you happen to be working at that point. But you don't want to put that on their plate. I would say equal amounts; that's the reason I use scheduled send. And then the other reason that I use scheduled send is please, for the love of God, I do not want another email back in my inbox. So I will reply to something such that now I'm done with that, but I will schedule send it for the next morning. Because tomorrow morning me can deal with whatever reply this generates. There's some adage; I don't know if it's an adage, but the idea that every email that you send generates 1.1 emails in reply. So emails just have this weird way of multiplying. And so if you send one out there, you're probably going to get something back. And so often, if I'm trying to clear my inbox, I don't want to get another email in my inbox at that moment. So I will not actually send the reply. I will schedule it for a future time because I do not want to hear. I want no new inputs at this point. I'm trying to process them. So that's part of why I use deferred send. STEPH: I had not thought of that, that yeah, that if you schedule it for tomorrow, you've really gamified this inbox zero because you're like, yeah, if you send something, then you might get an email back. But you're like, if I wait till tomorrow to send it, then I'm less likely to have another email, and then I've hit inbox zero, and I'm set for the day. I like it. It seems helpful. CHRIS: Yeah, inbox zero sounds like an altruistic thing, but it is not. It's a way to force myself to have to make decisions, which is something that I want to get better at broadly. And that's part of the role that I have now. A lot of what I'm interested in exploring is just getting better at making decisions, being more decisive, being more action-oriented. Because I just have a tendency to make many, many spreadsheets and think about stuff for a while and take a long time to make a decision. But I don't get to do that, particularly now. But broadly in life, that's probably not the right mode to be in. So inbox zero is another thing that forces me to deal with things rather than just be like, I don't know, I don't know, I don't know, and keep looking at the same thing over and over. So just more thoughts about inbox zero, but now I'll stop talking about it. STEPH: I do like that, though. And you're totally right; it can be a very helpful constraint. And I think that's sometimes why I fight it because then I haven't curated my inbox enough that then when I go to it, there are so many interesting things that then I feel a little bit overwhelmed where I'm like, oh well, I want to read this, and I will look at that. And this seems interesting, and maybe I should be a part of this. It feels like one of those like; you could be a part of these ten amazing things. Do you want to be a part of all of them? And given a person that it's hard for me to say no to or recognize that no, I'm just going to not do anything with this, that is hard for me and would be a good skill for me to hone in on and practice and make quick decisions and be very realistic. Because I used to be subscribed to more newsletters, and then I finally had to stop subscribing to them because it had that same effect on me of that FOMO of like, I'm missing out on this great article or this great video. 
And I've become more honest with the fact that my Saturday morning self isn't going to want to read through a bunch of newsletters and videos about coding, that I'm going to want less screen time. So that is a really good constraint and helpful skill to cultivate for sure. CHRIS: All right, I said it was done, but one more thing. I feel like I've mentioned this in the past, but Feedbin is the thing that I use for RSS. I still believe in RSS as a technology. But everyone's moved to newsletters these days that go via email. Feedbin gives you an email address that you can use to subscribe to newsletters, and then they do the job of converting that into an RSS feed. So for me, I take something that was a push into my inbox, and now I can pull whatever I want from that RSS feed. And on Saturday morning, if I'm feeling like it, with a cup of coffee, I can enjoy some newsletter about all the new hot tips in Svelte land, or whatever it is, or not. But it's not clogging up my inbox. And with that, I think I'm actually done talking about inbox zero. [laughter] STEPH: Yeah, that's a nice separation. We could keep going. I have full faith in us that we could keep going about this. But I'll share a slightly different update. I've been implementing a suggestion that you provided a couple of weeks back when we were talking about RSpec's selective test running and how some applications will speed up their tests. If you change one part of the codebase, then perhaps you only need to run this chunk of tests. You don't actually need to run the full test suite. And that is complicated and seems hard to get right, and really requires understanding boundaries. But then also, knowing Ruby, how do you really identify? Do you really know where this method is being called and can identify all the tests that need to be run? I think we'd mentioned before there's a really good article from Shopify where they have worked on this and created an open-source project called Packwerk. So we can link to that article in the show notes. But more specifically, you suggested, well, what if you just change a test file? That seems very low stakes and also has the benefit of creating a reward where if someone does see something that they can improve in a test, then that's very quick feedback. Let me just get this change in. It's going to be fast on CI. I can merge it right away, and it also saves time on CI. So I've been working on implementing that change. And it's one of those where the actual change is easy, like checking with Git to say, "Hey, what files have changed?" Does it have an _spec.rb at the end of it? Great. Does it not? Okay, we've changed some application files. So let's run the full test suite. That part's easy. Getting it integrated into the build system has been more complicated just because this team has done a lot of work around trying to improve and speed up their tests. And there's a fair amount of complexity that's there. So then figuring out a way to stitch my change into all the different build processes that take place has proven to be more difficult. But it's also been insightful just because it has helped me really understand and forced me to learn, okay, what are all the different steps? What's important for each one? Where can I cut off the rest of the test run and instead just focus on running these tests? So in some ways, it's been challenging, but then on the positive side, it's been like, okay, well, this has taught me a lot about the existing system.
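A rough sketch of the CI check Steph describes, assuming a standalone script that diffs the branch against main; the branch name, the RSpec invocation, and the script itself are assumptions, not details from the episode:

```ruby
#!/usr/bin/env ruby
# Hypothetical CI step: if every changed file is a spec file, run just
# those specs; otherwise, fall back to the full suite.
# --diff-filter=d excludes deleted files, so we don't ask RSpec to run
# specs that no longer exist.
changed_files = `git diff --name-only --diff-filter=d origin/main...HEAD`.split("\n")

if changed_files.any? && changed_files.all? { |path| path.end_with?("_spec.rb") }
  # Only test files changed, so run exactly those specs.
  system("bundle", "exec", "rspec", *changed_files) || exit(1)
else
  # Application code changed (or nothing did), so run everything.
  system("bundle", "exec", "rspec") || exit(1)
end
```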
So at this moment, it is still a work in progress. I'll have more updates in the future. I am excited to see the rewards. I've gotten to the point where I just have a proof of concept that I've gotten pushed up, but it's not production-ready. But at the least, I just wanted the feedback that I'm in the right spot and that we're running just the right tests. And so far, it does seem like it's going to be a nice win, even if it's maybe not used by everybody, because it's probably rare that someone is altering just a spec file. But for people who are looking specifically to improve the CI build time and working on tests, it will be very helpful to them. So yeah, I'm sure I'll have some more updates in the future. What's going on in your world? CHRIS: Well, I definitely look forward to hearing more about that. However we can improve CI speed, I'm super interested in that as a topic. Mid-Roll Ad Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem. So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy. That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software. So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's newrelic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed. CHRIS: Well, similar to your email adventures, I continue on my search for the perfect to-do list. It's not going great, if we're being honest. [laughs] To be clear, because I've mentioned this on a few different episodes, I'm not spending much time on this at all, some but not much. And so it's not really moving. But there are two interesting things. I took a look at TickTick, which was one that I mentioned in the past, a tool for this. It seems good. It seems like an intersection between Things, which is what I'm currently using, Todoist, which I've used in the past, and some other tools. So I think I'll probably explore that a little more. It seems like a good option. Decidedly, the most interesting thing is a tool called Sunsama, which is different in some interesting ways but very interesting. So one thing to note about it is it's $20 a month, which is a lot of money for one of these tools because most of them are like, "We're $20 forever, and then it's free." And it's a surprisingly low-cost space. And so, they're definitely positioning themselves as a more costly entry.
I would be fine with paying $20 a month for a tool if it really is like, no, cool, I feel great. I'm more productive. I'm happier when I'm not working, et cetera. But what's interesting is they seem to reach out to all the places that tasks can live for you. So there's your inbox for email. There's your Trello board that you've got. There are GitHub issues. There's Slack. There are all these different sources of potential tasks. And they do a really good job of integrating with those other tools and then allowing you to pull that list into Sunsama, and then, each day, you have a list. And those items can be like, this is a reference to a Trello card on that board. This is a reference to a Slack conversation over there. So I'm super intrigued by it. It's also got a very intentional plan-your-day mode, which I like because that's one of the things that I'm really looking for: at the end of the day, I want to clean everything up, make sense of all of the open items, and then reprioritize and set up for the next morning so that I can just hit the ground running. That said, I tried it, and it just didn't quite click. And I think it's one of those where it takes some effort to understand how to use it. So I'm not sure that I'm going to get there. But it is super interesting because that idea of our work lives in all of these different tools these days feels very true. And so, something that is trying to act as a hub between them to integrate them is very interesting to me. Again, I haven't really gotten anywhere on this. I'm kind of just reading blog posts, as it were. So I'll report back if that changes, but -- STEPH: The search continues for the right to-do app. Yeah, that seems interesting. I don't know why I'm feeling hesitant towards it. I'm one of those individuals...you're right; there are so many tools. And the fact that they integrate with a lot of them seems really nice. I'm at the point where I just grab links to stuff, and I'm like, hey, if this is my priority, I grab a link to a Trello ticket, and then I just copy that into my to-do. I guess I like that bit of work over having to integrate with a bunch of different platforms. Because once you get used to integrating...I don't know; I'm just rambling. But I wish you the best on this journey. I'm excited to hear more. [laughter] CHRIS: Thank you. I will certainly report back. But yeah, nothing pointed to share at this point. But I do have something pointed to share on the hiring front, which is that we have hired some folks. STEPH: Hooray! CHRIS: Yay. So this has been a fun saga across a couple of different episodes. And in my mind, it feels like this much longer, more drawn-out thing, but it's actually, I think, come together relatively quickly, all things considered. We've got someone who's starting in a little over a week's time, and then someone else who's starting in, I think, two or three weeks after that. So that'll be great. Hopefully, we can transition into onboarding, which is a whole different approach. But hiring as a distinct activity can scale back significantly. As we discussed last week, I want to be in the always-be-hiring mindset but in the more passive mode of having conversations with folks, staying connected. And if a great candidate comes along and it's the right time, then bring them on the team but not actually actively reaching out and all that sort of stuff, which will be great. Because it turns out that takes a lot of time and also a lot of energy for me.
Having those first conversations, going into it very intentionally trying to communicate about something, and there's a tone of salesmanship to it that is not my natural resting state. So I come away from each conversation being like, that was fun, but also, I'm drained now. Why am I so drained? So not having that be a thing that is filling up my calendar is great. And also super excited with the folks that'll be joining the team and to be able to now grow our little team and define the culture and the shape of the groups that we will be collectively. I'm excited for that work and what we can build together. So yeah, it's an exciting time. STEPH: That's awesome. Congratulations. Because yeah, everything you're saying sounds like it's just been a lot of work. So that's very exciting. There's someone that I was chatting with earlier today where they were talking about the value and the importance of understanding what your natural skills are and the things that bring you energy. And so you're mentioning there are certain activities that you enjoy them, but they're also draining because perhaps they are on the outer boundary of what you might define as your own natural skill or the things that get you really excited. And I found that all very interesting. It had me thinking about that today about where are the natural areas that I find that I get energy that are easier for me? And then making sure that I'm trying to prioritize my day so that I am more focused on the activities that just align with who I am and also that I'm engaged with and then also looking for ways to stretch. But they made the point that if you are always in a space where you are not using your natural talents, and you're always having to stretch, then that can be what leads to burnout. Versus if you're in that sweet spot, that zone of where you are using your natural skills, but then also stretching a bit. And I think there are some assessments and things like that that will help you then determine what are my natural skills, and what do I like to do with my time? I just like that style of thinking and recognizing, like you said, like, hey, I did a thing. It was fun, but I'm drained. So now I know that this is something that requires more effort for me. Like hiring, that's one for me. I really like interviewing. I like talking with people, but I'm so nervous for them because I know interviews suck. [laughs] I just have so much empathy for them where I'm like, this is going to be a hard day. We're going to make it as pleasant and positive as possible, but I know this is a hard day. And so I feel like I'm in it with them. And so afterwards, I feel that same relief of like whoo, okay, interview day is over. CHRIS: I don't know that I quite achieve the same level that you do but in no way am I surprised that that is your experience of hiring. And just to name it, you're a wonderful human being that feels for the people on the other side of the hiring table. Like, oh my God, this must be so stressful for you. It's so kind of you to be in that space with folks. But coming back to what you were saying a moment ago, that idea of, like, understanding where your strengths are and where they're areas that you're not quite as strong. And I think critically, the question of like which are the ones where I want to just kind of say no to? I'm like, that's fine. This is not going to be a competency of mine. And I'm going to just avoid that or find other people to work with that balance that out. 
So for me, sales is the thing that I don't think that's ever going to be my bag. I don't think I'm ever going to move in that direction, and that's totally fine. Whereas decisiveness, which I was describing, is like, I think that's the thing I could get better at. That is one that I don't want to sleep on that. I don't want to say, "That'll be fine. I'll just have other people make the decision." No, I need to get better at making decisions, making decisions with less information or more rapidly, having a bias towards action. All those things I think will be deeply beneficial. So I'm trying to really lean into that. Whereas yeah, again, the sales stuff I'm like, yeah, and there's plenty of examples of this otherwise. But I've also been coding a bit more this week, which has been lovely because the hiring stuff has ramped down. And that has freed me up amongst some other stuff that's been going on. And you know, I like to code, it turns out. It's fun. I just clack about on my cherry brown keys, and it's great. STEPH: Do you remember when we first got introduced to mechanical keyboards, and we had co-ownership of one of the keyboards? And we literally had days of where it was like your turn to use the keyboard. And then it was my turn to use the keyboard. How long did we keep that up before we were finally like, we should just buy our own keyboards? CHRIS: It was a while because we were working with a colleague who was trying out a Kinesis, I want to say, one of the split little bowl of keys. But yeah, we had a shared custody over a keyboard, and it was fantastic. I remember that very fondly. STEPH: The days that it was my keyboard, I would go to the office and be like, oh, today is my day at the keyboard. This is great. This is going to be such a wonderful day. [laughs] And now I'm just spoiled. CHRIS: It went on for a while, though. And this was something where we both obviously enjoy this keyboard. Why don't we just buy one of these keyboards? We totally could have done that. And yet, for some reason, both of us were like, no, but what if...I got to think about this. Again, decisiveness. [laughs] We come back to this topic of well; I had to really think about it. And then somebody got the 92-Key test or whatever it was in the office. And so I just went over and poked every one of those for a while. STEPH: Exactly. It was option overload where we're like, well, okay, we're going to buy one, and then you open it up and search, and you're like, oh, you want options? We have options. Do you know about the blues, and the browns, and the colors, and these different options? Like, I don't know any of this language that you're talking about. I just want to clackety-clack. So yeah, it took time. We had to do our research. CHRIS: And then I ended up on basic browns. So here we are. Let's see, popping back up the stack a couple of levels, hiring that went on for a while. Now it is less going on. Although to be clear, like I said, always be hiring. So if anyone out there in the world is hearing what I'm talking about with Sagewell or seeing any of the stuff that I'm putting on Twitter, which isn't much, I occasionally just post screenshots of my commit messages, which recently included better snakes as a commit message. [laughs] I have to dig into that or not. But we were just doing some snake case to camel case conversion. But the commit message was better snake, so here we are. Anyway, if any of that sounds interesting, please do reach out. 
But I'm excited to transition back to focusing more on the work. On that note, actually, I'm going to call it interesting things that is happening right now organizationally is; we are working with an external security firm to help with some...they helped us out with a penetration test when we needed that. And then they have stayed on retainer and are helping with various different configurations, taking our AWS S3 buckets and making sure those are nice and secure, and all that kind of stuff. But we've recently started to focus more on organizational security, specifically a bowl of acronyms. We've got SSO for single sign-on, MDM for something device management. I don't know what that first M is. I probably should learn it, but it's fine. That's why I've got help on this is I think they know what the acronym stands for. But so we're working on each of those. And on the one hand, they're probably going to be kind of annoying, like having to go through the single sign-on. It's a whole thing, and it's harder to sign into stuff sometimes. I mean, ideally, it's actually easier. But in my experience, it adds some friction at some points. And then MDM means that there's now some profile manager on the computer. So I can say like, "Every computer must have full disk encryption or else you can't use it. And we need a passcode, and it must be this long and those sorts of things." So it's organizational controls that I think are good for us having a robust security setup throughout the organization. But yeah, they're the sort of things that I think historically, I probably would have, as someone working in an organization, had been like, do we have to? Do we need these things? Couldn't I just do whatever? But now there's something about it that I really like. I'm trying to name it in my head, but I'm kind of like, I don't know. This feels like growing up as an organization. And there's always weird corollary that I've been thinking about with the Rails app that we've been building, intimately familiar with just everything that's going into it. And I know the vast majority of lines of code. I haven't written them all, but I've had an eye on all of the different features that we're building in. And it's hard to get out of that headspace where it feels like a bunch of pieces. It doesn't feel like a hole to me, even though it definitely is. But when does a bunch of boards that you nail together become a boat? To make a really weird analogy because that's what I do; it's a hobby of mine. But when does that transition happen? At some point, certainly. But that's harder for me to see on the code side. And organizationally, somehow getting these things in place feels like the organization sort of an inflection point for us, a growth point, which is I'm really excited about it. Even though they're probably going to mean a ton of annoying nuisance work for me because I'm the person in charge of making sure it all gets rolled out. And anytime anyone locks themselves out of an account, I have to help with that. And so it's probably just putting a bunch of annoying work on my plate. And yet, I don't know; I'm kind of excited about it. STEPH: I feel like that shows our roots in terms of how we approach projects that we work on where you mentioned do we need this? Do we need this yet? Because I feel that we're constantly as developers and consultants just we're trying to advise on the more simplified do we need this? Is this the right thing to spend the money on? How do we know? What are the metrics? 
What does success look like? And all those questions. So I feel like the way you just phrased all of that just really shows that sort of mentality that you grew up with in terms of checking in, and yeah, it's cool. Like you said, you're at a growth point where then it's like, yes, we are at this point that I've asked myself all those questions, and we're here. This feels like the right next step. CHRIS: I like the way you described it as that you grew up with, my formative growing years at thoughtbot. Mid-roll Ad Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Well, switching gears just a bit, we have a listener question for today, and this one comes from Stephanie. So not me, another Stephanie in the world. Hello, other Stephanie out there in the world. And they wrote in, "Hi, Steph and Chris, fellow software consultant here. And I'm wondering if you'd consider talking about how to craft a project estimate for a client on the pod. It's such an important aspect of consulting." Amen. I added the amen. "And I feel like I'm very much impacted as a project team member when the estimate isn't accurate." Double amen. So true. [laughs] "Would appreciate any and all thoughts, especially since it might be part of my job in the future. Thanks." I just realized I put us in consultant church by adding all those amens, but here we are. [laughs] CHRIS: I'm glad you clarified that they were additions by you and not part of the original question coming in. STEPH: Sure. I don't want to speak on behalf of Stephanie. So I have some thoughts on the matter. I think there are a couple of different ways that we can talk about this particular question because I think there are different formats as to when you're estimating and who you're providing the estimate for. But I'm going to pause because I'd love to see what you think. How do you go about approaching crafting an estimate? CHRIS: Sure. I'm happy to share some thoughts. And for a bit of context, this question came in to us, frankly, many months ago, but I did send an initial reply to Stephanie because I know that sometimes we take a little bit of time to get back to folks. So if ever you do send in a question, know that one of us will probably respond via email earlier, and then eventually, will make it on the show. And again, just to say, we do so appreciate when folks send in these questions. It's an interesting way to shape the conversation and a way to get topics that you're more interested in into the fold here. But so the two main ideas that I shared in my initial reply were, first, is an estimate really necessary? I think that's a critical question because an estimate implies that this thing is knowable. 
And as many of us, probably all of us, have found out at some point in our lives as software developers, it's really hard to do software estimation, like wildly difficult. And not just the thing that we'll eventually get better at it, which you do, but there's just some chaos. There's some noise in this work that we do that makes it so, so difficult to get it right. So pretty much always, I will ask, like, do we need to estimate here? What if, instead, we were to flip the whole question on its head and say, let's set a deadline. Let's say two months from now that's our deadline. And let's ruthlessly reprioritize every single week to make sure that we're building something that's meaningful, and we're getting there. And obviously, we have to have some general idea of what we're doing. Is two months a meaningful amount of time to build a rocket to go to Mars? Probably not. But is it enough time to build an app that can allow users to sign in and manage a simple list of items? Yeah, we can definitely do that, and we can probably add a bunch of more features. The other thing that I think is worth highlighting is there's a bunch of stuff that is table stakes and very easy to do. But I would, whenever doing estimation, emphasize unknowns. So, where are the external integrations with other systems? Where are the dependencies that rely on other folks to provide some inputs into this process that we can't be certain where there'll be? In my experience, the places where estimates go awry are often these little intersection points that you're like, well, this will probably take a day, maybe two. And it turns out; actually, this can somehow balloon into a month. That's not a thing that feels comfortable saying in an estimation process, but it is definitely real. I've seen it happen so many times. And so it's those unknowns. It's those little bits that I would emphasize as part of the process if you do need to do an estimate and say, all right, here's the boring stuff. I think we can do that pretty easily. But this part, I don't know, it could be a week, could be three months. And frame it in that way that there is this ambiguity there. Because if someone's asking you for an estimate and they're looking for like it is seven days and two hours exactly, it's like, well, that's not realistic. That's not how this thing works. Unfortunately, I wish it did. But pushing back and changing the conversation is the thing that I have found valuable. I think there's some other really interesting stuff in here around the team dynamics that Stephanie is talking about. But I want to send this over to other Stephanie to see your thoughts because I'm super interested to hear what you have to say as well. STEPH: Oh, I like how you hinted at the team dynamics. Yeah, that could be a fun one to circle back to. So I love how you called out highlighting the unknowns. There are a couple of ways that this comes to mind for me. So there's the idea of the weekly or the bi-weekly estimates that we make as developers and designers. So let's say we as a team are getting together to focus on a chunk of work and decide what we can and can't get through. And that feels one of those the more you get to practice it more frequently; you get to ask a bunch of questions. And that feels like a good rehearsal and exercise of how to go through estimates. And I know you and I have pretty similar strong feelings around how those estimates are then treated by the company. 
They should really just be used for the team to talk through the complexities in the work to be done versus used to communicate outwardly as to this is when it's definitely going to ship. So there's that more immediate practice of providing estimates. And then there's the idea for more of a consultancy or a company, and someone is coming to you, so thoughtbot being a great example of then how do we work with teams that are looking to come to us and gain an estimate for getting a certain feature implemented? So actually, I went to the source on this one. I went to Josh Clayton, who does a lot of the conversations for the Boost team when it comes to talking with clients and about the potential work that they would like to be done. And mostly our work is often teams will hire us. They have specific goals in mind, but they're really looking to hire ongoing development and services. So they really want to add to their existing staff. And then it's going to be an ongoing relationship versus a hey, we need you to quote us for how long it's going to take to implement this particular feature. And on that note, we don't do fixed-bid work. So we don't say it's X dollars for specific features. But on the realistic side, customers are often capped by a budget. And so that estimate is very important to them because it could be a difference between it's a go versus a no go. So if you have larger companies that are like, "Yep, we want to engage with thoughtbot. We really just want additional development power and design services," that's great. For those that are smaller, it could be an individual product owner, and they need to say, "I really want this feature, but I only have this much money. And frankly, if I can't ship it by this time, I'm not going to do it because it's not worth the investment to my company." And then, in those cases, those are the ones that we're going to spend more time with them to talk about what does the fallback plan look like? And what's our opportunity for simplifying the features? And Josh, in particular, referenced this as systems thinking. So he will go through the idea of drawing out the set of steps, understanding the complexity of the different screens. So what are the validations? What are the external dependencies? What is owned by us and what isn't? What is the likelihood that we're going to get permission to simplify or remove complexity? And even then, when we start to provide some estimates, it's going to be in weeks. It's not in hours; it's not in days. It's going to be in a slightly larger time frame. And then we're also going to spend more time in the discovery phase to say, okay, well, we know you need to fix this particular issue, or you need to integrate with this particular service. So we're going to need to ask a lot more questions about your codebase. What problems have you already run into? Have you tried to do this before? Do we have experience doing this? Is this something that we can lean on and ask someone in the team? And, say, how long do you think it would take for us to work on this? And that's knowledge that isn't privy to everybody. It depends on where you're at in your career as to like, oh yeah, I've done this like five times before, and I know exactly how this stuff can fall apart. I know where the complexity lies. So I think that's why estimation is so difficult is just because it does often pull from that existing experience. 
And so, if you don't have that experience for a particular set of work, of course, it's going to be hard to estimate because you just don't know. So that was a very broad scope: as day-to-day developers and designers, I feel like we're constantly getting practice in estimating and communicating the progress of our work. And then, on the larger scope, if you are a consultant who's looking to give estimates to clients, it's understanding what you can sell them. Is it just ongoing development services? Or, if they are a smaller team and very focused, then what legwork can you do ahead of time to de-risk the project? And then understand how much control you're going to have to be able to simplify as you learn more as you go. Because you're going to uncover some things, and you're going to learn some things. And what's that collaboration going to look like? I do have one more concrete example I can provide around some of the smaller projects that we take on. So when we are helping someone that's, say, getting a new product out to market, then we do have a more deliberate three or four-phase approach where we first focus on discovery, ideation, and validation. And then we move on to iteration and then launching. And I really like what you said about providing a deadline because then that helps us scope aggressively as to what is the minimum thing that we can get out into the world that will be valuable. And then there's usually some post-launch support as well. But that's often how we will structure those smaller, more specific engagements.
We're going to do that." And so it's, again, just ways to gently shift the conversation around and say, what if we were to look at this from a different angle? Because just having a pile of work and saying, "That'll take six months," I've never seen that play out. STEPH: Yeah, to me, deadline is a bad word when the deadline is set by a team that's not doing the work. So if you have leadership or if you have someone else that is setting this deadline and then just passing that down to someone else to then fulfill, regardless of the feedback or how things are going, then yeah, then it can be a nasty thing, which I think is a little bit of in that question that you picked up on that you highlighted where there could be some interesting team dynamics that Stephanie called out, highlighting that I'm very much impacted as a project team member when the estimate isn't accurate. And I'm making some assumptions here because I don't actually know the exact situation that Stephanie is experiencing. But it sounds like someone else externally is setting these team estimates. And so then you're handed this deadline, and then stuff goes wrong, but you're still pressured to meet this deadline. And I've certainly been part of projects that are like that. And then that is one of the number one things that then often comes up in a retro or like, we don't have control over these deadlines, or we don't know why these deadlines are being set. And then people are working extra hours and working nights and weekends to then meet this arbitrary deadline that none of us signed up for, and that's just not fair to treat deadlines in that way. So full-heartedly agree that deadlines can be a very positive thing, but they need to be set by the people doing the work. And then there has to be discussions and updates about how is this going? Do we have control to simplify this? We thought we could do this with this particular external provider. It turns out that that's a nightmare. Is there another provider we can go with? Can we ship this incrementally? Like some features, you can't. They may have to go out wholesale. But is there a small chunk of this that we can deliver that is then a success that leadership and others can brag about? And then we can keep working on the rest of it. So it's always identifying what are the smallest wins, and how do we get there and getting buy-in from the team? Going back to something that you said earlier around, it is a privilege, where so as thoughtbot, we don't do fixed-bid work. And that is a nice thing for us to be able to focus on. But for people who do need to do fixed bid work and are relying on that, I think that often requires more legwork. And maybe that becomes part of your estimate. I'm just making up how I might approach this if I were trying to do fixed bid work. But there's a discovery phase that's very important. So maybe the first part of your estimate is I need to really understand the feature and see the different screens and know what materials we do or don't have. What does the codebase look like? Do I feel like this is a codebase that I can work quickly in? And is it going to be hindersome for me? But answering a lot of those questions to then help me paint a picture of, like, okay, this is a feature that I've implemented before, so I feel pretty confident that I could do this in a month. And then also communicating that this is my estimate but just know it's an estimate. 
And I will continue to update you each day or each week as to how things are going, and things may adjust. And we can always talk about ways of simplifying this. But I think that's how I would go about it; frankly, it's going to require more legwork for me to feel more confident before telling someone how long I think the work will take. I think that's a nice, broad scope of the different types of estimate work to be done, with the general idea of: if you can avoid estimates and go for more frequent updates, then that's wonderful. But then, if you are forced into a corner where you need to provide an estimate, then just do as much research and be as honest as possible and then still include the frequent updates. CHRIS: Yeah, that I think summarizes it quite well. STEPH: As a side note, it's been a lot of fun to feel like I'm referring to myself in the third person as Stephanie is working through this problem. So that's been novel. But yeah, thank you, Stephanie, for the great question. I hope that was helpful. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Giant Robots Smashing Into Other Giant Robots
399: thoughtbot Boost with Joshua Clayton

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Oct 28, 2021 39:57


Chad interviews Managing Director at thoughtbot, Joshua Clayton, about what a Managing Director at thoughtbot does, what makes Boost at thoughtbot different than other teams, and the belief in integrated teams of designers and developers company-wide. Empathize with Your Customer by Josh Clayton (https://thoughtbot.com/blog/empathize-with-your-customer) thoughtbot Boost (https://thoughtbot.com/boost) Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/) Follow Josh on LinkedIn (https://www.linkedin.com/in/joshuadclayton/) or Twitter (https://twitter.com/joshuaclayton) Check out Josh's website (https://joshuaclayton.me/) Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: CHAD: This is The Giant Robots Smashing Into Other Giant Robots Podcast where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Josh Clayton, Managing Director of the thoughtbot Boost team. Hey, I know that company. Welcome, Josh. JOSH: Hey, Chad. How are you? CHAD: All right. I'm back to the show I think. I didn't get a chance to look up the last episode you're in, but it was probably hundreds of episodes ago now. JOSH: Yeah, it's got to have been a while. CHAD: [laughs] Speaking of a while, you recently…now time is all messed up for me, but I know that you have been at thoughtbot for a long time. How long has it been? JOSH: It was 12 years in August. CHAD: It's been a wonderful 12 years, Josh. JOSH: I agree. I agree. CHAD: [laughs] JOSH: It's been fun. CHAD: So in that time, you have had a few different roles. But you've been a Managing Director for a while now. JOSH: Yeah, I think that's...let's see. It was seven and a half years for the Boston team. And then it's 10 and a half months with Boost. CHAD: So your background is as a developer. And you, like a lot of people who have a background as developers at thoughtbot, myself included, still do development on a fairly regular basis. What does the Managing Director job at thoughtbot actually do? JOSH: What don't we do [laughter] is maybe a better question. Effectively, we're running that team's business. So it involves some amount of software consulting. It involves software sales. It involves managing the profitability of the team. There are marketing functions, and, I don't know, anything and everything, hiring-related things. We opened up and recently filled our Development Director position, which was open for a couple of months over the summer. We've just opened up a Design Director position. So it's everything. [laughs] It's everything it feels like. CHAD: At the beginning of this year, we did an episode about the changes that we had made at thoughtbot to reorganize the teams around rather than geographic studios around the types of work that we would do on that team. And that's how the Boost team was created. So in that episode, we gave people an overview. But I'd love to hear in your own words, what makes Boost at thoughtbot different than the other teams, and what do you focus on? JOSH: So what makes Boost different? I think one of the drivers, one of the motivators, is to embed alongside existing product teams, engineering, and design teams, and help them get better, help them grow as well as ship features and fix bugs. So I think that the way that I position it is if there's an existing product that's deployed, people are using it day in and day out. Hopefully, it's been battle-tested. 
There are probably some funky areas of the code. Those are the codebases that we're operating in. It might be a team of two people; it might be a team of 200 people. But there is an existing product and an existing team. They're looking for our support to help make it better. CHAD: One of the things I love about Boost and the changes that we've made, especially relative to Boost, is that at thoughtbot, we really believe in integrated teams of designers and developers. And a big part of what we've always done has been to be a complete product design and development team that brings new products to market, big and small. And because we were one of the first consulting companies in the world to switch to Ruby on Rails, and because of that deep experience with Rails, and scaling, and working on existing products, we had a significant number of customers who engaged us for development for that expertise, and to help them scale, and grow, and hire, and implement best practices or solve problems that they were having in their existing codebase or on their existing team. And even though that was a significant part of our business, it was always something that seemed like maybe we didn't really want to be doing, or it was on the periphery of our marketing and positioning. And I was super excited when we officially created this team because it allowed us to acknowledge that this was a significant part of what we do, a significant part of our revenue, and to have a team of people that opted into doing this kind of work who would not only love the kind of work but want to even grow it and do more of it. And we haven't really had that for this kind of work. We've had individuals wanting to do it but not at an organizational level in an organization that supports it. That was more of a statement than a question, but reaction? JOSH: [laughs] I think you're spot on. It's always been interesting to me to hear that it's...obviously, at thoughtbot, we have been building MVPs and working with a lot of different types of companies over the years and helping them launch products. But I think that the type of work that I and, ideally, obviously, other people on the Boost team enjoy working on is existing platforms and working alongside existing teams. We talk about legacy systems, and I think they get a bad rap. And it's like, no, it's battle-tested. CHAD: [chuckles] JOSH: The business has proven its viability. It is still around years later. Conway's law applies in all of this stuff. Again, there are gnarly aspects of the code. But I think that's what the folks on the Boost team enjoy being challenged by: problems where these are larger systems. They've been around for a while, and they do get pretty gnarly.
JOSH: I remember one of the teammates had joined the kick-off call with me at the Boost inception, that very first meeting. And it was Joël. And Joël had asked pretty straightforwardly, "What is the expectation in terms of project rotations?" Because I think historically across the company, we'd aimed to do two to four-month engagements before we'd rotate folks. And I told him point-blank it's going to be six months to a year, probably. And I don't think it necessarily shocked him or other folks on the team. But I think when you're going into existing codebases and working alongside existing teams, there are inherently politics at play and complexities at play, where it might take you, as a new developer on a team, six weeks, maybe longer, before you actually feel comfortable and confident navigating the codebase and knowing where you can have and make an impact. And so, with some of the shifts there, it's been particularly interesting to watch the types of consulting conversations that we have within the team, just because it is a different beast, I think, than building and launching products.
JOSH: One of the things that we try to set out to do on the first day for any project is to open up a pull request to the codebase, whether it's an improvement to the onboarding like the README for the repository or whatever it might be. Ideally, we are finding ways to contribute on day one. And sometimes, that's frankly not realistic for individual contributors doing it from their own machines. But oftentimes, we'll know that going in. And as we onboard new folks and things like that, we'll say, okay, well, day one is going to be you pair programming over Tuple or some other tool so that you are able to engage and interact with the team and work on the code, even if there's still a bit of a lag between GitHub access and everything else that's the base of the onboarding steps. CHAD: So another common one that people bring us on to help with is scaling challenges in terms of the actual product itself; maybe that's performance or other scaling challenges. I'm working on a project now where that was how we first got involved. The service was failing, and it was only getting worse under the increasing scale. So, what are some tips that you have for how to effectively solve some of those problems while not bringing everything else that you need to accomplish to a screeching halt? JOSH: At the end of the day, you can't fix something that you don't know is broken. Or you might have a hunch in terms of, oh, I know this page or this set of pages are slow. I think so much of what we see is teams come in, and they're like, "We don't have New Relic set up. We don't have any instrumentation set up." So they can't measure anything. And so it's like, I know this page takes three or four seconds to paint, but I don't know why. And I don't know how to fix it. It's like, okay, well, the first thing that we need to do is set up some amount of performance monitoring and application tracking just to get a sense of what that's like. There was a potential customer who had reached out back in January of this year. So this was weeks into Boost's inception. And they said, "Listen, we've got some performance-related issues. We don't really know what to do, but we know the application is really, really slow on these couple of pages." It's like, "Are you using New Relic, or Scout, or anything like that?" They're like, "No." It's like, okay, that's the first thing that you need to do. And they came back about a month ago and were like, okay, "Here's the access to all of this data. Now we're ready to go," and it's like, "Yes!" They probably didn't need to wait that long. But it really speaks to the fact that in order to address some of the performance-related stuff, we need to have some sense of what is going on and where. I know Steph over on The Bike Shed Podcast had talked to Nate Berkopec. And she had floated one of the questions I had had. And he basically schooled me on that podcast and reiterated that it really is ultimately about measuring so much of what we're doing.
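Josh's "you can't fix what you don't know is broken" point is easy to make concrete. Here is a minimal sketch, not from the episode, of the kind of first-pass visibility a Rails app can get from its own instrumentation before adopting a full APM like New Relic or Scout; the initializer name and the 100ms threshold are illustrative choices:

```ruby
# config/initializers/slow_query_logging.rb
# A hypothetical starting point: log any SQL statement slower than 100ms.
# Rails publishes a "sql.active_record" event for every query it runs.
ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, started, finished, _id, payload|
  # Skip schema reflection queries and hits on the query cache.
  next if payload[:name] == "SCHEMA" || payload[:cached]

  duration_ms = (finished - started) * 1000
  Rails.logger.warn("[slow sql] #{duration_ms.round(1)}ms #{payload[:sql]}") if duration_ms > 100
end
```

A dedicated APM goes much further (per-endpoint traces, percentiles, history over time), which is why the advice in the episode is to set one up before anything else.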
CHAD: Yeah. One of the things, especially for teams that are having a problem but feel like they don't know what to do, is that it's tough when that's actually the case. But the reality is, a lot of times there's actually very low-hanging fruit; it's just that no one has the time or experience to identify it. And putting some monitoring in place, combined with actually taking the time to look through it, especially if it's the first time you've ever hit scaling problems...there's either a missing database index or some N+1 queries, and fortunately, once you identify that kind of problem, it's usually fairly quick to fix as well.
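Both kinds of low-hanging fruit Chad mentions really are quick fixes. A sketch against a hypothetical Post/Author schema, purely for illustration:

```ruby
# The classic N+1: one query for the posts, then one more per post for its author.
Post.limit(50).each { |post| puts post.author.name }   # 1 + 50 queries

# The quick fix: eager-load the association up front.
Post.includes(:author).limit(50).each { |post| puts post.author.name }   # 2 queries

# A missing index is usually a one-line migration.
class AddIndexToPostsAuthorId < ActiveRecord::Migration[6.1]
  def change
    add_index :posts, :author_id
  end
end
```

With monitoring in place, both problems tend to show up plainly: the N+1 as a burst of identical queries on one endpoint, the missing index as a single slow query.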
It's a different thing when there are maybe fundamental architecture things that are causing your app to have scaling problems. But it's very unlikely that those are the first problems you're ever experiencing. The first problems you're ever going to experience if your app is running slow are going to be things happening at the database level, or a missing index, or something like that. And it's very unlikely that significant architecture changes need to be put in place in order to fix the first scaling problems that any product usually has. JOSH: That wasn't the type, or at least I think when we got brought on, and you were working on your most recent clients, we were well beyond...maybe we weren't, maybe I'm misrepresenting or misremembering, but it feels like we were well beyond some of the low-hanging fruit. CHAD: Yeah, there was a batch of low-hanging fruit. One of the things, if you're working on a product that is scaling super rapidly, is that the low-hanging fruit can mask the other problems that are happening there because it all happened too quickly, all at the same time. And that was the case on this project. So it went from having hundreds of people using it to millions of people using it in the span of a month. And so there was low-hanging fruit. But removing the low-hanging fruit didn't make the app suddenly work again; it just made it so that we could look at the metrics and say, "Okay, those things are no longer the problem." The real problems are not being masked now. We can now identify the architectural changes that need to be put into place in order to operate at that scale. JOSH: Yeah, it's the low-hanging fruit when the orchard is on fire. [laughter] That is true. CHAD: Right. So you've got to peel it back like an onion. And this is the case with, I think, a lot of things...whether it's a technical challenge or a team challenge, you can't always come in and very quickly solve the root problem. You might not even know what the root problem is. You just have to start solving the problem that is obvious in front of you and learning more. And then you solve that one, and then you expose the next one, and you expose the next one. And even when you can identify the root problem, you'll be like, this is the problem, and everyone agrees it's the problem. It still might be too hard to actually fix that problem. It might be an organizational or a systemic problem. And instead, you say, "Okay, we have to iteratively solve that problem." And you start peeling back those layers to get to the point where you've positioned yourself to solve that core problem. And I think we face that a lot, particularly as external consultants. We're coming in, and we can't just be the bull in the china shop, because we haven't built the trust with everyone necessary to make the changes, or we don't know enough to know what the changes need to be. JOSH: Right. And I think the people aspect of all of these things, in my opinion...and I should caveat this with: I've been writing software professionally for almost 20 years. The people, in my opinion, are always the harder aspects of any engagement. It's very rare that we go into a technical project where it's like we literally cannot figure out a technical solution to this thing. We can usually figure it out. And it might take weeks or months to implement, especially as it spans multiple systems. But more often than not, it's really navigating the people and the relationships and building that trust, like you had mentioned, that will help dictate success within that project. CHAD: I have a line that I use fairly often, and it's that I really believe very few, if any, developers sit down and are like, I'm going to write a bad solution today, or I'm going to write bad code today. That's not what people are doing. I think the majority of people genuinely try within their entire capacity to do a good job. And so that means that when I'm coming into a situation where there are problems, or things are messy, or there were bad decisions or bad code written, it can't be chalked up to, like, oh, that was a bad developer, or that was a bad choice that was made. There was usually some people or organizational problem that caused that to happen in the first place. And merely fixing the bad code is not going to be what we should focus our time on. We probably need to do that. But if we don't fix the reason for that problem in the first place, it's just going to happen again. A really popular example of this is when we get approached to do a Rails upgrade on a very significant product that is very behind with Rails. First of all, it's very expensive to do that on a very significant project if there's no test coverage. People could hire us and spend a lot of money just getting to the next Rails version. But if they do that and don't solve the reason why they were so behind with Rails and have no test coverage and all that stuff along the way, it will have been wasted effort, because in a year or in two years, it will be back to the way that it was before. And that's one example that's very technical. But that kind of stuff happens all the time, even with squishy things [laughs] like the structure of a team or something like that. So if you could give advice to people that are struggling with a particular problem, what would you tell them? And let's maybe make it a little bit more concrete. Like, one thing that can happen as teams grow, like you said, is not everyone can know everything. And so it starts breaking down into pods; maybe that's one way to organize the team. And then you've gone into a bunch of individual teams working on discrete features. And that's happening fairly quickly. What are some ways to manage that change and manage that growth while maintaining continuity and making it go well? JOSH: I think a lot of it depends on the goals from the technical leadership in terms of areas of ownership. That'd be the first part that I would dive into, I think. We work with clients where they segregate front-end from back-end development. And that allows the teams to focus on React and TypeScript versus Ruby or Go or whatever their back end is written in. But if the goal is to share that knowledge, I think you've got options in terms of lunch and learns and shared code review and team demos and things like that. There are other ways to spread that information across the team so that everybody still has maybe not intimate knowledge of the code that's being written on a day-to-day basis, but they're at least aware of those patterns and practices and what each of the individual pods may be responsible for and is delivering.
So you have that team cohesion across more of the functional space, on the engineering side or on the product design side. CHAD: One thing that I think I would also add is that a lot of times, people take for granted how much better developers will do their job if they understand the reason or the business drivers behind what they're working on. And it's really easy for people to take that for granted because it's like, there's a ticket. It says what to do on the ticket. [laughs] JOSH: I have a blog post about this, actually. CHAD: [laughs] Okay, great. We'll put that in the show notes. But if someone doesn't understand the reason behind that, it's frequently not going to go the way that you're expecting. It's not as straightforward. JOSH: Yeah. You're left guessing what the underlying customer need is. And if you're guessing about the motivations, I don't want to speak in absolutes, but you're likely not going to address all of those core needs and implement an effective solution. I think in the blog post that I had written, ultimately, it was advocating for getting engineers to participate in customer interviews and really understand, like, how are people using the product that I am implementing features for? Because without that exposure, without seeing those pain points, oftentimes, it's okay, you've got a product designer or a product manager who's putting together this list of things to do. And if it's treated as a checklist, or oh, I need to go implement XYZ, without understanding why that needs to be done in the first place, what is the pain point? What is the customer facing? How are they feeling as they're going through the product? Without that context and without that empathy, the solution is going to be...it might functionally work. But will it be a good user experience? Will it be a good customer experience? It's a little bit more shaky, I think. CHAD: Yeah. And in a fast-moving, fast-growing startup where everyone has a lot to do, if you're experiencing this kind of problem, it might manifest as stories getting caught up at the beginning stages of when a developer is supposed to be working on them. And you find yourself having calls about what something is even supposed to be or supposed to do. And as the person who originated that ticket or originated that idea, you might have the feeling like, I can't believe we're having this conversation. Like, we don't have time to educate you about all the reasons why this is important; please just do what we've said on the ticket. That is a natural reaction to that thing. And so my advice would be, if you're feeling that, or your team is feeling that or something similar, there's probably something small you can do. It might not even be having developers participate in discovery or interviews or anything. It might be as simple as just making sure that on the ticket you say why it's important. If your ticket is a checklist of things to change or do, making sure that the reason why is communicated there will go a long way toward having the person who picks up that ticket get the context necessary to understand it and make good decisions as they work on it. JOSH: Yeah, I remember the switch to the jobs-to-be-done format. And I remember just being like, oh, [laughs] this makes a lot more sense because we can understand the context and the why. What is driving it? What is the problem there, rather than what is the solution? And I think that shift in mindset does go a long way, like you said.
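For reference, job stories in the jobs-to-be-done style are commonly phrased along the lines of "When [situation], I want to [motivation], so I can [expected outcome]", for example: "When my card is declined at checkout, I want to retry with a saved card, so I can finish my order without re-entering details." The example is invented here, and exact templates vary from team to team, but the point stands: the why travels with the ticket.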
CHAD: I think one of the ideas behind a well-functioning team...we talk about this idea of collaboration, which is: you feel like you're doing your best work, things are moving quickly, you're enjoying the people you're working with, you're building upon each other's ideas, and you're making things better as a team. And I was recently talking to someone else, a candidate for VP of Engineering at one of our clients, and they were talking about flow, the idea of flow. And it wasn't a way that I had articulated it previously, but it's certainly another way of thinking about it, and a good way of thinking about it, especially when it comes to what is the role of a VP of Engineering? Or what is the role of a CTO at a small company? Or what is the role of a development team or a product team in general? And one way to think about that is to work to maintain a flow state. And when we think about the processes that we have in terms of retrospectives, where every week we gather, and we identify things that could be better, and we come up with action items, a lot of the things that drive what could be better or what we want to try to do differently are identifying the things that take us out of the flow state and trying to fix that problem, so that items flow from beginning to end smoothly, that they go from concept to production smoothly as quickly as possible, and that individual people know where the next item to take is, that it's ready, so they're not blocked as they begin it. And they're able to get a pull request out, and get that reviewed quickly, get it to staging, get that reviewed quickly, and then deploy it to production. And in theory, even do continuous deployment so that that whole flow is automated as much as possible. So this idea of flow resonated with me. Has it resonated with you? JOSH: Yeah, I agree. I remember seeing the comparisons, people saying, okay, you're not actually looking for an engineer's passion necessarily. What you're looking for...and I come back to the hiring side of things because I'm doing that right now. But rather than looking to assess passion, it's assessing capacity and ability to get into a flow state more quickly. And obviously, so much of that is dependent on the team, and the communication style, the management style, and things like that. But flow is very much...I think once you get into that, and you know how to get into that, like you said, every waking moment is spent trying to optimize: how do I get into this space? Because when you're in that space, when you're flowing, be it at an IC level or above, there's nothing quite like it. It's just this calm state where everything feels like it's firing all the time perfectly, and it's good. And I think trying to make those spaces available for the rest of the team to get into that state and maintain that state makes a lot of sense. I remember years and years and years ago, there was the focus on maker versus manager time. We talk about anchoring manager time either at the start or end of the day or around lunchtime or whatever is a logical time for a break. The idea is to maintain that flow state for as long as possible so that nothing else eats into that. Because you do an hour of writing software, and then you've got a 30-minute one-on-one with a teammate. And then you go, and you run for another hour, hour and a half, and then it's another half-an-hour meeting. It's like you're being sucked away. It is really hard to get into that zone and then stay there.
So I think there's been a focus on it for a while. And I'm really glad that there's now a name and a thing that we can point to. CHAD: I don't want to lead people astray about what Boost is. There's a big, important part of Boost which may not be immediately obvious to people, and that is design. I may have insinuated that design wasn't part of Boost before when we were talking about it, but it is. And it's design on existing products and existing teams, which is a different need than going from concept to launch of a new idea. JOSH: So we had done some work with The New England Journal of Medicine. And their team, a very small internal team, had basically designed an application. And they ran into a number of accessibility and usability issues. They continued to hear feedback from a lot of folks saying, "This is not effective for what we're looking to do." And they'd engaged us a couple of different times, basically to rip apart and re-architect some of the application hierarchy and the usability side of things. It's been really interesting to see, okay, within the design side of operating within existing products, it's often tended or leaned more towards doing some amount of a design audit and usability audit, talking to customers. And less around we're going to start from scratch, and more around what are some of these iterative improvements that we can make to make the product more accessible, easier to understand, easier to navigate, easier to use, within what are, again, these very large platforms sometimes? CHAD: I know one common request we get is we're thinking about implementing a design system, or introducing a design system with the first version of the product. And we're having this growing pain around introducing new features. Is a design system the right thing to do there? That's a request we sometimes get. JOSH: Yeah. And I think a lot of it, at the end of the day, what we're looking to assess is: what are the knowns? How much do we know about the application, about the interface? We talk about, on the software development side of things, avoiding premature abstraction, and I think the same thing is true about design systems. Like, if we're going through a product, maybe it's a wizard, and apart from forms, the pages are individualized, and there aren't really any common patterns, it's like, okay, well, maybe now it doesn't make sense. But when you've got an application that spans hundreds of pages, there are going to be patterns across the different pages in terms of application hierarchy and componentization and things like that. And the work that we're oftentimes brought in to do is: let's assess what's there and figure out what the common patterns are here. And it's almost like refactoring and teasing things apart to where we get slight...it's a reduction in code use, or rather an increase in code reuse, because we're removing some of the idiosyncrasies that maybe were not teased apart into some component-based system. And that's fun work. [chuckles] It's really interesting. You get things like React when you're doing client-side rendering. And within the Rails side of things, there's a big push for the GitHub ViewComponent RubyGem, as another example. They're both ways to introduce these layers of abstraction that allow more of the engineering team to take on more of the application development, not because design isn't necessary, but because they're then empowered to reuse these logical sets of components.
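As a rough sketch of the componentization Josh is describing, here is what a minimal ViewComponent can look like; the component name and markup are invented for illustration:

```ruby
# app/components/button_component.rb
class ButtonComponent < ViewComponent::Base
  def initialize(label:, variant: :primary)
    @label = label
    @variant = variant
  end
end

# app/components/button_component.html.erb (the sidecar template):
#   <button class="btn btn--<%= @variant %>"><%= @label %></button>
#
# Any view can then reuse the pre-baked component:
#   <%= render(ButtonComponent.new(label: "Save", variant: :secondary)) %>
```

Because the markup and its options live in one tested place, a change to the button is made once and picked up everywhere it is rendered.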
And so it amplifies the dev work, but it also amplifies the design work, because the entire team is now leaning on that pre-baked work. It enables designers to shift focus and priorities to, okay, what are new components? What are different ways to position this or present this information? And also, for our designers, it frees up their time to talk even more to customers, to people using the system. CHAD: Well, I guess we'd be remiss if we didn't do a more blatant plug. You mentioned it earlier, but we do have an opening for Design Director on the Boost team. JOSH: We do. CHAD: So who would be a good fit for that role? JOSH: That's a great question. I think a couple of the things that we're really focusing on for this role is someone who has done the work, so to speak, at an IC level for a lot of the work that we're doing right now with customers. And so we ask point-blank on the application sheet: have you worked in HTML and CSS? Have you facilitated design exercises like design sprints? Have you facilitated user interviews? Because so much of what we're seeing from a vision and a strategy perspective is we need to take and leverage these tools and the skill sets that our teams are looking to hone in on, and we want to take it a lot further. I think there's a big opportunity there. So I think it's less around years of experience. I wouldn't say you need to have 15 years of management experience or to have been a director in order to apply for the role. But it's someone that can empathize with the team and has some opinions and some thoughts in terms of, okay, what is design? What is not only visual design but product design, accessibility? What does that look like in the next 5 to 10 years? Those are the people that I would love to see apply. CHAD: If someone's interested, where's the best place for them to do that? JOSH: thoughtbot.com/jobs. And the listing is there. CHAD: So what's next for Boost, Josh? What do you have your sights set on as we wrap up 2021 and head into the next year? JOSH: Oh boy. A lot of where the focus has been this year is continuing to double down on Rails as the technology stack and get our feet wet, not that our feet aren't wet, [chuckles] and continue to invest on the front end with React. I think for 2022, the big focus...we've run in the past some pretty successful custom training workshops for engineering teams. I think one that we had done earlier was late last year into early this year. We ran about 120 or so of their engineers through a custom RSpec course. So we worked with their engineering managers and some of their teams and got a sense of, okay, given this codebase and given the skill sets of this 100-plus-person engineering team, where should we focus? And we put together a two-day RSpec workshop, which we administered over, I think, five or six weeks. We did it virtually. And the reception from that was incredible. They ended up bringing us back on. We're getting ready to start another round of consulting work where we're embedding alongside their teams. So I see that as a huge opportunity for Boost coming into next year. And then I think one of the things that we've been pushing for is reducing some of the billing time for folks in leadership positions so that they're in a better position to support their designers and developers. And I'm really excited about the progress that we've made thus far. And I'm excited to continue carrying that into next year. CHAD: Awesome. Well, thanks for taking the time to talk to me. JOSH: Thank you.
CHAD: As part of this new season, Season 11 of the podcast, for the next few episodes I'm going to be talking to each of the managing directors at thoughtbot about their teams, the different kinds of work we do on those teams, the challenges, and what phases clients are in at those different stages of the product lifecycle. You can subscribe to the show and find notes for this episode and all the other episodes at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. You can find me on Twitter @cpytel. Josh, if folks want to get in touch with you or follow along with you, what are the best places for them to do that? JOSH: Definitely Twitter. My handle is @joshuaclayton, all one word. CHAD: Awesome. Thanks again. This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time. Special Guest: Joshua Clayton.

hexdevs
#12 Sharpening Your Coding Skills by Contributing to Open-source with Gui Vieira

hexdevs

Play Episode Listen Later Dec 4, 2019 56:46


Have you ever thought about contributing to open-source projects but don't know where to start or which project to choose? Gui Vieira has been the maintainer of the popular shoulda-matchers gem since 2017. We talked about the job of being an open-source maintainer. Not only that, but we also discussed TDD, his past entrepreneurship journey, and the process of moving to Canada. Thanks to our sponsors: VanHack helps great tech talent get jobs abroad. Links from this episode: Visit our Podcast page and subscribe to our newsletter! The shoulda-matchers gem, Open Source Friday, and Lendesk is hiring in Vancouver, BC!
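For context on what the gem does: shoulda-matchers packages common Rails model assertions into one-line RSpec matchers. A minimal sketch, with a hypothetical User model:

```ruby
# spec/models/user_spec.rb
require "rails_helper"

RSpec.describe User, type: :model do
  # Each one-liner expands into a full validation or association check.
  it { is_expected.to validate_presence_of(:email) }
  it { is_expected.to validate_uniqueness_of(:email).case_insensitive }
  it { is_expected.to have_many(:posts).dependent(:destroy) }
end
```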

RWpod - a podcast about the world of Ruby and Web technologies
Season 07, Episode 12. React Router v5, React-Redux v7.0.0-beta.0, Cables vs. malloc_trim, Helix, Lamby, Flexulator, DropCSS, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Mar 25, 2019 39:01


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Rails 6 adds ActiveModel::Errors#slice!; Rails 6 raises ActiveModel::MissingAttributeError when update_columns is used with a non-existing attribute; and Cables vs. malloc_trim, or yet another Ruby memory usage benchmark. An RSpec time issue (and it's not due to timezones); A Deep Dive into TensorStream; and A scary side of ActiveRecord's find. Helix: Improve the Performance of Rails with Rust; and Lamby - simple Rails & AWS Lambda integration using Rack. Web: React Router v5; React-Redux v7.0.0-beta.0; and "Time to bail on frontend": Andrey Sitnik on community stagnation, open source, and more. Advanced Map Shading; and React State: Choose Wisely. Announcing fromfrom; Flexulator; CrumbsJS - a lightweight, intuitive, vanilla ES6 fueled JS cookie and local storage library; and DropCSS - a simple, thorough and fast unused-CSS cleaner.

RWpod - a podcast about the world of Ruby and Web technologies
Season 07, Episode 05. Ruby 2.6.1, Hanami v2.0.0.alpha1, CSSans Pro, Neutralinojs, Notable, Finance.js, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Feb 2, 2019 35:10


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Ruby 2.6.1 Released; Announcing Hanami v2.0.0.alpha1; and From JavaScript to Ruby: A few of my favourite features. Replace Timecop With Rails' Time Helpers in RSpec; Often neglected API best practices; Rails handles large number of nested routes better than Sinatra; and Run a Ruby Slack Bot on AWS Lambda. Web: Google Play Store now open for Progressive Web Apps; Making setInterval Declarative with React Hooks; and CSS and JS Are at War, Here's How to Stop It. CSSans Pro - the Colourful, Sassy, CSS Font; Neutralinojs - portable and lightweight cross platform application development framework; Notable - a markdown-based note-taking app that doesn't suck; and Finance.js makes it easy to incorporate common financial calculations into your application.

RWpod - a podcast about the world of Ruby and Web technologies
Season 06, Episode 39. Upgrading GitHub from Rails 3.2 to 5.2, Trix 1.0.0, SystemJS 2.0.0, Browserino, TurtleDB, Tabulator, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Sep 30, 2018 35:39


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Upgrading GitHub from Rails 3.2 to 5.2; Presentation of Sorbet: A Typechecker for Ruby; Kafka vs Mosquitto - which tool is better for communication between microservices?; and Jon Rowe and Sam Phippen are RSpec's new leads. #to_s or #to_str? Explicitly casting vs. implicitly coercing types in Ruby; Nine tips for Rails migration mastery; and Browserino - a tool that parses a user agent string into a convenient object to work with. JavaScript: Trix 1.0.0 - a rich text editor for everyday writing; SystemJS 2.0.0 Release; and Grid vs Flexbox: Which Should You Choose? TurtleDB - a framework for developers to build offline-first, collaborative web apps; Tabulator - interactive tables and data grids for JavaScript; and Apify SDK - the scalable web crawling and scraping library for JavaScript.

RWpod - a podcast about the world of Ruby and Web technologies
Season 06, Episode 36. Rack Explained For Ruby Developers, JavaScript idiosyncrasies, Vuidget, Hygen, PostGraphile, JSLinux, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Sep 9, 2018 41:23


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Rack Explained For Ruby Developers; and Uploading files directly to S3 using Pre-signed POST request. Understanding Transducers in Ruby; Finding the right level of DRY for your RSpec test suite; and We need to talk about changelogs. Value based pagination in Ruby on Rails; Parallel map in Ruby - Nectarine; and Rails Presenters (video). JavaScript: GitHub Pull Requests in Visual Studio Code; Announcing styled-components v4: Better, Faster, Stronger; and Learning where you are looking at (in the browser). JavaScript idiosyncrasies; and You don't (may not) need Moment.js. Vuidget - How to create an embeddable Vue.js widget with vue-custom-element; Hygen - scalable code generator that saves you time; PostGraphile - instant GraphQL API for PostgreSQL database; and JSLinux - run Linux or other Operating Systems in your browser. Conferences: Ruby Meditation #23; Elixir Club 12.

RWpod - a podcast about the world of Ruby and Web technologies
Season 06, Episode 26. Rails 5.2 uses AES-256-GCM authenticated encryption, ECMAScript 2018, Face-api.js, Rabbit Ear, and more

RWpod - a podcast about the world of Ruby and Web technologies

Play Episode Listen Later Jul 1, 2018 26:50


Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Rails 5.2 uses AES-256-GCM authenticated encryption as default cipher for encrypting messages; Quest for Ruby Pattern Matching; and Find the cause of randomly failing tests with RSpec bisect. Frankenstein's ActiveRecord: How to stitch together complex ActiveRecord queries from simple parts; and Understanding Boolean Operator Precedence in Ruby (&&, and, ||, or). JavaScript: ECMAScript 2018 Language Specification Approved and Posted; Doing Vue after three years with React; and I abandoned React in favor of Hyperapp - Here's why. TensorFlow.js, Machine Learning and Flappy Bird: Frontend Artificial Intelligence; Face-api.js - JavaScript API for Face Recognition in the Browser with tensorflow.js; Lepto - automated image Editing, Optimization and Analysis via CLI and a web interface; Rabbit Ear - a creative coding javascript library for designing origami; and Tenori-on✨ - a dope electronic music instrument sequencer thinkie that Yamaha made for a hot minute.