Podcasts about chris thanks

  • 20 PODCASTS
  • 87 EPISODES
  • 37m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Sep 27, 2022 LATEST

POPULARITY

[Popularity chart, 2015–2022]


Best podcasts about chris thanks

Latest podcast episodes about chris thanks

The MIT/RESTO Mastery Podcast
Ep 57 - Part 2 "The Four Agreements"

Sep 27, 2022 · 57:20


In Part 1, Brandon and Chris talk through the first two agreements, Be Impeccable with Your Word and Don't Take Anything Personally, and rap about them extensively, both professionally and personally. The principles touch every aspect of our lives if we're consciously living them out. In Part 2, we dive into the last two agreements: Don't Make Assumptions and Always Do Your Best. It's another deep dive on a theme that is very important to Brandon and me: as leaders, we must first master leading ourselves before we can maximize our effectiveness leading others. Check out these agreements and let us know how they provoke or inspire you. Shoot us an email or a DM on LinkedIn! - Chris

Thanks for listening! If you enjoy the podcast, please be sure to leave a review, follow the show (don't forget to turn on your notifications!), and share with a friend. Your continued support is what makes this mission possible.

Thank you, partners! We love our listeners and want them to have access to the very best in tech and supporting service providers possible. We've vetted each of our partners and highly recommend all of them as key assets to help ensure our listeners' success. As a way to honor our listeners, each of them has provided competitive DISCOUNTS. Simply click on the link below and take advantage of our ongoing partnerships: https://www.floodlightgrp.com/partners

Floodlight Consulting Group
Equipping restoration business owners to establish processes, develop their teams, increase revenues, and grow their profits.

Floodlight Leadership Circles
The Floodlight Leadership Circles are designed to empower restoration business owners and key leaders through education, training, coaching, and community engagement.
  • Monthly Sessions
  • Open Office Hours
  • Expanding Partnerships of Industry Professionals
  • Customer Segment Leaders
  • Continually-Updated Content Library
  • Rich Non-Market Competing Community
https://www.floodlightgrp.com/leadershipcircles

Individualized Operations Consulting and Training
The Floodlight team provides customized consulting and training support for restoration companies looking to grow and scale their business.
  • 100% tailored to each client
  • Formal consulting sessions for owners and key leaders
  • Full one-on-one access to the Floodlight team and their network of industry experts
https://www.floodlightgrp.com

The MIT/RESTO Mastery Podcast
Ep 54 - "Quiet Quitting"

Sep 6, 2022 · 40:37


Quiet quitting is a topic that's been all over business news headlines and people's LinkedIn feeds lately. In this episode, Chris and Brandon take the subject on and give their perspective on how it's showing up in the restoration industry and what we can do to address it as leaders. Check out the show and let us know what you think. Shoot us a DM on LinkedIn. - Chris

Thanks for listening! If you enjoy the podcast, please be sure to leave a review, follow the show (don't forget to turn on your notifications!), and share with a friend. Your continued support is what makes this mission possible.

Thank you, partners! We love our listeners and want them to have access to the very best in tech and supporting service providers possible. We've vetted each of our partners and highly recommend all of them as key assets to help ensure our listeners' success. As a way to honor our listeners, each of them has provided competitive DISCOUNTS. Simply click on the link below and take advantage of our ongoing partnerships: https://www.floodlightgrp.com/partners

Floodlight Consulting Group
Equipping restoration business owners to establish processes, develop their teams, increase revenues, and grow their profits.

Floodlight Leadership Circles
The Floodlight Leadership Circles are designed to empower restoration business owners and key leaders through education, training, coaching, and community engagement.
  • Monthly Sessions
  • Open Office Hours
  • Expanding Partnerships of Industry Professionals
  • Customer Segment Leaders
  • Continually-Updated Content Library
  • Rich Non-Market Competing Community
https://www.floodlightgrp.com/leadershipcircles

Individualized Operations Consulting and Training
The Floodlight team provides customized consulting and training support for restoration companies looking to grow and scale their business.
  • 100% tailored to each client
  • Formal consulting sessions for owners and key leaders
  • Full one-on-one access to the Floodlight team and their network of industry experts
https://www.floodlightgrp.com

The Bike Shed
350: 21 Bell Salute

Aug 16, 2022 · 52:09


It's Steph and Chris' last show. Steph found a game, and if you've been following the journey, all of the Test::Unit test files are now live in RSpec. JWTs really grind Chris' gears. They wrap up with things they've learned, takeaways they've had, and their proudest podcasting moments. They also thank all the folks who've helped make The Bike Shed happen. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Microservices (https://www.youtube.com/watch?v=y8OnoxKotPQ) Transcript: CHRIS: One more round of golden roads, our golden. So here we go. STEPH: Oh, one more round of golden roads. Okay, maybe that's going to get to me today. [laughs] CHRIS: [singing] Golden roads take me home to the place. STEPH: [singing] I belong. CHRIS: Yeah, there you go. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together we're here to share a bit of what we've learned along the way, at least one more time. So with that [chuckles] as an intro, Steph, what would you say is new in your world? STEPH: Hey, Chris. Well, today is the big day. It is the day that you and I are recording our final Bike Shed episode, which we have all the feels about, and we will definitely dive into. But to ignore some of that for now, I have another small fun update I can provide about a new game that I found. So one of the things that's new in my world is I started playing a new board game with Tim; it's called Ticket to Ride. Have you heard of that? CHRIS: I have. I don't know if I've played it. I feel like it's a particularly popular one now. But I don't know if I've ever had the pleasure. STEPH: It's a very cute game, so we have the smaller version of it. For anyone that's not familiar, it's essentially a map. And then there's a bunch of spots where you can build trains and connect them, and then you get tickets. So your goal is that you're going to connect one location to another location. And then you get points and yada yada, but it's so much fun and especially the two-player version. It's like this perfect 20, maybe 30-minute game. I'll be honest; I'm not really a board game person. I always enjoy it. Once I get into it, then I'm like, this is great. I don't know why I was resistant to this. But every time someone's like, "Do you want to play a board game?" I'm like, "Not really." [laughs] I first have to get into it. But I have really enjoyed Ticket to Ride. That's been a really fun game to play. And it's been a nice way to, like, even during the day, we'll break for lunch and squeeze in a game. CHRIS: Well, I love good two-player games. They're hard to find. But when you find a good one, and it's got that easy pickup and play...I believe I'm going to now purchase this. And thank you for the tip. STEPH: Yeah, this is definitely one of those where it's easy to pick up, and then you can get the expanded board. So there's a two-player version, but then yeah, you can get one that's a map of the U.S. or a map of Europe. And I think it accommodates up to five players as the maximum, so not a huge group but definitely more than two. On a slightly more technical note, I have something that I'm very excited to share. 
It is a journey that you have been on with me, that everybody listening has been on this journey with me. And I'm very excited. I see you nodding your head, so I'm guessing that you're going to know where I'm headed with this. But I'm very excited to announce that all of the Test::Unit test files now live in RSpec. So that is a big win. I'm very, very excited for that to be a previous state of life and not an ongoing state of life. Because I have certainly developed too much niche knowledge around migrating these tests, and that became apparent to me when I was pairing with another developer that works with the client because they had offered...they had some time. They're like, "Hey, do you want help migrating a test file?" And I was like, "Sure." I was like, "But this is wonky enough, like, we should pair and work on this together because I just know some ins and outs. And I don't want you to have to learn a lot of the hard lessons that I've learned." And the test that we happened to pick up was very gnarly. It had a lot of mystery guests. And we spent, I think it was a good two hours. And we only migrated one of the tests, so not even a full file but one of the tests. And at the end of it, I was like, I know way too much about some of the oddities and quirkiness of this. And we got through it, but we decided that wasn't a good use of their time for them to go at this alone. So that's why I'm extra excited and relieved because I didn't want this task to carry on to someone else. So, hooray, we did it. CHRIS: Hooray. Just in time. You're Indiana Jones grabbing your hat right as you roll out and off to [laughs] be away from the project for a bit. So you stuck the landing. Well done, Steph. STEPH: Thank you. Thank you. So that's some great news. And then also, everything else in life is pretty much focused around getting ready for maternity leave. That's about to happen soon, and I am so ready. I have thoroughly enjoyed a lot of the things that I'm doing, [laughs] but goodness, being pregnant is hard. And I am very much ready for that leave. So also, a lot of the things that I'm doing right now are very focused on making sure everything's transitioned and communicated and that I just feel really good about that day of departure. That covers all the newness in my world other than the big thing that we're just not talking about yet. How about you? What's new in your world? CHRIS: Well, continuing to skirt the bigger topic that we will certainly get to in the episode, what is new in my world? I'm actually quite excited workwise right now. We have a much larger body of work that finally we got the clarity. All the pieces fell into place, and now we're sort of everybody rowing in the same direction. There's interesting, I think, really impactful code that we're writing for Sagewell right now. So that's really fantastic. We've got the whole team back together on the engineering side. And so we're, I think, in the strongest and most interesting point that I have experienced thus far. So that's all really fantastic. On a slight technical deep dive, you know what really grinds my gears? It's JWTs. JSON Web Tokens and I have never gotten along. It's never been a match made in heaven. And we have a webhook that comes from Plaid. Plaid is a vendor for connecting bank accounts and whatnot. And they have webhooks like many people do. So they can inform us when things change, lovely feature of how we build web apps these days. 
But often, there's a signature that says, "This is definitively from us, and you can trust us." And usually, it's some calculated signature, HMAC, or something like that. For some reason, Plaid's uses JWTs, and more than that, they use JWKs. So there's JWT which is the signature. That JWT itself is signed with a JWK. You have to fetch the JWK from their server based on the key ID in the header of the JWT. But how do you know if you can trust the JWT before you've gotten the JWK? All of this broke in a recent upgrade. We went from Heroku-20 to Heroku-22 to the new platform with Heroku, which bumped us to OpenSSL 3.0, and it turns out JWT doesn't work with it. And so that's sad. It's a no. It's going to be a no. It turns out the way that OpenSSL 3.0 works is incompatible with some of the code paths in JWT. And so I was like, wait, we just can't do this? And it's low-level cryptographic primitive stuff that I'm not comfortable messing around with. I'm not going to hop in there and roll up my sleeves. And even just getting to the point that I understood what was broken about this took like an hour and a half just to sort of like, wait, which is okay...so the JWT signs and encodes. And this will be a theme that we come back to later, but I think web development should be simpler. I think we should strive for simplicity. And this is a perfect example where I'm guessing Plaid uses JWTs and that approach to communicating security things often, but I've not seen it used much for signing webhooks. And, oof, it led to a complicated day. And it's unfixable now as far as I can tell. There is a commit on the JWT Ruby repo as of five days ago, but it doesn't build in our system. And it's not released. And it's just a mess. So yeah, engineering is complicated. I'm both wildly excited about what we're doing at Sagewell, and then today was this local minimum of like, oh, JWTs again. Again, we find ourselves battling. And you won today, but hopefully not for too long. STEPH: Oof, how did this manifest that you first noticed? So is it because a webhook suddenly stopped working, and that was like the error that rose up, and that's what helped you dive into it? CHRIS: Yeah, we have a little bit of code in the controller for where Plaid events come in. We calculate and verify the signature of the webhook to make sure that it's valid, and we reject it otherwise. And we alert ourselves via Sentry, and then we also have a Datadog scan that can show what's the status code of the response. Because these are incoming HTTP payloads or requests, and so we can see there were 200 up until this magical day when suddenly everything changed. And that was when we switched Heroku stacks. And then we can see it also in Sentry. So we're able to look at it, and we're like, why are none of the Plaid webhooks able to verify the signature anymore? That seems weird. And so then Datadog confirmed that it consistently was broken from this point in time. And then we were able to track that back. It was also pretty easy to guess because the error was "pkeys are immutable in OpenSSL 3.0," and that was the data. And I was like, oh, cool, that sounds fun. Let me go figure out what that means. STEPH: [laughs] Well, it's a nice use of Datadog. I remember in the past you were talking about adding it. And I was excited because I've never been at that point where a team has just introduced it; either a team doesn't have it, and they wish they had more insights, or they have it and don't use it. And nobody ever checks the board. 
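For readers who want to picture the verification flow Chris is describing, here is a minimal Ruby sketch of checking a Plaid-style webhook JWT against its JWK. It assumes the ruby-jwt gem's JWT.decode and JWT::JWK.import APIs, assumes Plaid signs with ES256 and includes a request_body_sha256 claim, and uses a hypothetical fetch_plaid_jwk helper for the key lookup; treat it as an illustration, not the code discussed on the show.

```ruby
# Hypothetical sketch: verify a Plaid webhook whose signature arrives as a JWT,
# which in turn must be checked against a JWK fetched from Plaid.
require "jwt"
require "digest"

def plaid_webhook_verified?(raw_body, verification_jwt)
  # 1. Decode WITHOUT verifying, only to read the key id (kid) from the JWT header.
  _unverified_payload, header = JWT.decode(verification_jwt, nil, false)
  key_id = header.fetch("kid")

  # 2. Fetch the JWK for that key id. `fetch_plaid_jwk` is a stand-in for a call
  #    to Plaid's webhook verification key endpoint via your Plaid client.
  jwk_hash = fetch_plaid_jwk(key_id)

  # 3. Import the JWK and verify the JWT with it. JWT::JWK.import and #keypair come
  #    from the ruby-jwt gem (2.x); this OpenSSL-backed code path is the part that
  #    broke under OpenSSL 3.0 on the newer Heroku stack.
  public_key = JWT::JWK.import(jwk_hash).keypair
  payload, _verified_header = JWT.decode(verification_jwt, public_key, true, algorithm: "ES256")

  # 4. Per Plaid's docs (an assumption here), compare a SHA-256 of the raw request
  #    body to the digest carried in the verified payload.
  Digest::SHA256.hexdigest(raw_body) == payload["request_body_sha256"]
rescue JWT::DecodeError
  false
end
```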
So that's a nice anecdote for Datadog helping you out. Yeah, I'm not envious of your situation, friend. CHRIS: I do love the cup half full take [laughs] that you have on the overall situation, but that's nice how Datadog worked out for you. And you know what? It was. Thank you, Steph, for once again being that voice of positivity. STEPH: I appreciate that you enjoy it because there are times that when someone points it out to me that I do that, I have to be like, "I'm sorry, I'm not trying to be toxic positivity over here. [chuckles] That's just how my brain works." CHRIS: Oh, you are definitively not toxic positivity. That's a different thing. Because you ended with but also, I feel bad for you, and I'm glad that I'm not in your shoes. So you are the right level of positivity. I don't think I could have talked to you for three and a half years as co-host on a podcast if I didn't appreciate the level of positivity or the general approach that you bring to thinking about stuff. STEPH: Okay. Well, to borrow a phrase from Matt Sumner, who has been a guest on the show, cool, cool, cool, cool. I'm glad my positivity has been well calibrated. And I was about to say I'm interested to hear how this turns out for the team. [laughs] But we're in an awkward spot where I mean, you and I, we can still totally chat. But listeners won't get to hear the rest of that particular saga. I mean, you can share. I mean, you do you. I'm setting all sorts of boundaries for you right now. Okay. And now I'm just rambling, and I'm getting weird with it. Because the truth is that, you know, we won't be back. And this is our final episode together. So I think let's just go ahead and rip off the Band-Aid. Let's dive into it. Let's talk about it. Given that it's our last episode that we are recording, we thought of a couple of things that we'd like to talk about. You brought up a great idea that I'm excited to dive into. Do you want to lead us in? CHRIS: Sure. Well, if we go back all the way to Episode 172, that is the first episode that you came on as a guest. I actually continue to really love the title of that episode, which is What I Believe About Software. And it both captured that conversation really well, but also, more generally, it's actually become the tagline of the show when we do our little introduction. What do we believe about building great software? Et cetera. And I think that's been the throughline of the conversations that we've had is what remains true. What are the themes? Not necessarily the specific technologies, although we certainly talk about that. But what do we believe about building great software? And so today, I thought it would be fun for us to talk about what do we still believe about building great software? It's roughly three and a half years or so that we've been doing this. What's still true? STEPH: Oh, well, I have the first unequivocal one, the thing that I still believe about building great software, and that's you should hire thoughtbot. That's definitely the way to go. We'll help you get it done, not that I'm biased in any way. CHRIS: No. I'd say collectively between us; there's zero bias with regard to thoughtbot or any other web development shop out there. But thoughtbot is the best. STEPH: All right, perfect. So we've got the first one, the clutch one of hire thoughtbot. And then I also really like this topic. And I still think back to that first episode that I recorded with you and how much fun that was and how that really got me to start thinking about this. 
Because it was something that, at the time, I didn't really reflect on a lot in terms of what does it take to build great software? I was often just doing the day-to-day actions but then not really going high-level think about it. So I'm excited this is one of the topics that we're revisiting. So for the next one, this one is, I don't know, maybe it's a little cutesy, but I was trying to think of an alliteration that I enjoyed. And so this one is be an assumption assassin. So what assumptions are you making? And then how can you validate or disprove them? And that is something that I find myself doing constantly. And it always yields better work, better questions, better software, better code, better code reviews. And that's my first one is be an assumption assassin and identify what assumptions you have. And I had a really good example come up today while I was having a conversation with Joël about something that I was looking to merge. But I was a little hesitant about it because there are some oddities that I won't dig in too deeply. But essentially, there's a test that I migrated that highlights an existing concern in the code. And I was like, should I go ahead and merge this test that documents it, or should I wait to fix that concern and address it? And he brought up a good point. And he's like, "Well, we're assuming it's a bug and an issue, but it may not actually be depending on how the software is being used." And so then he was encouraging me to reevaluate that assumption that I had where I'm like, oh, this is definitely a problem to, like, I don't know, is it a problem? Let's ask somebody. CHRIS: First off, I love that as a theme, as one of the things that you still believe about software. Second, I believe you correctly said that you were looking for an alliteration, but my brain heard acronym. STEPH: [laughs] CHRIS: And so then I was like, B-A-A-A. Is it BAAA? What are you going for there? Oh, you just wanted a bunch of As. Okay, I got it now. Secondly or thirdly, I think I'm on my third now. Apparently, within Sagewell team culture, one of the things that I'm most known for is... there are two phrases: one is just to name it, and the other is to be clear. And these are the two things that I do apparently constantly so much that it's become a meme within the team. It's just like, okay, everybody's been talking. But I just want to make sure we're on the same page here. So just to be clear, or just to name it, here's what I'm seeing. But I agree; I think taking those things...what are the implicit bits? What are the assumptions? And making them more explicit. Our job as developers is just to yell at computers all the time and make them try and do human stuff. And there's so much room for lossy conversions at every point in that conversation chain. And so yeah, being very clear, getting rid of assumptions, love it. It's all great stuff. Actually, in a very related note, the first on my list is that code is for humans to read. This is one of the things that I believe most deeply and most impacts the way that I write software. Any given piece of functionality that we want to author in our code feels like 10, 20, 50, frankly, almost infinite different versions of the code that would produce nearly identical functionality. So at the end of the day, the actual symbols and strings of text that we bring together to write the code is all about other humans, other people on your team, you five months from now, you a week from now, frankly, or me. I'm going to say me, me a week from now. 
I want to do future me and everyone else on the team a solid and spend that extra 10% of okay, I have something that works now, but let me try and push it around and try and massage it into a shape that is a little more representative of how we're actually thinking about the code, how we talk about it as an organization. Is that the word that we use to describe that domain concept? Maybe we could change that just a little bit. Can I push more of this into the private API? What actually needs to be known here? And I think that's where I'm happiest is in those moments because that's where all of the parts of the job come together, the bit where I trick a computer into doing what I want and simultaneously making it so that that code is revisitable, clear, expressive, all of those things. So yeah, code is for humans. And that's true across every language, and framework, and domain that I have worked in. And I've only believed it more and more so over time. So yeah, that's mine. STEPH: Yeah, I love that one. That's one of the things that comes to mind when people talk about disliking code reviews. And I can imagine there are a number of reasons that people may have had a poor experience with a code review process. But at the end of the day, if you're not getting that feedback or validation from fellow humans, then how do you know that you've been successful, that you've written something that other people can follow up on? Which goes back to the assumptions in terms of like, you're assuming that you have written something that your future self or that other people are going to be able to read and maintain down the road. So yeah, I love that one. One of the other things that I still hold really true to building great software is prioritize early and often. So always be checking in to understand with your users, with your tech concerns, with data that you may have, new insights, and then just confirm that yes, you and the team are constantly working on the thing that has been prioritized and that is the most important. And also, be ready to let go. That can be really hard. I have definitely had those moments in my career where I've spent two weeks working really hard on something. And then we've realized that the thing that we were pursuing isn't that valuable, or it's something that users don't need or actually want. And so it was better to let go of it than to pursue it and ship it anyways. So that's one of my other mantras that I have adopted now is prioritize, prioritize, prioritize. CHRIS: Unsurprisingly, I agree wholeheartedly with all of that. We're still searching for that thing, that core thing that we disagree on other than Pop-Tarts and IPAs. But I don't know that today is the episode that we're actually going to find that. But yeah, prioritizing is such a critical activity. And it is this interesting collaboration point. It gets different groups together. It's this trade-off. It's this balance. And it's a way to focus on and make explicit the choices that we're making. And we're always making choices. We're always making trade-offs. And so being more explicit, being more connected and collaborative around those I believe in so, so, so much. So love that that was something on your list. Let's see, next up on my list is reduce complexity, just sort of as an adage, just always be reducing complexity. 
It is amazing to me in my time, particularly as a consultant, but even now, this is something that I hold very true is just it's so easy to grow a system in anticipation of future complexity or imagine that the performance concerns that we're going to run into will be so large that we must switch from Postgres and a nice, simple atomic database into a sharded, clustered Kafka queue adventure. And there are absolutely cases that make sense for that sort of thing. But at a minimum, I beg of you, anyone starting a new system, don't start with microservices. Don't start with an event queue-based system. These are wildly complex versions of what often can be done with so much simpler of an application. And this scales through to everything. What's the complexity of an API? Do we need caching in that API layer? Or can we just be a little bit inefficient for a little while and avoid the complexity and the overhead of caching? Turns out caching is a tricky thing to get right, just as an aside. And so the idea of like, oh, let's just sprinkle in a little bit of caching. It'll be easy, and then we'll get better performance, like, yeah, but did you get it right? Or did you introduce a subtle bug into your program that's going to be really hard to debug later? Because do you cache in development? Well, maybe, I'm not sure, could be. So over time, this is something that I've sort of always felt, but I've only ratcheted it up. It's only something that I've come to believe in more and to hold more firmly to. I think earlier in my career, it was something that I felt, but I would more easily be swayed by aspirational ideas of the staggering amounts of traffic that we would be getting soon or the nine different ways that the data model will expand. And so, we should code the current version in anticipation of that. And I have become somewhat the old man on his lawn yelling at the clouds like, "Nah, we don't need it yet. We can grow to that." And there's a certain category of things that are useful to try and get out in front of and don't introduce additional complexity, but they're a tiny, tiny list. And so, for most things, my stance is what's the simplest thing that we can get away with right now, that still provides a meaningful experience to our users, that doesn't compromise on security or robustness or correctness but just solves the problem we have right now? And over and over and over again, that has served me incredibly well. So yeah, keep that complexity at bay. STEPH: That is one that I've definitely struggled with. And frankly, it works in my favor, that idea of keeping things simple. Because I'm terrible when it comes to predicting the future or trying to build things in a way that I just don't have enough information to really drive the architecture or the application that I'm building. So anytime I'm trying to then stretch and reach for the future in those ways unless I really have a concrete understanding of I am building for these particular scenarios, it's really hard to do. So I very much like keeping it simple and not optimizing before you need to. And it reminds me of I think it's Mark Twain, who has a quote, "Worrying is like paying a debt that you don't owe." And that's something that comes to mind for me when also writing code and building features and software is that I tend to be someone who will worry about stuff. And I'm like, oh, is this going to be easy to extend? 
Is it going to be what it needs to be six months from now if we need to add more features to this and build on top of it? And I have to remind myself it's like, well, let's just wait. Let's wait till we get there and we know more. One of my other ideas that couples nicely with the one that you just shared in regards to keeping things simple and then waiting for those needs to arise is that mistakes are going to happen. They are a part of the process. As we are learning and growing and we're stretching our skills and trying things out, things are going to go wrong. We're going to introduce bugs. And to take those opportunities, that's when we start to use that feedback to then improve things like observability, like capturing logs, and how we handle error reporting or having a plan for emergencies. So maybe that's the part of worrying that can pay off is thinking through, all right, if something does break, or if something gets shipped that shouldn't, then what is our plan in how we handle that? How do we roll back? Or how do we get things back to a stable build? CHRIS: It's funny. I was actually visiting with a friend this past weekend, and we were chatting more generally about life things but the idea of worrying and anticipation and trying to prepare for every bad outcome. And there's the adage of an ounce of prevention is worth a pound of cure. But increasingly, both in life, depending on the context, and in code, I've found that I've shifted to the opposite of it's impossible to stop everything. There are going to be bugs that are going to get out there. There are going to be places where we code things incorrectly. And I would rather...I still want to try as hard as I can to get things right, to be clear. I'm not giving up on trying. But I'm all the more focused on how do we know and how do we recover when those things happen? So it's interesting that you just described exactly that, which, again, is a very human life conversation, and yet it applies to the code. STEPH: I love that rephrasing of it. Instead of the mistakes are going to happen, it's, like, how do we know, and how do we recover? I think that's perfect. I've also found that by answering the how do we know and how do we recover, that really helps you build trust with clients as well. Because again, things are going to happen, things are going to break. And the more prepared you are for that and then the better plan that you have, and then they can watch how you execute that plan, and it's going to establish a lot of deep trust with other engineers and also the team that you're working with, that you have been thoughtful and that you have ideas on how are we going to address this? Instead of waiting for that moment to happen. That's going to happen too. You're going to make decisions in the heat of the moment. But I have found that to be a really useful way to establish yourself with a team in terms of I care about this team and these processes and this application. So how do we handle the bad times, not just the good times? I do want to circle back because you alluded to the fact that you and I, we've tried to find things that we disagree on. And so far, Pop-Tarts and beer have been the two things that we disagree on. But I do have a question for you that maybe I will disagree with you on. But I need to know some more about it first. You have alluded to there's the Brussels snack, (Oh, I'm going to get this wrong.) Brussels sprout snack hour or working lunch, something combination of those words. 
[laughs] And it's the working lunch that has stuck out to me, and I've wanted to ask you about it. So here I am. I'm asking you about it. What's a working lunch? What's the Brussels snack happy hour, snackariffic working lunch look like? CHRIS: This is fantastic. I love that you waited until the last episode that this was rolling around in the back of your head. And you're like, are you making the team work through lunch? And now, on this final episode, we get to address the controversy that has been brewing in the back of your head. Spoiler alert, no, this is just ridiculous nomenclature. These are two meetings that we have that are more like, let's get the dev team together and talk about stuff that's in our platform sort of developer experience. Or stuff in observability often is talked about in this context because it doesn't quite impact users, but it's how we think about the work. And so there are two different meetings that alternate every other week. So every Friday afternoon, we do this, but it's one of two meetings depending on the day. So there's a crispy Brussels snack hour that was the first one that was named, which was named purely for nonsense reasons because we don't have anything else that's named nonsensically in our organization. And so I was like, oh when we name this meeting, we should make it nonsense because we don't have any other...We don't have, you know, an SOA microservices fleet with Barbie doll and Galactus and all of the other wonderful names. Those are references to the greatest video ever about microservices; if you've not seen it, that will be in the show notes. It's required reading. But anyway, we don't have that. And so we thought, let's be funny with the name of this. So the crispy Brussels snack hour is one, and the crispy Brussels we wanted something that was...the first one is a planning meeting. The second is like, let's actually sort of ensemble program. Let's get the four of us together, and we'll work on some of the stuff that we're talking about here but as a group. And so I wanted the idea of we're working, and so I was like, oh, this will be the crispy Brussels work lunch. But it's purely a name. It's the same time slot. It's 3:00 o'clock on a Friday afternoon. [laughs] So it is not at all us working through lunch. I don't think we should work through lunch. I'm concerned that you thought that for a while, and you were just like, I'm a little worried, but I'm not going to bring it up. But I'm glad we got to cover this before we wrapped up this whole Bike Shed co-hosting adventure together. STEPH: I feel relieved and also a little robbed of an opportunity for us to have something that we disagree on because I thought this might be a thing. [laughs] CHRIS: We can continue searching for that thing. But maybe it's okay that we agreed on most stuff for the run [laughs] of this fun, little show that we did together. STEPH: Yeah, that's gone on quite a time. We've got like three years together that we have managed to really only find two, I mean, very important of course, two things. But yeah, it's been pretty limited to those two areas. And each time that you'd mentioned the work lunch, I was like, huh, I need to ask about that because I have feelings about it. But then, you always would dive into very interesting stories of things that came out of it, and I quickly forgot about it. So this feels good. This feels like very good important closure. I'm glad that this finally surfaced. 
But circling back, since I took us on a detour for a little bit, what are some other things that you still hold deeply about building great software? CHRIS: I've really got one last thing on the list. It's interesting, there's not a ton technically in this list, which I think represents broadly how I feel about software, and I think how you feel about software. It's like, it's actually mostly about how the people interact at the end of the day. And you can program in any language or framework, and you can get the job done. We certainly have our preferences and things that we enjoy. But the last one really rounds us out, which is think about the users. I always want to be anchoring the conversations that we're having, the approach that we're taking to building the software in what do the users think? Who are our users? What do we know about them? What do they care about? How are they using this technology? How is it impacting their lives? We've talked a number of times about potentially actually watching the sales demo as an engineering team, trying to understand what's the messaging that we're putting out into the world for this piece of software that we're building? Or write along with customer support and understand what are the pain points that people are hitting? And really, like, real humans, what are they experiencing? Potentially with a name attached. And that just changes the way that you think about the software. There's also even the lower-level version of it. As we're building classes or modules, what are the public facets of that, and what are the private API? What's the stuff that we're hiding away? And what's the shape that we are exposing to the outside world for varying definitions of outside? And how can we just bring in a little bit of empathy to try and think about, again, in the case of like the API for a class, it's probably you on the other side of it, but it's future you in a slightly different mindset with a little bit less information and context on the current problem that you're working on. And so, how can we make things easier for ourselves in the code, for our users at the end of the day? How can we deliver real value that is not mired in the minutiae of technical complexity and whatnot but really is trying to help people live better lives? That's a little too fancy as I say it out loud. But it is kind of the core of what I believe, so I'm not going to take it back. STEPH: I love how you've expanded users where more traditionally, it's people that are then using the software. But then you've expanded it to include developers because that is something that is often on my mind and something that I just agree with wholeheartedly in terms of when you're writing software; as you mentioned before, software is for people. And so we want to include others. And it does improve people's lives. People show up to work every day, and if you've been thoughtful if your past you has been thoughtful, it's either going to give you your future self a better day, or it's going to give other people a better day. So I think that's a very fair statement, improving lives by being thoughtful in regards to focusing on the users, people consuming software, and working in the codebase. CHRIS: I know we've talked about this before, but I was having a conversation with one of the developers on the team at Sagewell just last week, and they were mentioning how they really loved working on admin features. And I was like, oh, that's interesting. Let's talk more about that. 
And it was really it's that same thing that I think you and I have discussed of like there's that immediacy. There's that connection. These are actually colleagues, but you can build software to make their day better. You can understand in detail what the pain points are. What's the workflow that as you watch it, you're like, oh, I could put a button up in the corner of the screen that would automate almost all of this and your day would be that much faster? Oh, let me do that. That's exciting. And so I love that as another variation of it, like, yeah, there's for other developers. There's also for the admin team or other users in the organization of the software. There are so many different versions of users, but I think I think we build a better thing if we think about them more. STEPH: I have definitely worked with teams where I can tell that certain people are demoralized, and it comes down to they feel frustrated and often disconnected from the people that they are building for. And so then you really feel isolated. I'm pushing code around, but I don't really see the benefit or the purpose of it. And I think that's very hard for developers who typically want to build something that's going to be useful and not feel like it's just going to be thrown away. So connecting your team to those users, I certainly understand. Getting to build something for your colleagues and then they get to say how much they like it is an incredible, rewarding experience. You also touched on something that I really appreciate, where you highlighted that a lot of the technical decisions that we make are important, but they're not at the center of the things that we believe when it comes to building great software. And that's something that I will often reflect on. Like, as we were thinking through these particular ideas that we still hold true today, how mine are more people and process-focused and rarely deep in the technical weeds. And there are times that I think, well, shouldn't there be something that's more technical, something that's very concrete? Yes, you should build your code this way or build your application or use a specific technology. But after all the projects and teams that I've been a part of, that's just usually not the most important part. And so I appreciate that you highlighted that because sometimes I have to remind myself that, yes, those things can be challenging, but it's often with people and process. That's where the heart of great software lies. CHRIS: That's a fantastic phrase, I think, that really encapsulates all of the conversations that we're having here. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. 
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: Actually shifting gears a little bit, so we've just talked about what we still believe about building great software. I'm intrigued. We've been chatting for a number of years here on this microphone, these microphones. We have separate ones because we're in different states. But I'm interested; what have we changed our minds about? What have you changed your mind about, Steph? I got a couple of ideas, but I'm intrigued to hear yours. STEPH: Nothing. I've never been wrong. I've stuck to everything that I've ever thought. CHRIS: That must be boring. STEPH: [laughs] Yeah, that's totally not true there. There are definitely things that I've changed my mind about. One of the things that I've changed my mind about is that people who know the most will ask the fewest questions. That's something that I used to consider the trademark of someone who is a more experienced senior developer in terms of you really know what you're doing. And so you typically don't ask for help or need help very often. And so, I'm going way back in terms of things that I have changed my mind about. But I have definitely changed my mind where people who know the most are actually the ones that do a really great job of constantly asking questions and asking for feedback. And I think that is still a misconception that people still carry forward. The idea that if you're asking a lot of questions or asking for help that you are not as skilled in your work, and I view it as quite the opposite, that you are very good at what you do and that you know precisely the value of your time. And then also reaching out to others for help, and then also just getting validation on things that you may have concerns around. So that's one I've changed my mind on is that I think the more experienced you are, the more questions you tend to ask. CHRIS: Oh, I love that one. It's a behavior that I know...I think we've talked about this before. But as consultants, we try and model it just the like; it's totally fine to ask questions. And because we often come in with less context, it makes sense for us to be asking questions, but I will definitely intentionally lean into it in those contexts to be like, everybody keeps throwing around this acronym. I don't actually know what that is. Let me raise my hand. And my favorite moment is when people disagree on what the acronym or what the particular word or what the particular project is. Like, I ask the question, and people are like, "Oh, it's this," and someone across the room is like, "Wait, that's what it means? I thought it was this totally other thing." I'm like, cool, glad that we sorted that out. Glad that we got that one up in the air. But I actually remember many, many, many years ago, at this point, there was a video series of...PeepCode was the company, and there was the Play by Play series. 
And so there were particular prominent developers, particularly in the Ruby community. And they would come and sort of be interviewed and pair program. And it was amazing getting to watch these big names that you had heard of, like Yehuda Katz is the one that stands out in my mind. He was one of the authors of merb, which was a framework that was merged with Rails, I want to say around the 3.0 time. And just an absolute, very big name in this world and someone that I looked up to and respected. And watching this video, they had to Google for particular API signatures and Rails methods. They were like, "Oh, how does that work? Is it link to and then you pass the name?" I forget what it was specifically. But it was just this very human normalizing moment of this person who has demonstrably done incredible work in our community and produced very complex software still needs to Google for the order of arguments to a particular method within Rails. I was like, oh, okay, that's good to know. And with complete humility in the moment, I was just like, yeah, this is normal. Like, it's impossible to hold all of that in your head. And seeing that early on shook me off the idea that that's the thing to do is just memorize everything. It's like no, no, get good at asking the questions. Get good at debugging. Get good, yeah, asking questions. It's a core skill rather than a thing that you grow out of. But I definitely shared early on I was like, not allowed to ask questions, that'll be scary. STEPH: I love that example. Because counterintuitively, to me, it demonstrates confidence when someone can say, "Oh, I don't remember how this works," or "Let me go look it up." And so I just very much appreciate when I see someone demonstrating that level of confidence of let's keep going. Let's keep making progress. I'm going to ask for help because that is totally fine, and we are in a safe space. Or I'm going to create a safe space for us to do that. One of my favorite versions of this where you shared like if you ask about an acronym and then people disagree, one of my favorite versions is to ask about a particular area of the codebase and be like, what would you say this code is doing here? What do you think users do here? Like, what is the purpose? What's the point of this? [chuckles] And then having people be able to say, "Oh, yeah, this definitively does this thing." Or people are like, "You know, I'm not sure. I don't even know if that code is getting run." That's one of my favorite outcomes of asking questions. How about you? What's something you've changed your mind about? CHRIS: I made a list of a couple of things like remote is on there. I didn't know if I'd like remote. I wanted to try it for a while. Tried it, turns out I like it a lot. It's complex. You got to manage it, whatever. But that I think everybody's talked about that a bunch. I think probably the most interesting one is deadlines. Initially, in my career, I didn't really feel anything about them. And then I experienced the badness of deadlines. Deadlines are bad. Deadlines are things that come down from on high and then you fail to hit them, and then you're sad. And maybe along the way, you're very stressed and work long hours to try and get there. But they're perhaps arbitrary. And what do they even mean? And also, we have this fixed scope, and they're just bad. And then there was a period of my time where, like, deadlines are bad. The only thing that we do is we show up, and we make the software as quickly as we can. 
But in reality, there are times that we need that constraint. And in fact, I have found a ton of value in deadlines when used intentionally. So we can draw a line in the sand, and we can say, at this point in time, we will have a version of the software. We have a marketing campaign that we need to align with this. So we got to have something at that point. And critically, if you're going to have a deadline, you've now fixed a point in time. You need to flex other things. And critically, I think the thing to flex is the scope. So we need to have team management. We have user accounts right now, but now we need to organize them into teams. That is like a category of functionality. It's not a singular feature. And so yeah, we can ship teams in the next quarter. What exactly that means is up in the air. And as long as we're able to have conversations essentially on a day-to-day at least weekly cadence as to what will make it in by that deadline and what won't, and we're able to have sometimes the hardest conversations but the very necessary conversations of the trade-offs that we have to make as we're building that software, then I find deadlines are absolutely fantastic tools for focusing and for actually reducing scope but in a really useful way. And getting something out there in the hands of users so that you start to get real feedback so that you start to learn, is this useful? What are the ways that people are using this? What should we lean into and do more of? What maybe should we roll back, actually? So yeah, deadlines. First, I didn't know them, then I feared them. Now I love them but only under the right circumstances. It's a double-edged sword, definitely. STEPH: I, too, have felt the terribleness of deadlines and railed against them pretty hard because I had gone through a negative experience with them but have also shifted my feelings about them where they can be incredibly useful. So I really liked that's one of the things that you've changed your mind about. It also reminds me of one of the other things...I'm going to circle back for a moment to one of the things that I believe about creating great software is to not wait for perfection, and deadlines are a really good tool that helps you not wait for perfection. Because I have also seen teams really struggle or sometimes fail because they waited until there was something perfect to present, and then you realize that you've built the wrong thing. So I do want to transition and talk a bit about the show because it's our last episode, and we should talk about it, and the fun adventures that we've had and some of the things that we've learned or things that we're feeling in the moment. So given that it's been a wonderful three years for me, it's been four years for you since you've been a host on the show. How are you feeling? CHRIS: I'm feeling a bunch of different things sort of all at once. I am definitely going to miss this immensely. Particularly, I loved when I started, and I got to interview a bunch of thoughtboters and other people from the community. But frankly, three-plus years of getting to chat with you has been just such a delight. There's been an ease to it. We kind of just show up and talk about what we're doing. And yet there are these themes that have run through it. And it has definitely helped me hone and shape my thinking and my ability to communicate about what I'm thinking. I've learned that you have a literal superpower to remember the last thing that you were talking about. 
Listeners, you may not know this, but we are not quite the put-together folks that we may sound like in these recordings. We have a wonderful editor, Mandy Moore, who makes us sound so much better than we are. But we'll often pause and stop and then discuss what we want to talk about next. And Steph always knows the exact phrase that she or I left off on. And it has been so valuable to the team. But really, it's been just such a pleasure getting to have these conversations. It's also been something that has just gently been in the back of my mind at all times. And so, I'm observing the work in any given week as I'm doing it. It's almost like meditation in a certain way, whereas I'm working on something, like, oh, this is actually really cool. I want to take a note about this and talk about it on The Bike Shed with Steph. And having this outlet, having this platform to be able to have those conversations and knowing that there are people out there is fantastic, although it's very weird because really, every one of these recordings is just you and I on a video call. And so there is an audience, I'm pretty sure. I think people listen to the show; I don't know, occasionally they write in, so it seems like they do. But at the end of the day, this really just feels like a conversation with a friend, and that has been so valuable to have. And yeah, I'm definitely going to miss that. It's been a wonderful run, you know, four years is a long time. It's about as long as I've done most things in my career. And so I'm very happy with what we have done here. And there's a trite saying that isn't...yeah, whatever; I'll just say it, which is, "Don't be sad that it's over. Be glad that it happened." And I guess I'm still going to be sad that it's over. But I am so glad that I got the opportunity to do this, that you joined in this adventure and that we got to chat each week. It's been really delightful. STEPH: I really liked how you refer to this as being a meditative state. And that is something that I have certainly picked up from you and thoroughly enjoyed that I have this space that I get to show up and bring these ideas and topics and then get to talk them out with you. And that has been such a nice way to either end the week or start a week. I mean, it doesn't matter. Anytime that we record, it's this very nice moment of the week where we get to come together and talk through some of the difficulties and share our stories. And that's been one of my favorite moments is because you and I get to show up and share everything that's going on. But then when someone writes into the show or if they send a tweet or something and they share their story or their version of something that happened, or if they said that we made them laugh, that was one of my favorite accomplishments is the idea that something that we have done was silly enough or fun enough that it has brought them joy and made them laugh. So I, too, I'm very, very much going to miss this. It has been a wonderful adventure. And I thank you for encouraging me to come on this adventure because I was quite nervous in the beginning. And this has definitely been an aspect of my life that started out as something that was very challenging and stretching my limits, and now it has become this very fun aspect and something that I get to show up and do and then get to share with everyone. And I do feel very proud of it, very much in part to Thom Obarski, who was our initial producer and helped us have that safe space to chat about things. 
And now Mandy, who keeps the show running smoothly and helps us sound our best week to week. So it's been a wonderful adventure. This is going to be hard to let go. And I think it's going to hit me most. Like, this was one of those things as we're talking about it, it's, like, I'll see you next week. This will be fine. But I think it's going to hit me when there's something that I want to talk about where I'm like, oh, this would be great to talk about, and I'll add it to The Bike Shed Trello board. And I'll be like, oh yeah, that's not a thing anymore, at least not quite in the same way that it was. CHRIS: So what I'm taking away from this is that you're immediately going to delete my phone number the minute we hang up this call and stop recording. [laughs] STEPH: Oh yeah. I preemptively deleted. So that's already done. Friendship is over at this point. CHRIS: That's smart. Yeah, because you might forget otherwise in the heat of the moment as we're wrapping this whole thing up. STEPH: [laughs] CHRIS: But actually, on that note, in a slightly more serious vein, again, there's this weird aspect where the audience is out there. But we're very sort of disconnected, particularly at the moment in time where we're recording. But it has been so wonderful getting various notes from listeners, listener questions, but also just commentary and the occasional thanks and focusing; oh, you pointed me in the direction, or you helped me think through a complicated piece of work or process a problem that we were having. And so that has been just so, so rewarding. And one of the facets of this that has been so interesting to me is being able to connect to people and basically put out there what we believe about software, and for the folks that resonate with it and be able to have that connection instantly. And meeting people, and they're like, "Oh, I've listened to The Bike Shed. I like all these things." I'm like, oh, cool, we get to skip way further into the conversation because I've already said a bunch, and you say you like that thing. So, cool, we're halfway through our introductory chat. And I know that we agree about a bunch of things, and that's really wonderful. And frankly, I'm going to miss that immensely. So for anyone out there who's found something valuable in this, who's enjoyed listening week to week, or perhaps even back to Upcase for things, I would love to hear from you. I'd love to connect to folks. Send me an email, Twitter. I'm on all the places. I'm Chris Toomey in various spots or ctoomey.com on the internet. Chris Toomey on GitHub. I'm findable, I think. Chris Toomey developer will probably get you there. But I would really love to hear from folks, to connect to folks, you know, someday down the road; I think I'll be hiring again. And that'll be fun. I would love to work with some of the folks that have listened to this show or meet you at a conference, or if I happen to be traveling to a city or you're traveling to Boston. Really for me, so much of what this show is about is connecting with people around how we think about building great software. And so, I would love to continue that forward into the future. So yeah, say hi, if you're interested. STEPH: I agree. That's been one of the most fun aspects of being co-host of the show. And I'll be honest, you are welcome to contact me, but I am going to be off-grid for probably six months. [laughs] So just know that there will be a bit of a delay before you hear back from me. But I would definitely love to hear from you. 
I also want to say a very heartfelt thanks to a couple of people, just folks that have made this journey incredible and have made it so much fun. One, in particular, is everyone at thoughtbot for their continuous stream of knowledge. I mean, frankly, my software opinions wouldn't be half as interesting if it wasn't for everyone at thoughtbot constantly sharing their knowledge and being a source of inspiration. So I deeply appreciate everyone that has contributed to topics and ideas and just constantly churning out blog posts because those are phenomenal. And I also want to give a shout-out to my husband, Tim, because he has listened to The Bike Shed for many years and even helped out with a number of show notes when that was something that you and I used to do before Mandy made our life so much easier and took that over for us. And has intervened a number of times when Utah mid-recording would decide it's time to play. So I want to give a very special thank you to him because he has been a very big supporter of the show and frankly helped me manage through a lot of the recordings for when I had an 80-pound dog that was demanding my attention. CHRIS: I think continuing on the note of thanks; similarly, I'm so grateful to thoughtbot as an organization for everything that is represented in my career. It's a decade-plus that I have been following and then listening to the podcasts and then joining the organization, and then getting so many wonderful opportunities to learn about this thing called web development. And then, even after I left the organization, I was able to stay on here on The Bike Shed and hang out and still chat with you, Steph, which has been really wonderful. So thank you, thoughtbot, so much. Thank you to Joël Quenneville, who will be the continuing host of the show. This show is not going anywhere. And, Steph, you and I aren't really going anywhere, but we won't be around anymore. But we are leaving it in the very, very capable hands of Joël, and I'm super excited to hear the direction that he takes it and Joel's incredibly thoughtful and nuanced approach to thinking about programming and communicating. So I think that will be really wonderful. And lastly, I definitely want to thank Derek Prior and Sage Griffin, the two original hosts of this show, who really produced something wonderful, and for many years, I think it was about four years that they hosted together. I was an avid listener despite actually working at the company the whole time and really loved the thing that they produced and was so grateful that they entrusted me with continuing it forward. And hopefully myself and then with the help of you along the way, we've...I think we've done an okay job, but now it is time to pass the torch or the green lantern. That's the adage I've been going with. Gotta pass the lantern, pass the mantle on to the next one. So, Joël, it's going to be in your hands now. STEPH: Yeah, I'm so looking forward to future episodes with Joël Quenneville. They are going to be fabulous. So I've been thinking in terms of this being our finale episode and then a fun ending for it, so there's a thing called the 21-gun salute, which is the military honor that's performed by firing cannons or artillery. Not to be confused with the three-volley salute, which I definitely confused earlier that is reserved and used at funerals, which this is not. So using the 21-gun salute, I was like, hmm, it is The Bike Shed, and we have this cute ring ring that goes. 
So I think for our finale, we should have a 21-bell salute as we exit the shed and right off into the sunset. CHRIS: I love it. I couldn't imagine a more perfect send-off. So with that, what do you think? Should we wrap up? STEPH: Yes, but I have one more silly thing to add. I've thought of a new software idiom that I'm excited about. And so, this may be my final send-off into glory that I'd like to share with you. And I think that we should make like a shard and split. CHRIS: [laughs] I so appreciate that in this moment, this final moment that we have together, you choose to go with a punny joke. It is so on brand for the show. It is absolutely perfect. And I think with that note, shall we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
347: Tracking Velocity

The Bike Shed

Play Episode Listen Later Jul 26, 2022 38:50


Chris talks about a small toy app he maintains on the side and working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know if you're working quickly enough. They answer a listener's question about where to get started testing a legacy app. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. jnicklas/capybara_table: (https://github.com/jnicklas/capybara_table) Capybara selectors and matchers for working with HTML tables Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Just gotta hold on. Fly this thing straight to the crash site. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. CHRIS: And I'm Steph Viccari. STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs] CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are. STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened. CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari. STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody. CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time. STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world? CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those, for me, serve different purposes, productivity things, or whatever. But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain. It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it.
And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into. First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while? Turns out no, because I couldn't even get the app to boot locally, something about some gems or some I think Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass. So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something. The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough. STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes. But then there's this where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take. CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate. And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. 
And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something. But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to or is, you know, the flip side of how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen. There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have asked by one other person on Stack Overflow and no answers four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults prints some funny characters to your terminal. And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common. But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within a Capybara, particularly within feature spec. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it. And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it. And so it's another one of those cases where testing can be a really useful constraint from the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found. STEPH: That's awesome. I've definitely seen other thoughtboters when working in codebases that then they'll add really nice helper methods around that for like checking does this data exist in the table? And so I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future. CHRIS: Yeah, really, just such a delightful thing.
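A rough sketch of the kind of assertion Chris describes here, assuming the have_table_row matcher from the jnicklas/capybara_table gem takes a hash of column headers to cell values the way he outlines it (the exact signature may differ from the gem's README); the path, table contents, and column names are made up for illustration:

# spec/features/team_members_table_spec.rb
require "rails_helper"

RSpec.describe "Team members table", type: :feature do
  it "lists each member with their role" do
    visit "/teams/1/members"

    # Only passes if a semantically correct table (thead/th, tbody/tr/td)
    # has a row whose "Name" and "Role" cells contain these values.
    expect(page).to have_table_row("Name" => "Steph", "Role" => "Developer")
  end
end

On a failed assertion, the matcher is described as printing an ASCII rendering of the rows it did find, which is what makes a mismatch quick to diagnose.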
And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you? [laughter] STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do. So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for. On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, listen to me talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done. CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience. STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered. So migrating over a controller test over from Test::Unit to then RSpec, there are a number of controller tests that issue requests or they call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine. But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user, and then perform another action that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request back because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle. And that was something that I just hadn't, I guess, I hadn't done before. 
I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit test. And they're not going to emulate, give us the full lifecycle that a request spec does. And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land. CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there's features specs. I feel like I'm conflating...there's like controller requests and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up? STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow. I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case. CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions. There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in. STEPH: I agree. 
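A sketch of the difference being discussed: the multi-user flow Steph mentions, written as a request spec so each call runs through routing, the middleware stack, and the real session rather than a single controller action in isolation. The routes, factories, and sign-in endpoint are hypothetical stand-ins:

# spec/requests/project_access_spec.rb
require "rails_helper"

RSpec.describe "Project access", type: :request do
  it "scopes projects to whichever user is currently signed in" do
    alice = create(:user, password: "password")
    bob = create(:user, password: "password")
    project = create(:project, owner: alice)

    # Sign in as Alice through the real endpoint so the session cookie is set.
    post session_path, params: { email: alice.email, password: "password" }
    get project_path(project)
    expect(response).to have_http_status(:ok)

    # Sign out, then sign in as Bob; a controller spec would not exercise
    # this session churn, which is what made the original tests misbehave.
    delete session_path
    post session_path, params: { email: bob.email, password: "password" }
    get project_path(project)
    expect(response).to have_http_status(:forbidden)
  end
end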
And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough? So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace? And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity? CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting. 
There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that. There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets. And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual. I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction. STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well. For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase? Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in. I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But that's something that I keep in mind that just if some other people are working a lot of evenings or just working extra hours to keep that in mind that if they do have a higher velocity to not include that in my calculation as heavily. I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals. And then the other metrics that unfortunately can be gamified, but it's still something to look at is like, how many hours are you spending on a particular feature, the tickets? But I like that phrasing that you used earlier of what's your progress? 
So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off. And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing. But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like. I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster? And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours. Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed. I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to? I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough. CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation. So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. 
And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations. Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose. Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations. But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations. And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that. If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do. So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question. STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself. 
So if there are tickets that then maybe the ticket was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identified some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily. And I feel like that also gives us more opportunities to then iterate on what's the goal? Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on this as the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing. CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. Just make sure that we don't ship the wrong code, and you will be a critical member of that team. But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph. "I want to start by saying thanks for putting out great content week after week." We are very happy to do so." So a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all. My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where would you start with this. STEPH: Legacy code, I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with. So I'm going to try not to steal any of those because they're wonderful. But to add to that list that is soon to come, often where I start with applications like these where I need some testing in place because, as this person mentioned, that's how they work. And then also, at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is what's your rollback strategy? 
So if you don't have any tests in place and you send something out into the world, then what's your plan to then be able to either roll back to a safe point or perhaps it's using feature flags for anything new that you're adding so that way you can quickly turn something on and off. But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start. And then, as for adding the test, typically, you start with testing as you go. So I would add tests for the code that I'm adding that I'm working on because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything that is going to be feature level, integration level specs because I'm at the point that I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making. And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows or the most common user flows by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager or just someone that you're taking the app over that could also give you some help in letting you know what's the most crucial features that users are relying on day to day and then prioritizing writing tests for those particular flows. So then, at this point, you've got a rollback strategy. And then you've also highlighted what are your most crucial user flows, and then you've added some really high level probably slow tests. Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well. So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went. So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you. CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. 
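One way the feature-flag-as-rollback idea Steph raises might look in practice: a deliberately bare-bones, ENV-backed flag rather than any particular gem (a real project might reach for something like Flipper instead). The flag, class, and method names here are hypothetical:

# app/models/feature_flag.rb
class FeatureFlag
  # Flags are read from the environment so new behavior can be switched
  # off without a deploy, e.g. FEATURE_NEW_INVOICE_TOTALS=true.
  def self.enabled?(name)
    ENV.fetch("FEATURE_#{name.to_s.upcase}", "false") == "true"
  end
end

# In the legacy code path being changed:
#
#   if FeatureFlag.enabled?(:new_invoice_totals)
#     NewInvoiceTotals.new(invoice).amount  # the new, unproven behavior
#   else
#     invoice.legacy_total                  # existing behavior stays as the rollback path
#   end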
Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks. And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected. Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that. Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things. And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it. So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind. One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true? Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited. And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. 
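A sketch of the sort of broad, admittedly slow feature spec both hosts recommend, one that only cares about protecting a crucial user flow while upgrades and refactors happen underneath it. Every path, label, and factory here is a hypothetical stand-in for whatever the legacy app actually does:

# spec/features/checkout_spec.rb
require "rails_helper"

RSpec.describe "Checkout", type: :feature do
  it "lets a returning customer sign in and place an order" do
    customer = create(:customer, password: "password")
    product = create(:product, name: "Annual Plan")

    visit sign_in_path
    fill_in "Email", with: customer.email
    fill_in "Password", with: "password"
    click_on "Sign in"

    visit product_path(product)
    click_on "Add to cart"
    click_on "Check out"
    click_on "Place order"

    # Assert on what the user sees and on the one record that matters,
    # not on implementation details the upgrade is allowed to change.
    expect(page).to have_content("Thanks for your order")
    expect(customer.orders.count).to eq(1)
  end
end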
Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation. The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day. So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done. And so it gives sort of more than a nice to have, more than this makes me feel comfortable. It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think that's an easy business case to sell is, I guess, the way that I would frame it. So those were some of my thoughts. STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project. If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything? CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph? STEPH: I think those are fabulous thoughts. I think you covered it all. CHRIS: Sounds good. Well, in that case, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. 
STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
346: Occasional Biscuits

The Bike Shed

Play Episode Listen Later Jul 19, 2022 37:13


Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon. Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "how do you get acquainted with a new code base?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Greenland (https://www.imdb.com/title/tt7737786/) Geostorm (https://www.imdb.com/title/tt1981128/) San Andreas (https://www.imdb.com/title/tt2126355/) Armageddon (https://www.imdb.com/title/tt0120591/) This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I've been watching more movies lately. So evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster like end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles] I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well.
And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby. CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling. STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride. CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact? STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right? CHRIS: They sure do. STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go? CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say. It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not? STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones. CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great. STEPH: Oh yeah, that's a good combo. CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that. STEPH: I think they addressed that in the movie, though. 
They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts. CHRIS: Right. That is what they say in the movie. STEPH: [laughs] Okay. CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. [laughs] It's like; actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to space button, then you go to space. STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. [laughs] So that might have also played a role. CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out. STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up. CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the armadillo; I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically. And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just cease up. It just doesn't work. I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like; I don't know what's going on. But I think the gears on the drilling machine just fundamentally at a very simple mechanical level cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time. STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that this is the thing that grinds your gears are the gears on the armadillo. CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me. In a similar way, so I had a moment while traveling; actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. 
And I was like, yeah, sometimes people ask me how to pronounce my name. And I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think... I was just so anchored to the one truth that I know in the world that my name is Toomey. And that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong. I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah. STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'. CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two. STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced. I can't remember if we've talked about this on air but early on, I pronounced my last name differently for like one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why it just seems to have a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people. And I think Chad Pytel had just walked in at that moment when I was saying that to you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, Viccari, 'Vicceri' I will accept either. In a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, well, they're like, I feel like I'm competing, or I'm working against RuboCop. RuboCop wants me to shorten my test example lines, but yet, I'm not sure what else to do about it. And someone's like, "Well, you could extract more into before blocks and to lets and to helpers or things like that to then shorten the test. They're like, "But that does also work against readability of the test if you do that." So then there was a nice, short conversation around well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability. And so then it was a discussion of okay, well, how much flexibility do we add to it? And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be. 
And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things because this codebase has far more dry tests than they do duplication concerns. So I'd rather lean into the duplication at this point." And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that then removed the check for that particular; how long are the examples? And it was lovely. It was just like a nice, quick win and a wonderful discussion that someone had brought up. CHRIS: Ooh, I like that. That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it's also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and just deeply, deeply entrenched opinions is the thing that I find interesting. And I think I'm increasingly in the space of those sort of, thou shalt not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase. Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And they more are in the what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-line length to a 120-line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that. But like TypeScript, I care deeply, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14 because TypeScript tends to tell me more truths I find, even though I have to jump through some hoops to be like TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it. I also really liked the way you referred to RSpec's feedback to you was that RSpec was fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that when they tell me no, I'm going to be like, stop fussing, nope, nope. [chuckles] STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic, fussing, and then go get a biscuit, and it'll just be a delightful day. CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially? STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing. CHRIS: Sold. STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs] CHRIS: And occasional biscuits, I hope. STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world? CHRIS: Let's see. In my world, it's a short week so far. So recording on Wednesday, Monday was a holiday. And I was out all last week, which very much enjoyed my vacation. It was lovely. 
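Circling back to the RuboCop change Steph described a moment ago: the cop in question is usually RSpec/ExampleLength from the rubocop-rspec extension, and removing it comes down to a couple of lines of configuration. A minimal sketch of what that .rubocop.yml entry might look like, assuming rubocop-rspec is where the rule comes from:

# .rubocop.yml
require:
  - rubocop-rspec

# Let test authors use the space they need; optimize for readability over an arbitrary cap.
RSpec/ExampleLength:
  Enabled: false

# Or, if removing it outright feels too permissive, loosen it instead:
# RSpec/ExampleLength:
#   Max: 30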
Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now trying to just get back into the swing of things. Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm but in a good way is how I would describe it. We have a major facet of the Sagewell platform that we are in the planning modes for right now. But we need to get a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work. We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time is we have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we're an integration of a lot of services under the hood. And we have a pattern for how we interact with and process, so we actually persist the webhook data when they come in. And then we have a background job that processes and watch our pattern to make sure we're not losing anything and the ability to verify against our local version, and the remote version, a bunch of different things. Because turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that. I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times; I want to say. And in squinting at it from a distance, we're like; it is indeed identically the same pattern in all eight cases or with the tiniest little variation in one of them. And so we've now accepted like, okay, that's true. So the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but use the generic webhook processing pattern that we have in the app. So that's nice. I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see. Check and make sure they are, in fact, the same and not just incidental duplication. STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in where you still got to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing. Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion. 
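For anyone trying to picture the pattern Chris is describing, a rough sketch is below. All of the class names are hypothetical (this is not Sagewell's actual code): the controller persists the raw payload first, then hands processing to a background job, and the signature check Chris gets to a few minutes later would typically sit at the very top of a controller like this.

# app/controllers/webhooks/acme_controller.rb ("acme" is a stand-in vendor name)
class Webhooks::AcmeController < ActionController::API
  def create
    # Persist the raw payload before doing any work, so nothing is lost
    # if processing fails and the event needs to be replayed later.
    event = WebhookEvent.create!(
      source: "acme",
      external_id: params[:id],
      payload: request.raw_post
    )

    # Do the real work asynchronously; the endpoint only acknowledges receipt.
    ProcessWebhookEventJob.perform_later(event.id)

    head :ok
  end
end

# app/jobs/process_webhook_event_job.rb
class ProcessWebhookEventJob < ApplicationJob
  def perform(event_id)
    event = WebhookEvent.find(event_id)

    # Hand the stored payload to a vendor-specific handler, then record that
    # it was processed so audits and retries can see what has been handled.
    Webhooks::AcmeHandler.new(event).call
    event.update!(processed_at: Time.current)
  end
end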
But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract. That's probably going to cause more pain than it's worth at this point. CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that when we looked at it, we were like, oh, that's interesting, because of how we implemented the webhook pattern, this is happening. And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything. We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us. As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them. And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain. STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up." CHRIS: Yep, indeed. It's definitely like a correction early on in my career and overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish it were a more commoditized offering of platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. 
And there's polling, and there are lots of different variations. But the one thing that I'm surprised by is that webhook signing I don't feel like people take it serious enough. It is a case where it's not a huge security vulnerability in your app. But I was reading someone who is a security analyst at one point. And they were describing sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal like OWASP Top 10 Cross-Site Request Forgery, and SQL injection, and all that kind of stuff. But one of the other ones he highlighted is so often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. It's an extra thing where you do the checksum math and whatnot and take the signature header. I've seen somewhere they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please. So it's either have an API key so that we have some way to verify that you are who you say you are or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it different, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that. So this is just a notice out there to anyone that's listening. If you got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application. STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that. CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system. In order to take advantage of it, you need to know what endpoint to hit to, what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. 
In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph? STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. They asked a really thoughtful question that was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, that is one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase. So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application. I really want to know what are users experience as they're going through this app? That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? Like to look through the app and see how are things organized? Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? If not, is there something that then I can run locally that will then show me that test coverage? I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. 
What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app. CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then login to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? But understanding why is this thing, not in code, but in actual practical, observable, intractable software? Beyond that, your point about the routes, absolutely, that's one of my go-to's, although the routes there often is so much in the routes, and it's like some of those may actually be unused. So a corollary to the routes where available if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't. And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. And so I like to think about it from that idea of entry points. If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic, Whenever the Cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which as an aside, I really love that that's committed and expressive in the code. Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application? I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button. But like, give me something weird and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works. The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app. 
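As a small, concrete example of the "entry points committed in the code" idea, a Whenever schedule file reads roughly like the sketch below; the task and class names are made up, but the DSL is the gem's:

# config/schedule.rb: the Whenever gem compiles this into crontab entries,
# so reading it tells you which background entry points fire on a timer.
every 1.day, at: "4:30 am" do
  rake "reports:nightly_summary"      # hypothetical rake task
end

every :hour do
  runner "StaleSessionSweeper.call"   # hypothetical class
end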
And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency. Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box. But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this. STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one. I also liked how you highlighted that you wish you'd do something fancy around doing a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's all those things. I think it would be fun to make it easy. So I do that with new applications. But I agree; I typically more just dive in like, hey, give me a ticket. Let me go from there. I might do some simple command-line checking. So, for example, if I want to look through app models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity. CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way. STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. 
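A quick-and-dirty version of the "which model is the largest?" check Steph mentions can be a few lines of plain Ruby run from the application root, no extra tooling required:

# Print the ten longest files under app/models, longest first,
# as a cheap way to spot potential God objects in an unfamiliar codebase.
Dir["app/models/**/*.rb"]
  .map { |path| [File.foreach(path).count, path] }
  .sort_by { |lines, _| -lines }
  .first(10)
  .each { |lines, path| puts format("%5d  %s", lines, path) }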
Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature or before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application? That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me. One other place that I do browse is I go to the gem file. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me. And also, you called it earlier pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase. CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration. I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is. Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo. 
Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff. STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on. But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, of like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? How many people need to review my code? There are all those process-y questions that I think would ideally be included on the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs] CHRIS: We've been burned before. STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question. CHRIS: What would Michael Bay do? STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
345: Fire Drill

The Bike Shed

Play Episode Listen Later Jul 12, 2022 49:22


Chris is getting ready to travel, and of course, Sagewell started the day with an incident, a situation, if you will... Steph talks books perfect for vacations and feels sufficiently scarred regarding still working with moving fixtures over to FactoryBot. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Back to Basics: Boolean Expressions (https://thoughtbot.com/blog/back-to-basics-booleans) Sarah Drasner tweet (https://twitter.com/sarah_edo/status/1538998936933122048) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: All right, I am now officially recording as well. Let me make sure my microphone is in front of my face. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What's new in my world? Today is an interesting day. We are recording on a Friday, which is not normal for us, was normal for a long time and then stopped, but now it's back to being normal. But it's the morning, which is confusing. Also, I am traveling this evening. I leave on a flight going to Europe. So I'm going to do a red-eye, that whole thing. So I got a lot to pack into today, literally packing being one of those things. And then this morning, because obviously, this is the way the world should play out, we started the day with an incident at Sagewell, a situation. Some code had gotten out there that was doing some stuff that we didn't want it to do. And so we had to sort of call in the dev team. And we all huddled together and tried to figure it out. Thankfully, it was a series of edge cases. It was sort of one of those perfect storms. So when this edge case happens in this context, then a bad thing could happen. Luckily, we were able to review the logs; nothing bad happened. While I'm unhappy that we had this situation play out... basically, it was a caching thing, just to throw that out there. Caching turns out to be very hard. And the particular way it played out could have manifested in behavior that would have been not good in our system, or an admin would have inadvertently done something that would have been incorrect. But on the positive side, we have an incident review process that we've been slowly incubating within the team. One of our team members introduced it to us, and then we've been using it on a few different cases. And it's really great to just have a structured process. I think it's one of those things that will grow over time. It's a very simple; what's the timeline of what happened? What's the story as to why it happened and why it wasn't caught earlier? What are the actions that we're going to take? And then what's the appendix? What's the data that we have around it? And so it's really great to just have that structure to work within. And then similarly, as far as I can tell, the first even observable instance of this behavior in our system was yesterday morning. We saw it, started to respond to it, saw one more. We were able to chase it down in the logs. 
Overall, the combination of the alerting that we have in Sentry and the way in which we respond to the alerting in Sentry, which I think is probably the most critical part. Datadog is our log metrics tool right now. So we're able to go through Datadog, and we have Lograge configured to add more detail to our log lines. And so we're able to see a very robust story of exactly what happened and ask the question, did anything actually bad happen? Or was it just possible that something bad could happen? And it turns out just possible. Nothing actually happened. We were able to determine that. We were even able to get a more detailed picture of who were all the users who potentially could have been impacted. Again, I don't think there was any impact. But all total, it was both a very stressful process, especially as I'm about to go on vacation. It's like, oh cool, start to the day where I'm trying to wrap up things, and instead, we're going to spend a couple of hours chasing down an incident. But that said, these things will happen. The way in which we were able to respond, the alerting and observability that we had in place make me feel good. STEPH: I like the incident structure that you just laid out. That sounds really nice in clarifying what happened when it happened in the logs. And the fact that you're able to go through and confirm if anything really bad happened or not is really nice. And I was also just debating this is one of those things, right? Right when you're about to go on vacation, that's when something's going to break. And that's like, is that good or bad? Is it good that I was here to take care of it right before, or is it bad? Because I'd really like to not be here to take care of it. [laughs] You may have mixed feelings. I have mixed feelings. CHRIS: I think I'm happy. Unsurprisingly, this exists in one of the most complex parts of our codebase. And it involves caching. And I remember when we introduced the caching, I looked at it, and I was like, hmm, we have a performance hotspot that involves us making a lot of requests to an external system. And so we thought about it a little bit, and we were like, well, if we do a little bit of caching here, we can actually reduce that down from seven calls down to one over external HTTP. And so okay, that seems to make sense. We had a pull request. We did a formal review. And even I looked at the pull request where this was introduced initially, and my comments on it were like, yep, this all looks good. Makes sense to me. But it's caching-related. So let's be very careful and look very closely at it and determine if there's anything, but it's so hard to know. And in fact, the code that actually was at play here was introduced a month ago. And interestingly, the observable side effect only occurred in the past two days, which we find very surprising. But again, it's this weird like, if A happens and then within a short period after that B happens...and so it's not quite a race condition. But it was something where a lot of stuff had to happen in a short span of time for this to actually manifest. And so, again, we were able to look through the logs and see all of the instances where it could have happened and then what did happen. Everything was fine, but yeah, it was interesting. I feel actually good to have seen it. And I think we've cleared everything up related to it and been very proactive in our response to it so that all feels good. 
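For context on the kind of caching involved here, collapsing repeated external lookups usually comes down to a fetch-with-expiry wrapped around the HTTP call. A minimal, hypothetical sketch, not Sagewell's actual code, with a made-up vendor client:

class VendorAccounts
  CACHE_TTL = 10.minutes

  def initialize(client: VendorClient.new) # hypothetical HTTP client wrapper
    @client = client
  end

  # Collapse repeated lookups for the same customer into one HTTP call per
  # TTL window. The hard part, as this incident shows, is making sure the
  # cache key captures everything the cached value actually depends on.
  def accounts_for(customer_id)
    Rails.cache.fetch(["vendor_accounts", customer_id], expires_in: CACHE_TTL) do
      @client.list_accounts(customer_id: customer_id)
    end
  end
end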
And also, this is the sort of thing we've done this a few times now where we've had what I would call lesser incidents. There was no customer-facing impact to this. Similarly, previous incidents, we've had no or very minimal customer-facing impact. So at one point, we had a situation where we weren't processing our background jobs for a little while. So we eventually caught up and did everything we needed to. It just meant that something may not have happened in as timely a fashion as necessary. But there were no deep ramifications to that. But in each of those cases, we've pushed ourselves to go through the incident process to make sure that we're building the muscle as a team to like, actually, when the bad one comes, we want to be ready. We want to have done a couple of fire drills first. And so partly, I viewed this as that because again, there was smoke, but no fire is how we would describe it. STEPH: Nice. And that also makes sense to me how you were saying y'all introduced this about a month ago, but you were just now seeing that observable side effect. I feel like that's also how it goes. Like, you implement, especially with caching, some performance improvement, and then you immediately see that. And it's like, yay, this is wonderful. And then it's not til sometime passes that then you get that perfect storm of user interactions that then trigger some flow that you didn't consider or realize that could create an issue with that caching behavior. So yeah, that resonates. That seems right. All caching problems usually take about a month or two when you've just forgotten about what you've done. And then you have to go back in. CHRIS: Yep. Yep, yep, yep. So now we've done the obvious thing, which is we've removed every cache from the system whatsoever. There are no caches anymore because it turns out we just can't be trusted with caches in any form whatsoever. ActiveRecord, we turned off caching, Redis we threw it out. No, I'm kidding. We still have lots of caching in the app. But, man, caching is so hard. STEPH: I would love if that's in the project README where it says, "We can't be trusted with caches. No caches allowed." [laughs] CHRIS: Yeah, we have not gone all the way to forbid caching within the application. It's a trade-off. But this does have that you get those scars over time. You have that incident that happens, and then forever you're like, no, no, no, we can't do X. And I feel like I'm just a collection of those. Again, I think we've talked about this in previous episodes. But consulting for as long as I did, I saw a lot of stuff. And a lot of it was not great. And so I basically just look at everything, and I'm like, urgh, no, this will be hard to maintain. This is going to go wrong. That's going to blow up someday. And so, I'm having to work on trying to be a little more positive in my development work. But I do like that I have that inclination to be very cautious, be very pessimistic, assume the worst. I think it leads to safer code in general. There was actually a tweet by Sarah Drasner that was really wonderful. And it's basically a conversation between her and another developer. It's a pretend conversation. But it's like, "But why don't you like higher-order components?" And then it's Squints. "Well, in the summer of 2018, something bad happened, Takes a long drag of a cigarette. something very bad." It's just written so well and captures the ethos just perfectly. Like, sit down. Let me tell you a tale of the time in 2018. 
[laughs] So I'll include a link to that in the show notes because she actually wrote it so well too. It's got like scene direction within a tweet and really fantastic stuff. But yeah, we'll allow some caching to continue within the app. STEPH: That's amazing. So I was just thinking where you're talking about being more pessimistic versus optimistic. And there's an interesting nuance there for me because there's a difference in like if someone's pessimistic where if someone just brings up an idea and someone's like, "Nope, like, that's just not going to work," and they just always shoot it down, that level of being pessimistic is too much. And it's just going to prevent the team from having a collaborative and experimental environment. But always asking the question of like, well, what's the worst that could happen? And what are the things that we should mitigate for? And what are the things that are probably so unlikely that we should just wait and see if that happens and then address it? That feels like a really nice balance. So it's not just leaning into saying no to everything. But sure, let's consider all the really bad things that could happen, make a plan for those, but still move forward with trying things out. And I realized I do this in my own life, like when someone asks me a question around if there's something that we want to do that's a bit kind of risky. And the first thing I always think of is like, well, what's the worst that could happen? And I think that has confused people that I immediately go there because they think that I'm immediately saying no to the idea. And so I have to explain like, no, no, no. I'm very intrigued, very interested. I just have to think through what's the worst that can happen. And if I'm okay with that, then I feel better about accepting it. But my emotional state, I have to think through what's the worst and then go from there. CHRIS: Wow, it's a very bottom-up approach for your life planning there. [chuckles] STEPH: Yep, I think that's, you know, it's from being a developer for so long. It has impacted now how I make other decisions. Good or bad? Who knows? Yeah, it turns out being a developer has leaked into my personal life. I've got leaky abstractions over here. So, good or bad? Who knows? CHRIS: Leaky abstractions all the way down. Yeah, circling back to, like, I don't think I'm pessimistic per se. The way that I see this playing out often is there will be a discussion of an architectural approach, or there's a PR that goes up. And my reaction isn't no, or this has a known failure point; it is more of uh, this makes me uncomfortable. And it's that like; I can't even say exactly why, and that's what makes it so difficult. And I think this is a place that can be really complicated for communication, particularly between developers who have been around for a little bit longer and have done this sort of thing and have gathered these battle scars and developers who are a bit newer. Having that conversation and being like, um, I can't say exactly why. I can tell you some weird stories. I might not even remember the stories. Some of it just feeds into just like, does this code make me uncomfortable? Or does this code make me happy? And I tend towards wildly explicit code for these reasons. I want to make it as clear as possible and match as close as possible to the words that we're saying because I know that the bugs hide in the weird corners of our code. So I try and have as few corners. 
Make very rounded rooms of code is a weird analogy that doesn't play, but here we go. That's what I do on this show is I make weird analogies. Actually, we were working on some code that was dealing with branching conditional things. So we had a record which has a boolean value on it. So we've got true or false, and then we've got two states, and then we've gotten an enum with three states. So all total, we have six possible states. But as we were going through this conversation, I was pairing with another developer on the team. And I was like, something feels weird here. And I actually invoked the name of Joël Quenneville because much of the data structure thought that I had here I associate with work that Joël has done around Maybe and things like that. And then also, my suggestion was let's build a truth table because that seems like a fun way to manage this and look at it and see what's true. Because I know that there are spots on this two-by-three grid that should never happen. So let's name that and then put that in the code. We couldn't quite get it to map into the data type, like into that Boolean in the enum. Because it's possible to get into those states, but we never should. And therefore, we should alert and handle that and understand, like, how did this even happen? This should never happen. And so we ended up taking what was a larger method body with some of the logic in it and collapsing it down to very explicitly enumerate the branches of the conditional and then feed out to a method. Like, call a method that had a very explicit name to say, okay, if it's true and we're in this enum state, then it's bad, alert bad. And then the other case like, handle the good case. And I was very happy with what we refactored down to because this is another one of those very complex parts of our code. Critical infrastructure-y is how I would describe it. And so, in my mind, it was worth the I'm going to go with pathological refactoring that we got to there. But yes, I was channeling Joël in that moment. I'm very happy to have had many conversations with him that help me think through these things. STEPH: That's awesome. Yeah, those truth tables can be so helpful. There's a particular article that, of course, Joël has written that then describes how a truth table works and ways that you can implement it into your habits. It's called Back to Basics: Boolean Expressions. I will be sure to include a link in the show notes. CHRIS: But yeah, I think that summarizes my day and probably the next couple of days as I prepare for an adventure over to Europe and chat about developer spidey sense. But yeah, what's new in your world? STEPH: Yeah, that's a big day. There's a lot going on. Well, I actually want to circle back because you mentioned that you're packing and you're going on this trip. And I'm curious, do you have any books queued up for vacation? CHRIS: I do, yeah. I'm currently reading Elantris by Brandon Sanderson. Folks might be aware of his work from the highest-funded Kickstarter of all time, which was absurd. Did you see this happen? STEPH: I don't think so, uh-uh. CHRIS: He did this fun, cheeky little Kickstarter. The video was sort of a fake around...oh, it almost sounded like he might be retiring or something like that. And then he's like, JK, I wrote five new books. And so the Kickstarter was for those books with different tiered packages and whatnot. I think he got just the right viral coefficient going on. 
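Going back to the truth-table refactor Chris walked through: with one boolean and a three-value enum there are six combinations, and Ruby's pattern matching makes it easy to enumerate all of them and alert loudly on the ones that should never occur. A hedged sketch with made-up state names and handler methods:

# verified: true/false; status: "pending", "active", or "closed" (hypothetical names)
def handle(verified:, status:)
  case [verified, status]
  in [true, "active"]              then grant_access
  in [true, "pending"]             then finish_activation
  in [false, "pending" | "closed"] then ignore_event
  in [false, "active"] | [true, "closed"]
    # Combinations we believe are unreachable; alert instead of guessing.
    raise "Impossible state: verified=#{verified}, status=#{status}"
  end
end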
And apologies for the spoiler if anyone's not seen the video, but it's been out there for a while. So he wrote some books, and that's what the Kickstarter is for. You get some books. You sort of join a book club, and you'll get one a quarter. A million dollars seems like that will be a bunch for that. That'd be great. If he raised a million dollars, that'd be amazing. $40 million four-zero million dollars. [laughs] I'm just watching it play out in real-time as well. It just skyrocketed up. The video, I think, was structured just right. He got it onto the...it was on Reddit and Twitter and just bouncing around, and people were sharing it. And just everything about it seemed to go perfectly. And yes, the highest-funded Kickstarter of all time, I believe, certainly within the publishing world. But yeah, Brandon Sanderson, prolific author, and his stuff ends up just being kind of light and fun. And so I was reading Elantris for that. It's been a little bit slower to pick up than I would like. So I'm now in the latter half. I'm hoping it'll go a little bit more quickly and be...I'm just kind of looking for a fun read, some fantasy thing to go on an adventure. But as the next book, I downloaded a second one just to make sure I'm covered. I have a book by John Scalzi, who's a sci-fi, fantasy, more on the sci-fi end of the spectrum. And I've read some of his other stuff and enjoyed it. And this particular book has a very consistent set of reviews. I've read the reviews a few times. And everybody who reviews it is just like, "This isn't the greatest book I've ever read, but man was it a fun ride." Or "Yeah, no, best book? No. Fun book? Yes." And just like, "This book was a fun ride. This was great." And I was like, perfect. That is exactly what I'm looking for on a European vacation. The book is called The Kaiju Preservation Society, which also plays on monsters, Pacific Rim Godzilla. Kaiju, I think, is the word for that category of giant dinosaur-like monster. And so it's the Kaiju Preservation Society, which, I don't know, means some stuff, and I'm going to go on a fun adventure. So yeah, those are my books. STEPH: Nice. I've got one that I'm reading right now. It's called Clementine: The Life of Mrs. Winston Churchill, written by Sonia Purnell. And Sonia Purnell tends to focus on female historical figures. And so it's historical fiction, which is a sweet spot for me. The only thing I'm debating on is because I'm realizing as I'm reading through it, I'm questioning, okay, well, what's real and what's not? Because I don't want to be that person that's like, did you know? And then, I quote this fictional fact about somebody that was made up for the novel. [laughs] So I'm realizing that maybe historical fiction is fun, but then I'm having to fact-check all the things because then I'm just curious. I'm like, oh, did this really happen, or how did it go down? So it's been pretty good so far. But then it makes me wish that historical fiction novels had at the back of them they're like, these are all the events that were real versus some of the stuff that we fictionalized or added a little flair to. I'm in that interesting space. I also like how you highlighted that you chose a fun book. I was having a conversation with a colleague recently about downtime. And like, do you consume more tech during downtime? Like, are you actively looking for technical blog posts or technical books to read or podcasts, things like that? And I was like, I don't. My downtime is for fun. 
Like, I want it to be all the things that are not tech. Maybe some tech sneaks in there here and there, but for the most part, I definitely prioritize stuff that's fun over more technical content in my spare time, which has taken me a little while to not feel guilty about. Earlier in my career, I definitely felt like I should be crunching technical content all the time. And now I'm just like, nope, this is a job. I'm very thankful that I really enjoy my job, but it's still a job. CHRIS: It is an interesting aspect of the world that we work in where that's even a question. In my previous life as a mechanical engineer, the idea that I would go home and read about mechanical engineering...I could attend a conference, but I would do that for very particular reasons and not because, like, oh, it's fun. I'll go meet my friends. For me, this was a big reason that I moved into tech because I am one of those folks who will, like, I will probably watch a video about Remix in particular because that's my new thing that I like to play around with and think about. But it needs to be a particular shape of thing I've found. It needs to be exploratory, puzzle-y. Fun code, reading, learning work that I do needs to be separated from my work-work in a certain way. Otherwise, then it feels like work, then it is sort of a drudgery. But yeah, my brain just seems to really like the puzzle of programming and trying to build things. And being able to come into a world where people share as much as they do blogs and conference talks and all of that is utterly fantastic. But it is a double-edged sword because I 100% agree that the ability to disconnect to, like, work a nine-to-five and then go home at the end of the day. Yeah, go home, you know, because you remember when we went to an office and then we would go home afterwards? I have to commute every once in a while into the city and -- STEPH: You mean go downstairs or go to another room? That's what you mean? [laughs] CHRIS: I used to commute every day, and it took a lot of time. And now when I do it, I feel that so viscerally because I'm like, it's just a lot easier to just walk to my office in my house. But yes, I 100% I'm aligned to that like, yeah, no, you're done with work for the day, walk away. That's that. And learning a new technology or things like that, that's part of the job. There shouldn't be the expectation that that just happens. There's continuing education in every other field. It's like, oh, we'll pay for your master's degree so you can go learn a thing. That's the norm in every other...not in every other industry but many, many, many industries. And yet the nature of our world the accessibility of it is one of the most wonderful things about it. But it can be a double-edged sword in that if there are the expectations that, oh yeah, and then, of course, you're going to go home and have side projects and be learning things. Like, no, that is an unreasonable expectation, and we got to cut that off. But then again, I do do that. So I'm saying two things at the same time, and that's always complicated. STEPH: But I agree with what you're saying because you're basically respecting both sides. If people enjoy this as a hobby, more power to you.; that's great. This is what you enjoy doing. If you don't want to do this as a hobby and respect it as a job, then that's also great too. There can be both sides, and no side should feel guilty or judged for whichever path that they pursue. 
And I absolutely agree, if there are new skills that you need to learn for a job, then there should be time that's carved out during your work hours that then you get to focus on those new skills. It shouldn't be an expectation that then you're going to work all day and then spend your evening hours learning something else. And same for interviews; there shouldn't be a field that says, "Hey, what are your side projects?" Or at least that should not be an important part of the interview. There should be an alternative to be like, "Or what work code do you want to talk about?" Or something else that's more in that nine-to-five window that you want to talk about. That way, there's a balance between like, sure, if you have something that you want to talk about on the side, great, but if not, then let's focus on something that you've done during your actual work hours because that's more realistic. CHRIS: I do think there's an interesting aspect at play because the world of development moves so rapidly and because it's constantly changing. And to frame it differently, I don't think we've got this thing figured out. And so many people lament how quickly it changes and that there's a new framework every other week. And there's a bit of churn that is perhaps unnecessary. But at the same time, I do not feel like as a community, as a working population, that we're like, yeah, got it, crushed it. We know how to make great software, no question about it. It's going to be awesome. We're going to be able to maintain it for forever, don't even worry about it. New feature? We can get that in there. They're actually still pretty rare. So we need to be learning, and evolving, and exploring new techniques. I think the amount of thinking is probably good mostly in the development world. But organizations have to make space for that with their teams. And thoughtbot obviously does that with investment days. That's just such a wonderful structure that embraces that reality and also brings happiness, and it's just a pleasant way to work. And frankly, my team does not have that right now. We do the crispy Brussels snack hour, which also now has a corresponding crispy Brussels work lunch, which is one week we think about it, and the next week we do the thing. We're trying to make space for that. But even that is still more intentional and purposeful and less exploratory and learning. And so it's an interesting trade-off. I deeply believe in this thing, and also, the team that I'm leading isn't doing it right now. Granted, we're an early-stage startup. We got to build a bunch of stuff. I think that's fine for right now. But it is a thing that...again, I'm saying two things at the same time, always fun. STEPH: Well, and there might be a nice incremental approach to this as well. So thoughtbot has the entire day, and maybe it's less than a full day. So perhaps it's just there's an hour or two hours or something like that where you start to introduce some of that self-improvement time and then blossom out from there. Because yeah, I understand that not all teams may feel like they have the space for that. But then I agree with everything else you said that it really does improve team morale and gives people a space to then be able to get to explore some of those questions that they had earlier. So then they don't feel like they have to then dedicate some weekend time or off hours' time to then look into a question. And I admit, I'm totally guilty too. 
I am that person that then I've worked extra hours, but it's because, like you said, if there's a puzzle that my brain is stuck on and I just feel the need to get through it. But then I look at that as am I doing this because I want to? Yes. Okay, then as long as I'm happy and I don't feel like this is increasing any concern around burnout, then I don't worry about it. MIDROLL AD: Debugging errors can be a developer's worst nightmare… but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! Circling back to your original question about what's going on in my world, and you mentioned scarring earlier. I feel sufficiently scarred [laughs] in regards to still working with moving fixtures over to FactoryBot. This week has really confirmed that fixtures don't trigger a lot of the callbacks, the model callbacks that exist. And so this really means that you can just create bad data that your application doesn't actually allow your application to create. So there are tests that are exercising behavior that should never exist. And then porting that over to FactoryBot then highlights that because then as soon as I move that record over and then try to create it or do something with it, then the app, the test, do the right thing and let me know saying no, no, no, we've added validations. You can't do that anymore. That has been grinding my gears in terms of trying to then translate. Because then I have to really dive into the code to understand it. And the goal here is to stay as high level as possible and not have to dive in too much. But then that means that I do have to dive in and understand more. So this has frankly just been one of those times in my career where you just kind of have to slog through the work. It's important work to be done. It'll be great once it's done. But it's a painful process. And the best way that I've found to make it more enjoyable is to be in heavy communication with Joël, who's on the project with me, just so if we get stuck on something, then we can chat with each other. And then also there's one file that's particularly gnarly. 
And so we moved over one test. We were successful, and which felt great because then we could at least document like, okay, when we come back to this, at least we have one example that highlights the wonkiness that we ran into. But we've decided, okay, we're done with that file. We're going to take a break. There's a lot there, but we're going to move on and give ourselves a break and do some of the easier ones, and then we'll circle back to the harder one. Which was, I think, just a bit of bad luck in terms of, like, as we're going down the list, that happened to be like the gnarliest one, and it was like the first one that Joël picked up. And so I'm going through a couple of files, and Joël is like, "What? [laughs] How are you making progress?" And we realize it's just because that file, in particular, is very hard to find all the mystery guests and then to move everything over. Finding a positive note through all of the cruft, I will say this is helping with some of my code sleuthing skills. So as I am running into these problems and then looking for mystery guests, I'm noticing ways that I can then, as quickly as possible, try to triage and identify as to why one test doesn't match another test. Some of it is more specific to the application setup, so it won't be as applicable to future projects. But then some other areas have been really helpful. Like, I'm using caller a lot more to understand, like, I know this is getting called, but I don't know who's calling you. So I can put in a line that basically outputs like, show me your stack traces to how you got here. So that's been really nice as well. So it has improved some of my code sleuthing skills and also my spidey sense in terms of it's typically mystery guests. Like when a test isn't passing, it's because fixtures are creating extra data that are getting pulled in when there are queries that are being run. But they're not explicitly referenced in the test setup itself. So that's typically then where I start is looking for what record looks relevant to this test that I haven't pulled over to my test setup. CHRIS: I appreciate you finding the silver lining, the positive bit of this. Because as you're describing, the work that you're doing sounds like I think you use the word slog, which seems like a very accurate term. But sometimes we have to do that sometimes for a variety of reasons. We end up either having to introduce new code or fix old code, but this is sometimes the work. And this is something that I think you and I share about this show is we get to show all sides of the work. And the work can be glamorous and new. And oh, I've got this greenfield app that I'm building, and it's wonderful. Look at the architecture. And I know in the moment that I'm building someone else's legacy code three years from now. [laughs] And so telling the other side of the story and providing that rounded point of view, because like, yeah, this is all part of it. Again, I don't believe that this is a solved problem, building robust software that we can maintain. And so yeah, you're doing the good work in there. And I thank you for sharing it with us. STEPH: Thanks. Just don't use fixtures in your test, I beg of you. Please don't do that to the legacy code that you're writing for future developers. [laughs] That is my one request. CHRIS: And I will maybe add on to that, sparingly use callbacks. Maybe don't use them at all, and certainly don't use the combination because, my goodness, that'll lead you into some fun times. 
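The caller trick Steph mentions above is plain Ruby: Kernel#caller returns the current call stack as an array of "file:line:in `method'" strings, so a throwaway print inside a suspect method or callback shows which fixture load, factory call, or test setup path led there. A small self-contained sketch follows, with names invented for illustration; the debug line is meant to be deleted once the mystery guest is found.

    # caller is built into Ruby (Kernel#caller); no requires needed.
    def mystery_guest_checkpoint
      puts "--- reached from: ---"
      puts caller.first(10) # the ten nearest frames; use plain caller for all of them
    end

    def some_test_setup
      mystery_guest_checkpoint
    end

    some_test_setup
    # Prints something like:
    # --- reached from: ---
    # example.rb:8:in `some_test_setup'
    # example.rb:11:in `<main>'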
But yeah, just two small recommendations there. STEPH: Oh, there's something else I wanted to share. I saw that Slack added a new audio feature that allows you to record the pronunciation of your name, which is the feature that I was so excited about when we added it to our internal tool called Hub at thoughtbot. And now Slack has it on their profile so that way you can upload the pronunciation. And then anyone looking at your profile can then listen to how to pronounce your name. There are a couple of other features that they released, I think just in June, so about a month ago from the recording of today. [laughs] That's weird to say, but here we are. So I'll include a link in the show notes so folks can see that feature in addition to others, but I'm super excited. CHRIS: Oh, that is nice. I also like all right, so Slack now has it. Hub now has it. But I don't have access to Hub anymore. And I don't have access to every Slack in the world yet. But here's my suggestion. All right, everybody, stick with me here. I want you to own a domain. I want you to have a personal site on it. And I want the personal site to include the pronunciation of your name. I get that that's a big ask. And I get that there are other platforms that are calling to you, and you may be writing on those. But you know what? Just stand up a little site, just a little place on the internet that you own. And if it includes the pronunciation of your name, I will be forever grateful. STEPH: I like this idea. I initially was taking your idea and immediately running with it as you were speaking it because then I wondered if everyone had their own YouTube channel. But I don't know how hard it is to create a YouTube channel. I am not a YouTube channeler, so I don't know what that looks like. [laughs] But not everybody will know how to purchase a domain. So that might be another approach. CHRIS: I think it's pretty easy to do a YouTube channel. I'm conflating a couple of things. This is my basket of beliefs about people on the internet, but I kind of think everybody should own their own little slice of the internet. And so totally, YouTube is a place where the people make some stuff, make videos, put them on YouTube, absolutely. But ideally, you own something. I see a lot of people that are on YouTube, and that's it, and so their entire audience lives on YouTube. And if YouTube someday decides to change or remove them or say Medium as an example, Medium actually, I think, does a more interesting version of this where your identity kind of gets subsumed into Medium. And I really think everybody should just have their own little, tiny slice of the internet that's there. It has their name that they own that no platform can decide; hey, we've shifted, and now your stuff is gone. Cool URIs don't change as they say, and that's what I want. And then yeah, if you can have the pronunciation of your name on there, that's extra nice. Although I say that, and I don't know that I would do it because my name feels very obvious. One day someone was like, "Oh, how do you pronounce your last name?" I forget if I actually replied with the pronunciation. Or if I was like, "I need to know what options you're considering. I'm so interested because I've really only got the one." Maybe I'm anchored. Maybe I'm biased. [chuckles] I've been doing this for a while. But I really cannot think of another pronunciation of my name. STEPH: You might hear another one that you really like, and you need to pivot. CHRIS: Oh gosh. 
STEPH: That's the point where you start pronouncing your name differently. CHRIS: Wow, that would be a lot. And then, I could have a change log on my personal site where people can see this is the pronunciation, and this is what the pronunciation used to be. STEPH: [laughs] I like this idea. I also like this idea that everybody has their own slice of internet land. I like this encouragement that you're providing for everyone. On a slightly different note, there's a blog post that I'm really excited to talk about. It's written by Eric Bailey, who's a former thoughtboter. It's called The Optics of Pair Programming. And given how much pair programming that I'm doing, especially with Joël on the current project, it was a really wonderful read. And it also helped me think about pairing from a different perspective because we do have a very strong pairing culture at thoughtbot. So there's a lot of nuance, especially social nuances that can go along with when you invite someone to pair with you that I had not considered until I read this wonderful post by Eric. And we'll be sure to include a link in the show notes. But to provide an overview, essentially, Eric shares that given coming from thoughtbot where we do have a very open approach to pairing where pairing sessions are voluntary and then also last as long as the problem will last...but then when you're at a new company, you could experience pushback if you're inviting someone to pair and then to consider why that pushback may exist. And some of the high-level areas that Eric highlighted are power dynamics, assessment, privacy, and learning styles. So to dive into each of some of those, there's a power dynamics of it's important to consider who's offering to pair. So if I've joined a team as a consultant, there may be a power dynamic there that someone is feeling where their team is paying for my time. So they may feel like they can't say no if I offered to pair. They feel like they need to say yes to the invitation, even if they don't really want to. Or probably a more classic example would be like, what if your boss wants to pair or someone that's just more senior than you? Then it could leave you feeling like, well, I can't say no to this person, can I? Which yes, you totally can say no to that person, but it may leave you in a place where you feel like you can't. And so, it puts you in this sort of uncomfortable and powerless position. The other one is assessment, so offering to pair with someone could feel like you are implying that you want to assess their skills or that you're implying that they're not up to the task and therefore they need your help. So then that could also place someone in an uncomfortable position. There's also privacy. So someone who isn't confident may not want someone to observe their behavior or observe how they're working. It could make them feel really anxious, which then I love that Eric points this out. Ironically, pairing is really good at addressing that lack of confidence because then you get to see how other people work through their problems or how they think, or they may also have some anxiety. Or it just helps you become more comfortable in talking and thinking through with other people. So that one is a tough one where it's hard to get over that initial hurdle. But actually, the more you pair, then the less anxious you'll feel when you pair. And then there's also learning styles because pairing really involves a lot of deep thinking but in our personal time. 
And it can be hard to balance both of those, and it's just not as effective for some people. So I know that even as much as I really enjoy pairing, I just need to sit with code on my own sometimes. I need to think about it. I need to run it; I need to look at it. So it's really nice to talk with someone. But then I also need that alone time to then just think through it on my own because I can't have that same deep focus if I'm also worried about how the other person is experiencing that session because then my mental energy is going towards them. So that covers a number of the social nuances that can be included or running through someone's mind when you extend an invitation to them to pair. And it really resonated with me the areas that Eric highlights in this blog post. He also talks about a couple of strategies, which I'd love to dive into as well. But I'm going to pause here and see what thoughts you have. CHRIS: Yeah, I love this post. And it got me thinking about pairing and the broader human backdrop of all of the processes and workflows that we have. Everything he highlighted about pairing feels true. Although similar to you and to Eric, I've worked in a context where pairing was a very natural, very regular part of the work and sort of from the very top-down. And so everyone pairing between developers of any different level or developers and designers or really anyone in the...it was just such a part of how we worked that no one really questioned it or at least not after the first couple of weeks. I imagine joining thoughtbot those first weeks; you're like, oh God. As I shared, I think in the previous episode that we recorded, my pairing interview was with Joe Ferris, the CTO of thoughtbot, [laughs] writing a book about good and bad code. And I was like, I don't know what anything is here but very quickly getting over that hurdle. And having that normalizing experience was actually really great, and then have been comfortable with it since. But the idea that there are so many different social dynamics at play feels true. And then as I think about other things, like stand-up is one that I think of as this very simple this is a way to communicate where we're at. And where necessary or where useful, allow people to interject or step in to say, "Oh, let me help you get unblocked there or whatever it is." But so often, I see stand-up being a ritual about demonstrating that you are, in fact, doing work, which is like, here's what I did yesterday. I don't know if it's useful. Then mention that you're working on this project. But the enumeration of look, obviously, work was done by me. You can see it; here are the receipts. It's very much this social dynamic at play. And retro is another one where like, if retro is very much owned by one voice and not a place that change actually happens where people feel safe airing their opinions or their concerns, then it's going to be a terrible experience. But if you can structure it and enforce that it is a space that we can have a conversation, that everyone's voice is welcome and that real change happens as a result of, then it's a magical tool for making sure we're doing the right things. But always behind these are the people, and feelings, and the psychology at play. And so this was just such an interesting post to read and ruminate on that a little bit more. STEPH: Yeah, I agree, especially with a comment that you made about those daily syncs where I really just want to focus on today and what you have that you're blocked on. 
So it's a really nice update in case there are any cross-collaboration opportunities. That's really what I'm looking for in a daily update. And so I appreciate when people don't go through a laundry list of what they did yesterday because it's like, that's great. But then, like you said, it's just like you're trying to prove here's what I've done, and I trust you; you're working. So just let me know what you're doing today, friend. So Eric does a wonderful job of also including some strategies for ways that then you can address some of these concerns and then how there may be some extra anxiety that's increased when you're inviting somebody to pair. There are some wonderful strategies. I'll let folks read through the blog post itself. There are a couple in particular that came to mind for me because I was then self-assessing how do I tend to approach pairing with someone? And some ways that I want them to feel very comfortable with that experience. And there's a couple. There's one where I recognize that I need to build trust with each person. I can't just go on to a team and expect everyone to know that I have good intentions and that I'm going to do my best to be a fun, helpful pairing partner, and that it's not a zone of judgment. And that has to be cultivated with each person. Because especially as a consultant, if I'm joining a team, the people who hired me are not necessarily the people that I'm working with. It's someone that's probably in leadership or management that has then brought on thoughtbot. And so then the people that I'm working with they don't know me, and they don't know what my pairing style is going to be. So looking for ways to build trust with each person and then also inviting them or asking for help myself. So there's a bit of vulnerability that has to be shown to build trust with someone to say," Hey, I'm stuck on a problem. I would love a second set of eyes. Would you be willing to help me out with this?" So then that way, they're coming in to help me initially versus I'm going in and saying, "Hey, can I help you?" I have found that to be an effective strategy. And there's one that I do really want to talk about, and that's not everyone is going to pair well together. Like, you may find someone who always leaves you feeling just stressed or demoralized. And while it's important to consider your role and why that's true, that does not mean it's your fault and necessarily your problem to fix. So similar to having to manage up, you may need to coach the person that you're pairing with in ways that help you feel comfortable pairing. But if they don't listen to your requests and implement any of that feedback, then just don't pair with that person. That is a very fine option to recognize people that are not receptive to your needs and, therefore, not someone that you need to then force into being a great pairing buddy. And I emphasize that last one because it took me a little while to become comfortable with that and accepting that it wasn't my fault that I wasn't having a great pairing session with people. Similar to when I'm learning from someone that if someone is explaining something to me and they're making me feel inadequate while they're explaining it to me, that's not necessarily my fault. Like, I used to internalize that as like, oh, I just can't get this. But I am now a very staunch believer in if you can't explain it to me in a way that I understand, then that's probably more on you than on me. 
And that has also taken me time to just really accept and embrace. But once you do, it is so freeing to realize that if someone's explaining a concept and you're still not getting it, it's like, hey, how can we try harder together versus you just making me try harder? CHRIS: I like that right there of like, if I don't understand this, it may actually be you, not me, or something to that effect. Let's get that on a bumper sticker and put that in The Bike Shed store so that everybody can buy it and put it on their cars or at least just us. But yeah, that starting from the bottom sometimes it's just not going to work great. There are even...I think what you're describing sounds a little more complicated, individuals who are personally not great at communicating or pairing or things like that. And that's going to happen. We're going to run into folks that...pairing is communication. That's just the core of it, and some folks, that may not be their strongest suit. But I think there's another category of just like different working styles. And whereas I might...judge is such a heavy word, but I'm going to use it. I might judge someone who is not doing a great job at communicating to someone else, or understanding their point of view, or striving to do that, or taking feedback. Like, those are not great things. Whereas there may just be two different development styles or backgrounds, or there are other reasons that actually they may be not an ideal fit. That said, I have definitely found that in almost every variation of pairing, I've seen work at some point. Like, when I was very early on in my career pairing with folks that are very senior, I didn't get most of it, but I got some stuff. And then folks that are very much on the same level or folks that have a deep knowledge in framework, code base language, whatever and folks that are new to it but have a different set of experiences. Basically, every version of that, I found that pairing is actually an incredibly powerful technique for knowledge sharing, for collaboration, for all of that. So although there are rare cases where there might be some misalignment, in general, I think pairing can work. I do think you hit on something earlier of there are certain folks that are more private thinkers, is how I would describe it, where thinking out loud is complicated for them. I'm very much someone who talks. That's how I figure out what I think is I say stuff. And I'm like, oh, I agree with what I just said. That's good. But I find I actually struggle. There's something I think of...maybe I'm just a loudmouth is what I'm hearing as I say it, but that is how I process things. Other folks, that is not true. Other folks, it's quite internal, and actually trying to vocalize that or trying to share the thought process as they're going may be uncomfortable. And I think that's perfectly reasonable and something that we should recognize and make space for. And so pairing should not be forced upon a team or an individual because there are just different mindsets, different ways of thinking that we need to account for. But again, the vast majority of cases...I've seen plenty of cases where it's someone's like, "I don't like to pair. That's not my thing." And it's actually that they've had bad experiences. And then when they find a space that feels safe or they see the pattern demonstrated in a way that is collegial, and useful, and friendly, then they're like, oh, actually, I thought I didn't like pairing. I thought I didn't like retro. 
I thought I didn't like stand-up. But actually, all of these things can be good. STEPH: Yeah, absolutely. It's a skill like anything else. You need to see value in it. And if you haven't seen value in it yet or if it's always made you anxious and uncomfortable, then it's something that you're going to avoid as much as possible until someone can provide a valuable, positive experience around how it can go. I'm going to pull back the curtains just a little bit on our recording and share because you've mentioned that you are very much you think out loud, and that's how you decide that you agree with yourself. And I think already at least twice while we've been recording this episode, I have started to say something, and I'm like, no, wait, I don't agree with that and have backed myself up. CHRIS: [laughs] STEPH: And I'm like, no, I just thought through it; I'm going to cancel it out, [laughs] and then moved in a different direction. So I, too, seem to be someone that I start to say things, and I'm like, oh, wait, I don't actually agree with what I just said [laughs], so let's remove that. CHRIS: Yep. You've described it as Michael Scott-ing on a handful of different episodes or maybe things that were cut from episodes. But where you start a sentence and then you're like, I don't know where I was going to end up there. I hoped I'd figure it out by the end, but then I did not get there. And yeah, I think we've all experienced that at various times. STEPH: That's some of my favorite advice from you is where you've been like, just lean into it, just see where it goes. Finish it out. We can always take it out later. [laughs] Because I stop myself because I immediately start editing what I'm trying to say and you're like, "No, no, just finish it, and then we'll see what happens." That's been fun. CHRIS: This is how you find out what you think. You say it out loud, and then you're like, never mind. That was ridic – STEPH: [laughs] CHRIS: I do. Actually, now I'm thinking back, and I have plenty of those where I'll say a thing, and I'm like, nope, never mind, send that one back. [chuckles] As an aside, so we do this thing where we host a podcast, and we get to talk. But we're both now describing the pattern where we'll start to say something, and we'll be like no, no, no, actually, not that. And I think, dear listeners out there, you probably don't hear any of this, the vast majority of it, because we have wonderful editors behind the scenes, Thom Obarski for many years, and now Mandy Moore, who's been with us for a while. And so once again, thank you so much to the editor team that allows us to, I think, again, feel safe in this conversation that we can say whatever feels true and then know that we'll be able to switch that around. So thank you so much to the editors who help us out and make us sound better than we are. STEPH: Yeah, that has made a big difference in my capabilities to podcast. If we were doing this live, ooh goodness, this might be a whole different, weird show. [laughs] CHRIS: I mean, the same is true for code, right? I deeply value the ability to make an absolute mess in my local editor and have nine different commits that eventually I throw two out. And then I revert that file, and then eventually, the PR that I put up that's my Instagram selfie. That's like, I carefully curated this, but what's behind the scenes it's just a pile of trash. So yeah, the ability to separate the creation and the editing that's a meaningful thing to have in life. STEPH: Oh, I can't unsee that now. 
[laughs] A pull request is now the equivalent of that curated Instagram selfie. That is beautiful. [laughs] CHRIS: To be clear, I don't think I've ever taken an Instagram selfie. But I get the idea, and I felt like it was an analogy that would work. Again, I try out analogies on this show, and many of them do not stick. But I think that one is all right. STEPH: It might even go back to pairing because then you've got help in taking that picture. So hey, you're making a mess with somebody until you get that right perfect thing, and then you push it up for the world to see. So safe spaces for all the activities, I think that's the takeaway. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Screaming in the Cloud
Incidents, Solutions, and ChatOps Integration with Chris Evans

Screaming in the Cloud

Play Episode Listen Later Jul 7, 2022 33:28


About Chris
Chris is the Co-founder and Chief Product Officer at incident.io, where they're building incident management products that people actually want to use. A software engineer by trade, Chris is no stranger to gnarly incidents, having participated (and caused!) them at everything from early stage startups through to enormous IT organizations.
Links Referenced:
incident.io: https://incident.io
Practical Guide to Incident Management: https://incident.io/guide/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: DoorDash had a problem. As their cloud-native environment scaled and developers delivered new features, their monitoring system kept breaking down. In an organization where data is used to make better decisions about technology and about the business, losing observability means the entire company loses their competitive edge. With Chronosphere, DoorDash is no longer losing visibility into their applications suite. The key? Chronosphere is an open-source compatible, scalable, and reliable observability solution that gives the observability lead at DoorDash business, confidence, and peace of mind. Read the full success story at snark.cloud/chronosphere. That's snark.cloud slash C-H-R-O-N-O-S-P-H-E-R-E.
Corey: Let's face it, on-call firefighting at 2am is stressful! So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster and sleep more.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest is Chris Evans, who's the CPO and co-founder of incident.io. Chris, first, thank you very much for joining me. And I'm going to start with an easy question—well, easy question, hard answer, I think—what is an incident.io exactly?
Chris: Incident.io is a software platform that helps entire organizations to respond to, recover from, and learn from incidents.
Corey: When you say incident, that means an awful lot of things. And depending on where you are in the ecosystem in the world, that means different things to different people. For example, oh, incident. Like, “Are you talking about the noodle incident because we had an agreement that we would never speak about that thing again,” style, versus folks who are steeped in DevOps or SRE culture, which is, of course, a fancy way to say those who are sad all the time, usually about computers. What is an incident in the context of what you folks do?
Chris: That, I think, is the killer question. I think if you look at organizations in the past, I think incidents were those things that happened once a quarter, maybe once a year, and they were the thing that brought the entirety of your site down because your big central database that was in a data center sort of disappeared. The way that modern companies run means that the definition has to be very, very different.
So, most places now rely on distributed systems and there is no, sort of, binary sense of up or down these days. And essentially, in the general case, like, most companies are continually in a sort of state of things being broken all of the time. And so, for us, when we look at what an incident is, it is essentially anything that takes you away from your planned work with a sense of urgency. And that's the sort of the pithy definition that we use there. Generally, that can mean anything—it means different things to different folks, and, like, when we talk to folks, we encourage them to think carefully about what that threshold is, but generally, for us at incident.io, that means basically a single error that is worthwhile investigating that you would stop doing your backlog work for is an incident. And also an entire app being down, that is an incident. So, there's quite a wide range there. But essentially, by sort of having more incidents and lowering that threshold, you suddenly have a heap of benefits, which I can go very deep into and talk for hours about.
Corey: It's a deceptively complex question. When I talk to folks about backups, one of the biggest problems in the world of backup and building a DR plan, it's not building the DR plan—though that's no picnic either—it's okay. In the time of cloud, all your planning figures out, okay. Suddenly the site is down, how do we fix it? There are different levels of down and that means different things to different people where, especially the way we build apps today, it's not is the service or site up or down, but with distributed systems, it's how down is it? And oh, we're seeing elevated error rates in us-tire-fire-1 region of AWS. At what point do we begin executing on our disaster plan? Because the worst answer, in some respects is, every time you think you see a problem, you start failing over to other regions and other providers and the rest, and three minutes in, you've irrevocably made the cutover and it's going to take 15 minutes to come back up. And oh, yeah, then your primary site comes back up because whoever unplugged something, plugged it back in and now you've made the wrong choice. Figuring out all the things around the incident, it's not what it once was. When you were running your own blog on a single web server and it's broken, it's pretty easy to say, “Is it up or is it down?” As you scale out, it seems like that gets more and more diffuse. But it feels to me that it's also less of a question of how the technology has scaled, but also how the culture and the people have scaled. When you're the only engineer somewhere, you pretty much have no choice but to have the entire state of your stack shoved into your head. When that becomes 15 or 20 different teams of people, in some cases, it feels like it's almost less of a technology problem than it is a problem of how you communicate and how you get people involved. And the issues in front of the people who are empowered and insightful in a certain area that needs fixing.
Chris: A hundred percent. This is, like, a really, really key point, which is that organizations themselves are very complex. And so, you've got this combination of systems getting more and more complicated, more and more sort of things going wrong and perpetually breaking but you've got very, very complicated information structures and communication throughout the whole organization to keep things up and running.
The very best orgs are the ones where they can engage the entire, sort of, every corner of the organization when things do go wrong. And lived and breathed this firsthand when various different previous companies, but most recently at Monzo—which is a bank here in the UK—when an incident happened there, like, one of our two physical data center locations went down, the bank wasn't offline. Everything was resilient to that, but that required an immediate response. And that meant that engineers were deployed to go and fix things. But it also meant the customer support folks might be required to get involved because we might be slightly slower processing payments. And it means that risk and compliance folks might need to get involved because they need to be reporting things to regulators. And the list goes on. There's, like, this need for a bunch of different people who almost certainly have never worked together or rarely worked together to come together, land in this sort of like empty space of this incident room or virtual incident room, and figure out how they're going to coordinate their response and get things back on track in the sort of most streamlined way and as quick as possible.
Corey: Yeah, when your bank is suddenly offline, that seems like a really inopportune time to be introduced to the database team. It's, “Oh, we have one of those. Wonderful. I feel like you folks are going to come in handy later today.” You want to have those pathways of communication open well in advance of these issues.
Chris: A hundred percent. And I think the thing that makes incidents unique is that fact. And I think the solution to that is this sort of consistent, level playing field that you can put everybody on. So, if everybody understands that the way that incidents are dealt with is consistent, we declare it like this, and under these conditions, these things happen. And, you know, if I flag this kind of level of impact, we have to pull in someone else to come and help make a decision. At the core of it, there's this weird kind of duality to incidents where they are both kind of semi-formulaic and that you can basically encode a lot of the processes that happen, but equally, they are incredibly chaotic and require a lot of human impact to be resilient and figure these things out because stuff that you have never seen happen before is happening and failing in ways that you never predicted. And so, this is where incident.io plays into this is that we try to take the first half of that off of your hands, which is, we will help you run your process so that all of the brain capacity you have, it goes on to the bit that humans are uniquely placed to be able to do, which is responding to these very, very chaotic, sort of, surprise events that have happened.
Corey: I feel as well—because I played around in this space a bit before I used to run ops teams—and, more or less I really should have had a t-shirt then that said, “I am the root cause,” because yeah, I basically did a lot of self-inflicted outages in various environments because it turns out, I'm not always the best with computers. Imagine that. There are a number of different companies that play in the space that look at some part of the incident lifecycle. And from the outside, first, they all look alike because it's, “Oh, so you're incident.io. I assume you're PagerDuty.
You're the thing that calls me at two in the morning to make sure I wake up.” Conversely, for folks who haven't worked deeply in that space, as well, of setting things on fire, what you do sounds like it's highly susceptible to the Hacker News problem. Where, “Wait, so what you do is effectively just getting people to coordinate and talk during an incident? Well, that doesn't sound hard. I could do that in a weekend.” And no, no, you can't. If this were easy, you would not have been in business as long as you have, have the team the size that you do, the customers that you do. But it's one of those things that until you've been in a very specific set of a problem, it doesn't sound like it's a real problem that needs solving.
Chris: Yeah, I think that's true. And I think that the Hacker News point is a particularly pertinent one and that someone else, sort of, in an adjacent area launched on Hacker News recently, and the amount of feedback they got around, you know, “You're a Slack bot. How is this a company?” was kind of staggering. And I think generally where that comes from is—well, first of all that bias that engineers have, which is just everything you look at as an engineer is like, “Yeah, I can build that in a weekend.” I think there's often infinite complexity under the hood that just gets kind of brushed over. But yeah, I think at the core of it, you probably could build a Slack bot in a weekend that creates a channel for you in Slack and allows you to post somewhere that some—
Corey: Oh, good. More channels in Slack. Just what everyone wants.
Chris: Well, there you go. I mean, that's a particular pertinent one because, like, our tool does do that. And one of the things—so I built at Monzo, a version of incident.io that we used at the company there, and that was something that I built evenings and weekends. And among the many, many things I never got around to building, archiving and cleaning up channels was one of the ones that was always on that list. And so, Monzo did have this problem of littered channels everywhere, I think that sort of like, part of the problem here is, like, it is easy to look at a product like ours and sort of assume it is this sort of friendly Slack bot that helps you orchestrate some very basic commands. And I think when you actually dig into the problems that organizations above a certain size have, they're not solved by Slack bots. They're solved by platforms that help you to encode your processes that otherwise have to live on a Google Doc somewhere which is five pages long and when it's 2 a.m. and everything's on fire, I guarantee you not a single person reads that Google Doc, so your process is as good as not in place at all. That's the beauty of a tool like ours. We have a powerful engine that helps you basically to encode that and take some load off of you.
Corey: To be clear, I'm also not coming at this from a position of judging other people. I just look right now at the Slack workspace that we have at The Duckbill Group, and we have something like a ten-to-one channel-to-human ratio. And the proliferation of channels is a very real thing. And the problem that I've seen across the board with other things that try to address incident management has always been fanciful at best about what really happens when something breaks. Like, you talk about, oh, here's what happens.
Step one: you will pull up the Google Doc, or you will pull up the wiki or the rest, or in some aspirational places, ah, something seems weird, I will go open a ticket in Jira. Meanwhile, here in reality, anyone who's ever worked in these environments knows that step one, “Oh shit, oh shit, oh shit, oh shit, oh shit. What are we going to do?” And all the practices and procedures that often exist, especially in orgs that aren't very practiced at these sorts of things, tend to fly out the window and people are going to do what they're going to do. So, any tool or any platform that winds up addressing that has to accept the reality of meeting people where they are, not trying to educate people into different patterns of behavior as such. One of the things I like about your approach is, yeah, it's going to be a lot of conversation in Slack, that is a given; we can pretend otherwise, but here in reality, that is how work gets communicated, particularly in extremis. And I really appreciate the fact that you are not trying to, like, fight what feels almost like a law of nature at this point.
Chris: Yeah, I think there's a few things in that. The first point around the document approach or the clearly defined steps of how an incident works. In my experience, those things have always gone wrong because—
Corey: The data center is down, so we're going to the wiki to follow our incident management procedure, which is in the data center just lost power.
Chris: Yeah.
Corey: There's a dependency problem there, too. [laugh].
Chris: Yeah, a hundred percent. [laugh]. A hundred percent. And I think part of the problem that I see there is that very, very often, you've got this situation where the people designing the process are not the people following the process. And so, there's this classic, I've heard it through John Allspaw, but it's a bunch of other folks who talk about the difference between people, you know, at the sharp end or the blunt end of the work. And I think the problem that people are facing the past is you have these people who sit in the, sort of, metaphorical upstairs of the office and think that they make a company safe by defining a process on paper. And they ship the piece of paper and go, “That is a good job for me done. I'm going to leave and know that I've made the bank—the other whatever your organization does—much, much safer.” And I think this is where things fall down because—
Corey: I want to ambush some of those people in their performance reviews with, “Cool. Just for fun, all the documentation here, we're going to pull up the analytics to see how often that stuff gets viewed. Oh, nobody ever sees it. Hmm.”
Chris: It's frustrating. It's frustrating because that never ever happens, clearly. But the point you made around, like, meeting people where you are, I think that is a huge one, which is incidents are founded on great communication. Like, as I said earlier, this is, like, a form of team with someone you've never ever worked with before and the last thing you want to do is be, like, “Hey, Corey, I've never met you before, but let's jump out onto this other platform somewhere that I've never been or haven't been for weeks and we'll try and figure stuff out over there.” It's like, no, you're going to be communicating—
Corey: We use Slack internally, but we have a WhatsApp chat that we wind up using for incident stuff, so go ahead and log into WhatsApp, which you haven't done in 18 months, and join the chat.
Yeah, in the dawn of time, in the mists of antiquity, you vaguely remember hearing something about that your first week and then never again. This stuff has to be practiced, and it's important to get it right. How do you approach the inherent and often unfortunate reality that incident response and management inherently becomes very different depending upon the specifics of your company or your culture or something like that? In other words, how cookie-cutter is what you have built versus adaptable to the different environments it finds itself operating in? Chris: Man, the amount of time we spent as a founding team in the early days deliberating over how opinionated we should be versus how flexible we should be was staggering. The way we like to describe it is that we are quite opinionated about how we think incidents should be run; however, we let you imprint your own process onto that. So, putting some color onto that: we expect incidents to have a lead. That is something you cannot get away from. However, you can call the lead whatever makes sense for you at your organization. So, some folks call them an incident commander or a manager or whatever else. Corey: There's an overwhelming militarization of these things. Like, oh, yes, we're going to wind up taking a bunch of terms from the military here. It's like, you realize that your entire giant screaming fire is that the lights on the screen are in the wrong pattern. You're trying to make them in the right pattern. No one dies here in most cases, so it feels a little grandiose for some of those terms being tossed around in some cases, but I get it. You've got to make something that is unpleasant and tedious in many respects a little bit more gripping. I don't envy people. Messaging is hard. Chris: Yeah, it is. And I think if you're overly purist and inflexible, you're sort of fighting an uphill battle here, right? So, folks are going to want to call things what they want to call things. And you've got people who want to import [ITIL 00:15:04] definitions for severities into the platform because that's what they're familiar with. That's fine. What we are opinionated about is that you have some severity levels because, absent academic criticism of severity levels, they are a useful mechanism to very coarsely and very quickly assess how bad something is and to take some actions off of it. So yeah, we basically have various points in the product where you can customize and put your own sort of flavor on it, but generally, we have a relatively opinionated end-to-end expectation of how you will run that process. Corey: The thing that I find annoys me—in some cases—the most is how heavyweight the process is, and it's clearly built by people in an ivory tower somewhere where there's effectively a two-day-long postmortem analysis of the incident, and so on and so forth. And okay, great. Your entire site has been blown off the internet, yeah, that probably makes sense. But as soon as you start broadening that to things like, okay, an increase in 500 errors on this service for 30 minutes, “Great. Well, we're going to have a two-day postmortem on that.” It's, “Yeah, sure would be nice if we could go two full days without having another incident of that caliber.” So, in other words, whose foot—are we going to hire a new team whose full-time job it is to just go ahead and triage and learn from all these incidents? 
Seems to me like that's sort of throwing wood behind the wrong arrows. Chris: Yeah, I think it's very reductive to suggest that learning only happens in a postmortem process. So, I wrote a blog, actually, not so long ago that is about running postmortems and when it makes sense to do it. And as part of that, I had a sort of a statement that was [laugh] that we hadn't run a single postmortem at incident.io when I wrote this blog. Which is probably shocking to many people because we're an incident company, and we talk about this stuff, but we were also a company of five people, and when something went wrong, the learning was happening and these things were sort of—we were carving out the time, whether it was called a postmortem or not, to learn and figure out these things. Extrapolating that to bigger companies, there is little value in following processes for the sake of following processes. And so, you could have— Corey: Someone in compliance just wound up spitting their coffee over their desktop as soon as you said that. But I hear you. Chris: Yeah. And it's those same folks who are the ones who care about the document being written, not the process and the learning happening. And I think that's deeply frustrating to me as— Corey: All the plans, of course, assume that people will prioritize the company over their own family for certain kinds of disasters. I love that, too. It's divorced from reality; that's ridiculous, on some level. Speaking of ridiculous things, as you continue to grow and scale, I imagine you integrate with things beyond just Slack. You grab other data sources and more in the fullness of time. For example, I imagine one of your most popular requests from some of your larger customers is to integrate with their HR system in order to figure out who's the last engineer who left, and therefore everything is immediately their fault, because lord knows the best practice is to pillory whoever left last, because then they're not there to defend themselves anymore and no one's going to get dinged for that irresponsible jackass's decisions, even if they never touched the system at all. I'm being slightly hyperbolic, but only slightly. Chris: Yeah. I think [laugh] that's an interesting point. I am definitely going to raise that feature request for a prefilled root cause category, which is, you know, the value is just that last person who left the organization. It's a wonderful scapegoat situation there. I like it. To the point around what we do integrate with, I think the thing that's actually quite interesting with incidents is there is a lot of tooling that exists in this space that does little pockets of useful, valuable things in the shape of incidents. So, you have PagerDuty, which is this system that does a great job of making people's phones make noise, but that happens, and then you're dropped into this sort of empty void of nothingness and you've got to go and figure out what to do. And then you've got things like Jira where clearly you want to be able to track actions that are coming out of things going wrong in some cases, and that's a great tool for that. And various other things in the middle there. And yeah, our value proposition, if you want to call it that, is to bring those things together in a way that is massively ergonomic during an incident. So, when you're in the middle of an incident, it is really handy to be able to go, “Oh, I have shipped this horrible fix to this thing. 
It works, but I must remember to undo that.” And we put that at your fingertips in an incident channel in Slack, so that you can just log that action, lose that cognitive load that would otherwise be there, move on with fixing the thing. And you have this sort of—I think it's, like, that multiplied by 1000 in incidents that is just what makes it feel delightful. And I cringe a little bit saying that because it's an incident at the end of the day, but genuinely, it feels magical when some things happen that are just like, “Oh, my gosh, you've automatically hooked into my GitHub thing and someone else merged that PR and you've posted that back into the channel for me so I know that that happens. That would otherwise have been a thing where I jump out of the incident to go and figure out what was happening.” Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL: on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you. Corey: The problem with the cloud, too, is that when there starts to be an incident happening, the number one decision—almost the number one decision point—is: is this my shitty code, something we have just pushed in our stuff, or is it the underlying provider itself? Which is why the AWS status page being slow to update is so maddening. Because those are two completely different paths to go down and you are having to pursue both of them equally at the same time until one can be ruled out. And that is why time to identify at least what side of the universe it's on is so important. That has always been a bit of a tricky challenge. I want to talk a bit about circular dependencies. You target a certain persona of customer, but I'm going to go out on a limb and assume that one explicit company that you are not going to want to do business with in your current iteration is Slack itself, because a tool to manage—okay, so our service is down, so we're going to go to Slack to fix it—doesn't work when the service is Slack itself. So, that becomes a significant challenge. As you look at this across the board, are you seeing customers having problems where you have circular dependency issues with this? Easy example: Slack is built on top of AWS. When there's an underlying degradation of, huh, suddenly us-east-1 is not doing what it's supposed to be doing, now Slack is degraded as well, as well as the customer's site, and it seems like at that point, you're sort of in a bit of a tricky position as a customer. Counterpoint, when neither Slack nor your site are working, figuring out what caused that issue doesn't seem like it's the biggest stretch of the imagination at that point. Chris: I've spent a lot of my career working in infrastructure, platform-type teams, and I think you can end up tying yourself in knots if you try and over-optimize for, like, avoiding these dependencies. 
I think it's one of those, sort of, turtles all the way down situations. So yes, Slack are unlikely to become a customer because they are clearly going to want to use our product when they are down. Corey: They reach out, “We'd like to be your customer.” Your response is, “Please don't be.” None of us are going to be happy with this outcome. Chris: Yeah, I mean, the interesting thing is that we're friends with some folks at Slack, and, believe it or not, they do use Slack to navigate their incidents. They have an internal tool that they have written. And I think this sort of speaks to the point we made earlier, which is that incidents and things failing are not these sort of big binary events. And so— Corey: “All of Slack is down” is not the only kind of incident that a company like Slack can experience. Chris: I'd go as far as to say it's most commonly not that. It's most commonly that you're navigating incidents where it is a degradation, or some edge case, or something else that's happened. And so, like, the pragmatic solution here is not to avoid the circular dependencies, in my view; it's to accept that they exist and make sure you have sensible escape hatches so that when something does go wrong—so a good example, we use incident.io at incident.io to manage incidents that we're having with incident.io. And 99% of the time, that is absolutely fine because we are having some error in some corner of the product or a particular customer is doing something that is a bit curious. And I could count literally on one hand the number of times that we have not been able to use our product to fix our product. And in those cases, we have a fallback, which is jump into— Corey: I assume you put a little thought into what happened. “Well, what if our product is down?” “Oh well, I guess we'll never be able to fix it or communicate about it.” It seems like that's the sort of thing that, given what you do, you might have put more than ten seconds of thought into. Chris: We've put a fair amount of thought into it. But at the end of the day, [laugh] it's like if stuff is down, like, what do you need to do? You need to communicate with people. So, jump on a Google Chat, jump on a Slack huddle, whatever else it is; we have various different, like, fallbacks in a defined order. And at the core of it, I think this is the thing: like, you cannot be prepared for every single thing going wrong, and so what you can be prepared for is to be unprepared and just accept that humans are incredibly good at being resilient, and therefore, all manner of things are going to happen that you've never seen before and I guarantee you will figure them out and fix them, basically. But yeah, I say this; if my SOC 2 auditor is listening, we also do have a very well-defined, like, backup plan in our SOC 2 [laugh] policies and processes that is the thing that we will follow. But yeah. Corey: The fact that you're saying the magic words of SOC 2, yes, exactly. Being a responsible adult and living up to some baseline compliance obligations is really the sign of a company that's put a little thought into these things. 
So, as I pull up incident.io—the website, not the company, to be clear—and look through what you've written and how you talk about what you're doing, you've avoided what I would almost certainly have not, because your tagline front and center on your landing page is, “Manage incidents at scale without leaving Slack.” If someone were to reach out and say, well, we're down all the time, but we're using Microsoft Teams, so I don't know that we can use you, like, the immediate instinctive response that I would have for that, to the point where I would put it in the copy, is, “Okay, this piece of advice is free. I would posit that you're down all the time because you're the kind of company to use Microsoft Teams.” But that doesn't tend to win a whole lot of friends in various places. In a slightly less sarcastic bent, do you see people reaching out with, “Well, we want to use you because we love what you're doing, but we don't use Slack.” Chris: Yeah. We do. A lot of folks, actually. And we will support Teams one day, I think. There is nothing especially unique about the product that means that we are tied to Slack. It is a great way to distribute our product and it sort of aligns with the companies that think in the way that we do in the general case, but, like, at the core of what we're building, it's a platform that augments a communication platform to make it much easier to deal with a high-stress, high-pressure situation. And so, in the future, we will support ways for you to connect Microsoft Teams or, if Zoom sorted out rich app experiences, talk on a Zoom and be able to do various things like logging actions and communicating with other systems and things like that. But yeah, for the time being, it's a very, very deliberate focus mechanism for us. We're a small company with, like, 30 people now, and so yeah, focusing on that sort of very slim vertical is working well for us. Corey: And it certainly seems to be working to your benefit. Every person I've talked to who has encountered you folks has nothing but good things to say. We have a bunch of folks in common listed on the wall of logos, the social proof eye chart thing of here's people who are using us. And these are serious companies. I mean, your last job before starting incident.io was at Monzo, as you mentioned. You know what you're doing in a regulated, serious sense. I would be, quite honestly, extraordinarily skeptical if your background were significantly different from this because, “Well, yeah, we worked at Twitter for Pets in our three-person SRE team, we can tell you exactly how to go ahead and handle your incidents.” Yeah, there's a certain level of operational maturity that—based upon the name of the company there—I just don't think Twitter for Pets is going to nail. Monzo is a bank. Guess you know what you're talking about, given that you have not, basically, been shut down by an army of regulators. It really does breed an awful lot of confidence. But what's interesting to me is the number of people that we talk to in common are not themselves banks. Some are and they do very serious things, but others are not these highly regulated, command-and-control, top-down companies. You are nimble enough that you can get embedded at those startup-iest of startup companies once they hit a certain point of scale and wind up helping them arrive at a better outcome. 
It's interesting in that you don't normally see a whole lot of tools that wind up being able to speak to both sides of that very broad spectrum—and most things in between—very effectively. But you've somehow managed to thread that needle. Good work. Chris: Thank you. Yeah. What else can I say other than thank you? I think, like, it's a deliberate product positioning that we've gone down to try and be able to support those different use cases. So, I think, at the core of it, we have always tried to maintain that incident.io should be installable and usable in your very first incident without you having to have a very steep learning curve, but there is depth behind it that allows you to support a much more sophisticated incident setup. So, like, I mean, you mentioned Monzo. Like, I just feel incredibly fortunate to have worked at that company. I joined back in 2017 when they were at, I don't know, like, 150,000 customers and it was just getting its banking license. And I was there for four years and was able to then see it scale up to 6 million customers and all of the challenges and pain that goes along with that, both from building infrastructure on the technical side of things and from an organizational side of things. And I had, like, a front-row seat to being able to work with some incredibly smart people and sort of see all these various different pain points. And honestly, it feels a little bit like being in sort of a cheat mode where we get to import a lot of that knowledge and pain that we felt at Monzo into the product. And that happens to resonate with a bunch of folks. So yeah, I feel like things are sort of coming out quite well at the moment for folks. Corey: The one thing I will say before we wind up calling this an episode is just how grateful I am that I don't have to think about things like this anymore. There's a reason that the problem I chose to work on—expensive AWS bills—is very much a business-hours-only style of problem. We're a services company. We don't have production infrastructure that is externally facing. “Oh, no, one of our data analysis tools isn't working internally.” That's an interesting curiosity, but it's not an emergency in the same way that, “Oh, we're an ad network and people aren't looking at ads right now because we're broken,” is. So, I am grateful that I don't have to think about these things anymore. And also a little wistful, because there's so much that you do that would have made dealing with expensive and dangerous outages back in my production years a lot nicer. Chris: Yep. I think that's what a lot of folks are telling us, essentially. There's this curious thing where, like, this product didn't exist however many years ago, and I think it's sort of been quite emergent in a lot of companies that, you know, as sort of things have moved on, something needs to exist in this little pocket of space, dealing with incidents in modern companies. So, I'm very pleased that what we're able to build here is sort of working and filling that for folks. Corey: Yeah. I really want to thank you for taking so much time to go through the ethos of what you do, why you do it, and how you do it. If people want to learn more, where's the best place for them to go? Ideally, not during an incident. Chris: Not during an incident, obviously. Handily, the website is the company name. So, incident.io is a great place to go and find out more. 
We've literally—literally just today, actually—launched our Practical Guide to Incident Management, which is, like, a really full piece of content which, hopefully, will be useful to a bunch of different folks. Corey: Excellent. We will, of course, put a link to that in the [show notes 00:29:52]. I really want to thank you for being so generous with your time. Really appreciate it. Chris: Thanks so much. It's been an absolute pleasure. Corey: Chris Evans, Chief Product Officer and co-founder of incident.io. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice along with an angry comment telling me why your latest incident is all the intern's fault. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.

The Bike Shed
344: Spinner Armageddon

The Bike Shed

Play Episode Listen Later Jun 28, 2022 38:50


Steph has an update and a question wrapped into one about the work that is being done to migrate the Test::Unit test over to RSpec. Chris got to do something exciting this week using dry-monads. Success or failure? This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Bartender (https://www.macbartender.com/) dry-rb - dry-monads v1.0 - Pattern matching (https://dry-rb.org/gems/dry-monads/1.0/pattern-matching/) alfred-workflows (https://github.com/tupleapp/alfred-workflows/blob/master/scripts/online_users.rb) Raycast (https://www.raycast.com/) ruby-science (https://github.com/thoughtbot/ruby-science) Inertia.js (https://inertiajs.com/) Remix (https://remix.run/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. STEPH: What type of bird is the strongest bird? CHRIS: I don't know. STEPH: A crane. [laughter] STEPH: You're welcome. And on that note, shall we wrap up? CHRIS: Let's wrap up. [laughter] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris, I saw a good movie I'd like to tell you about. It was just over the weekend. It's called The Duke, and it's based on a real story. I should ask, have you seen it? Have you heard of this movie called The Duke? CHRIS: I don't think so. STEPH: Okay, cool. It's a true story, and it's based on an individual named Kempton Bunton who then stole a particular portrait, a Goya portrait; if you know your artist, I do not. 
But he stole a Goya portrait and then essentially held at ransom because he was a big advocate that the BBC News channel should be free for people that are living on a pension or that are war veterans because then they're not able to afford that fee. But then, if you take the BBC channel away from them, it disconnects them from society. And it's a very good movie. I highly recommend it. So I really enjoyed watching that over the weekend. CHRIS: All right. Excellent recommendation. We will, of course, add that to the show notes mostly so that I can find it again later. STEPH: On a more technical note, I have a small update, or it's more of a question. It's an update and a question wrapped into one about the work that is being done to migrate the Test::Unit test over to RSpec. This has been quite a journey that Joël and I have been on for a while now. And we're making progress, but we're realizing that we're spending like 95% of our time in the test setup and porting that over, specifically because we're mapping fixture data over to FactoryBot, and we're just realizing that's really painful. It's taking up a lot of time to do that. And initially, when I realized we were just doing that, we hadn't even really talked about it, but we were moving it over to FactoryBot. I was like, oh, cool. We'll get to delete all these fixtures because there are around 208 files of them. And so that felt like a really good additional accomplishment to migrating the test over. But now that we realize how much time we're spending migrating the data over for that test setup, we've reevaluated, and I shared with Joël in the Slack channel. I was like, crap. I was like, I have a bad idea, and I can't not say it now because it's crossed my mind. And my bad idea was what if we stopped porting over fixtures to FactoryBot and then we just added the fixtures to a directory that RSpec would look so then we can rely on those fixtures? And then that way, we're literally then ideally just copying over from Test::Unit over to RSpec. But it does mean a couple of things. Well, one, it means that we're now running those fixtures at the beginning of RSpec test. We're introducing another pattern of where these tests are already using FactoryBot, but now they have fixtures at the top, and then we won't get to delete the fixtures. So we had a conversation around how to manage and mitigate some of those concerns. And we're still in that exploratory. We're going to test it out and see if this really speeds us up referencing the fixtures. The question that's wrapped up in this is there's something different between how fixtures generate data and how factories generate data. So I've run into this a couple of times now where I moved data over to just call a factory. But then I was hitting these callbacks or after-save-hooks or weird things that were then preventing me from creating the record, even though fixtures was creating them just fine. And then Joël pointed out today that he was running into something similar where there were private methods that were getting called. And there were all sorts of additional code that was getting run with factories versus fixtures. And I don't have an answer. Like, I haven't looked into this. And it's frankly intentional because I was trying hard to not dive into understanding the mechanics. We really want to get through this. But now I'm starting to ponder a little more as to what is different with fixtures and factories? And I liked that factories is running these callbacks; that feels correct. 
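(To make Steph's idea concrete, here is a minimal sketch of pointing rspec-rails at the existing fixture files instead of porting them to FactoryBot. The file paths, model, and fixture names are illustrative, not from the show, and newer rspec-rails versions spell the setting fixture_paths rather than fixture_path.)

    # spec/rails_helper.rb
    RSpec.configure do |config|
      # Tell rspec-rails where the old Test::Unit fixture files now live.
      config.fixture_path = Rails.root.join("spec/fixtures")
      config.use_transactional_fixtures = true
    end

    # spec/models/order_spec.rb
    require "rails_helper"

    RSpec.describe Order do
      fixtures :orders, :users # load only the fixture sets this spec needs

      it "keeps the ported Test::Unit assertion unchanged" do
        expect(orders(:pending).user).to eq(users(:admin))
      end
    end

With a setup like this, a ported spec can mix fixture lookups and FactoryBot calls in the same file, which is exactly the mixed pattern Steph goes on to flag.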
But I'm surprised that fixtures doesn't, or at least that's the experience that I'm having. So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know. CHRIS: Oh, that is interesting. I have not really worked with fixtures much at all. I've lived a factory life myself, and thus that's where almost all of my experience is. I'm not super surprised if this ends up being the case, like, the idea that fixtures are just some data that gets shoveled into the database directly as opposed to FactoryBot going through the model layer. And so it's sort of like that difference. But I don't know that for certain. That sounds like what this is and makes sense conceptually. But I think this is what you were saying like, that also kind of pushes me more in the direction of factories because it's like, oh, they're now representative. They're using our model layer, where we're defining certain truths. And I don't love callbacks as a mechanism. But if your app has them, then getting data that is representative is useful in tests. Like one of the things I add whenever I'm working with FactoryBot is the FactoryBot lint rake task RSpec thing that basically just says, "Are your factories valid?" which I think is a great baseline to have. Because you may add a migration that adds a default constraint or something like that to the database that suddenly all your factories are invalid, and it's breaking tests, but you don't know it. Like subtly, you change it, and it doesn't actually break a test, but then it's harder later. So that idea of just having more correctness baked in is always nice, especially when it can be automated like that, so definitely a fan of that. But yeah, interested if you do figure out the distinction. I do like your take, though, of like, but also, maybe I just won't figure this out. Maybe this isn't worth figuring it out. Although you were in the interesting spot of, you could just port the fixtures over and then be done and call the larger body of work done. But it's done in sort of a half-complete way, so it's an interesting trade-off space. I'm also interested to hear where you end up on that. STEPH: Yeah, it's a tough trade-off. It's one that we don't feel great about. But then it's also recognizing what's the true value of what we're trying to deliver? And it also comes down to the idea of churn versus complexity. And I feel like we are porting over existing complexity and even adding a smidge, not actual complexity but adding a smidge of indirection in terms that when someone sees this file, they're going to see a mixed-use of fixtures and factories, and that doesn't feel good. And so we've already talked about adding a giant comment above fixtures that just is very honest and says, "Hey, these were ported over. Please don't mimic this. But this is some legacy tests that we have brought over. And we haven't migrated the fixtures over to use factories." And then, in regards to the churn versus complexity, this code isn't likely to get touched like these tests. We really just need them to keep running and keep validating scenarios. But it's not likely that someone's going to come in here and really need to manage these anytime soon. At least, this is what I'm telling myself to make me feel better about it. 
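(A sketch of the FactoryBot lint task Chris mentions, closely following the convention from the factory_bot documentation; nothing here is specific to this project.)

    # lib/tasks/factory_bot.rake
    namespace :factory_bot do
      desc "Verify that every factory (and trait) builds a valid record"
      task lint: :environment do
        if Rails.env.test?
          ActiveRecord::Base.transaction do
            FactoryBot.lint traits: true # raises FactoryBot::InvalidFactoryError on failure
            raise ActiveRecord::Rollback # leave the test database clean
          end
        else
          system("bundle exec rake factory_bot:lint RAILS_ENV=test") || abort
        end
      end
    end

Run in CI, this catches the case Chris describes, where a new migration—say, a default or a NOT NULL constraint—silently invalidates a factory without any individual test failing.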
So there's also that idea of yes, we are porting this over. This is also how they already exist. So if someone did need to manage these tests, then going to Test::Unit, they would have the same experience that they're going to have in RSpec. So that's really the crux of it is that we're not improving that experience. We're just moving it over and then trying to communicate that; yes, we have muddied the waters a little bit by introducing this other pattern. So we're going to find a way to communicate why we've introduced this other pattern, but that way, we can stay focused on actually porting things over to RSpec. As for the factories versus fixtures, I feel like you're onto something in terms of it's just skipping that model layer. And that's why a lot of that functionality isn't getting run. And I do appreciate the accuracy of factories. I'd much rather know is my data representative of real data that can get created in the world? And right now, it feels like some of the fixtures aren't. Like, how they're getting created seemed to bypass really important checks and validations, and that is wrong. That's not what we want to have in our test is, where we're creating data that then the rest of the application can't truly create. But that's another problem for another day. So that's an update on a trade-off that we have made in regards to the testing journey that we are on. What's going on in your world? CHRIS: Well, we got to do something exciting this week. I was working on some code. This is using dry-monads, the dry-rb space. So we have these result objects that we use pretty pervasively throughout the app, and often, we're in a controller. We run one of these command objects. So it's create user, and create user actually encompasses a ton of logic in our app, and that object returns a result. So it's either a success or a failure. And if it's a success, it'll be a success with that new user wrapped up inside of it, or if it's a failure, it's a specific error message. Actually, different structured error messages in different ways, some that would be pushed to the form, some that would be a flash message. There are actually fun, different things that we do there. But in the controller, when we interact with those result objects, typically what we'll do is we'll say result equals create user dot run, (result=createuser.run) and then pass it whatever data it needs. And then on the next line, we'll say results dot either, (results.either), which is a method on these result objects. It's on both the success and failure so you can treat them the same. And then you pass what ends up being a lambda or a stabby proc, or I forget what they are. But one of those sort of inline function type things in Ruby that always feel kind of weird. But you pass one of those, and you actually pass two of them, one for the success case and one for the failure case. And so in the success case, we redirect back with a notice of congratulations, your user was created. Or, in the failure case, we potentially do a flash message of an alert, or we send the errors down, or whatever it ends up being. But it allows us to handle both of those cases. But it's always been syntactically terrible, is how I would describe it. It's, yeah, I'm just going to leave it at that. We are now living in a wonderful, new world. This has been something that I've wanted to try for a while. But I finally realized we're actually on Ruby 2.7, and so thus, we have access to pattern matching in Ruby. 
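(For readers who want to see the two shapes being contrasted here, a rough sketch follows; the command name, routes, and failure payloads are invented for illustration, and it assumes the controller does include Dry::Monads[:result].)

    # The .either style: pass one lambda for the Success case, one for the Failure case.
    def create
      result = CreateUser.run(user_params)

      result.either(
        ->(user)    { redirect_to user_path(user), notice: "Welcome aboard!" },
        ->(failure) { flash.now[:alert] = failure; render :new, status: :unprocessable_entity }
      )
    end

    # The Ruby 2.7 pattern-matching style, per the dry-monads pattern-matching docs:
    # each distinct failure destructures into exactly the data it carries.
    def create
      case CreateUser.run(user_params)
      in Success(user)
        redirect_to user_path(user), notice: "Welcome aboard!"
      in Failure[:invalid_params, errors]
        render :new, locals: { errors: errors }, status: :unprocessable_entity
      in Failure[:duplicate_email, message]
        flash.now[:alert] = message
        render :new, status: :unprocessable_entity
      end
    end

The second version is what makes the "one happy path, seven distinct failure modes" situation readable: each in Failure[...] clause handles one unhappy path with its own data.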
So I get to take it for a spin for the first time, realizing that we were already on the correct version. And in particular, dry-monads has a page in their docs specific to how we can take advantage of pattern matching with the result objects that they provide us. There's nothing specific in the library as far as I understand it. This is just them showing a bunch of examples of how one might want to do it if they're working with these result objects. But it's really great because it gives the ability to interact with, you know, success is typically going to be a singular case. There's one success branch to this whole logic, but there are like seven different ways it can fail. And that's the whole idea as to why we use these command objects and the whole Railway Oriented Programming and that whole thing which I have...what is this word? [laughs] I feel like I should know it. It's a positive rant. I have raved; that is how our users kindly pointed that out to us. I have raved about the Railway Oriented Programming that allows us to do. But it's that idea that they're actually, you know, there's one happy path, and there are seven distinct failure modes, seven unhappy paths. And now, using pattern matching, we actually get a really expressive, readable, useful way to destructure each of those distinct failures to work with the particular bits of data that we need. So it was a very happy day, and I got to explore it. This is, again, a feature of Ruby, not a feature of dry-monads. But dry-monads just happens to embrace it and work really well with it. So that was awesome. STEPH: That is awesome. I've seen one or two; I don't know, I've seen a couple of tweets where people are like, yeah, Ruby pattern matching. I haven't found a way to use it. So I'm excited that you just shared a way that you found to use it. I'm also worried what it says about our developer culture that we know the word rant so well, but rave, we always have to reach back into our memory to be like, what's that positive word or something that we like? [laughs] CHRIS: And especially here on The Bike Shed, where we try to gravitate towards the positive. But yeah, it's an interesting point that you make. STEPH: We're a bunch of ranters. It's what we do, pranting ranters. I don't know why we're pranting. [laughs] CHRIS: Because it's that exciting. That's what it is. Actually, there was an interesting thing as we were playing around with the pattern matching code, just poking around in the console session with it, and it prints out a deprecation warning. It's like, warning: this is an experimental feature. Do not use it, be careful. But in the back of my head, I was like, I actually know how this whole thing plays out, Ruby 2.7, and I assure you, it's going to be fine. I have been to the future, at least I'm pretty sure. I think the version that is in Ruby 2.7 did end up getting adopted basically as it stands. And so, I think there is also a setting to turn off that deprecation warning. I haven't done it yet, but I mostly just enjoyed the conversation that I had with this deprecation message of like, listen, I've been to the future, and it's great. Well, it's complicated, but specific to this pattern matching [laughs] in Ruby 3+ versions, it went awesome. And I'm really excited about that future that we now live in. STEPH: I wish we had that for so many more things in our life [laughs] of like, here's a warning, and it's like, no, no, I've seen the future. It's all right. 
Or you're totally right; I should avoid and back out of this now. CHRIS: If only we could know how the things would play out, you know. But yeah, so pattern matching, very cool. I'll include a link in the show notes to the particular page in the dry-monads docs. But there are also other cool things on the internet. In an unrelated but also cool thing that I found this week, we use Tuple a lot within our organization for pair programming. For anyone who's not familiar with it, it's a really wonderful piece of technology that allows you to pair program pretty seamlessly, better video quality, all of those nice things that we want. But I found there was just the tiniest bit of friction in starting a Tuple call. I know I want to pair with this person. And I have to go up and click on the little menu bar, and then I have to find their name, then I have to click a button. That's just too much. That's not how...I want to live my life at the keyboard. I have a thing called Bartender, which is a little menu bar manager utility app that will collapse down and hide the icons. But it's also got a nice, little hotkey accessible pop-up window that allows me to filter down and open one of the menu bar pop-out menus. But unfortunately, when that happens, the Tuple window isn't interactive at that point. I can't use the arrow keys to go up and down. And so I was like, oh, man, I wonder if there's like an Alfred workflow for this. And it turns out indeed there is actually managed by the kind folks at Tuple themselves. So I was able to find that, install it; it's great. I have it now. I can use that. So that was a nice little upgrade to my workflow. I can just type like TC space and then start typing out the person's name, and then hit enter, and it will start a call immediately. And it doesn't actually make me more productive, but it makes me happier. And some days, that's what matters. STEPH: That's always so impressive to me when that happens where you're like, oh, I need a thing. And then you went through the saga that you just went through. And then the people who manage the application have already gotten there ahead of you, and they're like, don't worry, we've created this for you. That's one of those just beautiful moments of like, wow, y'all have really thought this through on a bunch of different levels and got there before me. CHRIS: It's somewhat unsurprising in this case because it's a very developer-centric organization, and Ben's background being a thoughtbot developer and Alfred user, I'm almost certain. Although I've seen folks talking about Raycast, which is the new hotness on the quick launcher world. I started eons ago in Quicksilver, and then I moved to Alfred, I don't know, ten years ago. I don't know what time it is anymore. But I've been in Alfred land for a while, but Raycast seems very cool. Just as an aside, I have not allowed myself... [laughs] this is another one of those like; I do not have permission to go explore this new tool yet because I don't think it will actually make me more productive, although it could make me happier. So... STEPH: I haven't heard of that one, Raycast. I'm literally adding it to the show notes right now as a way so you can find The Duke later, and I can find Raycast later [chuckles] and take a look at it and check it out. Although I really haven't embraced the whole Alfred workflow. I've seen people really enjoy it and just rave about it and how wonderful it is. But I haven't really leaned into that part of the world; I don't know why. 
I haven't set any hard and fast rules for myself where I can't play around with these technologies, but I haven't taken the time to do it either. CHRIS: You've also not found yourself writing thousands of lines of Vimscript because you thought that was a good idea. So you don't need as many guardrails it would seem. That's my guess. STEPH: This is true. CHRIS: Whereas I need to be intentional [laughs] with how I structure my interaction with my dev tools. STEPH: Instead, I'm just porting over fixtures from one place to another. [laughs] That's the weird space that I'm living in instead. [laughs] CHRIS: But you're getting paid for that. No one paid me for the Vimscript I wrote. [laughter] STEPH: That's fair. Speaking around process-y things, there's something that's been on my mind that Valeria, another thoughtboter, suggested around how we structure our meetings and the default timing that we have for meetings. So Thursdays are my team-focused day. And it's the day where I have a lot of one on ones. And I realized that I've scheduled them back to back, which is problematic because then I have zero break in between them, which I'm less concerned about that because then I can go for an hour or something and not have a break. And I'm not worried about that part. But it does mean that if one of those discussions happens to go over just even for like two or three minutes, then it means that someone else is waiting for me in those two to three minutes. And that feels unacceptable to me. So Valeria brought up a really good idea where I think it's only with the Google Meet paid version. I could be wrong there. But I think with the paid version of it that then you can set the new default for how long a meeting is going to last. So instead of having it default to 30 minutes, have it default to 25 minutes. So then, that way, you do have that five-minute buffer. So if you do go over just like two or three minutes with someone, you've still got like two minutes to then hop to the next call, and nobody's waiting for you. Or if you want those five minutes to then grab some water or something like that. So we haven't implemented it just yet because then there's discussion around is this a new practice that we want everybody to move to? Because I mean, if just one person does it, it doesn't work. You really need everybody to buy into the concept of we're now defaulting to 25 versus 30-minute meetings. So I'll have to let you know how that goes. But I'm intrigued to try it out because I think that would be very helpful for me. Although there's a part of me that then feels bad because it's like, well, if I have 30 minutes to chat with somebody, but now I'm reducing it to 25 minutes each time, I didn't love that I'm taking time away from our discussion. But that still feels like a better outcome than making somebody wait for three to five minutes if something else goes over. So have you ever run into something like that? How do you manage back-to-back meetings? Do you intentionally schedule a break in between or? CHRIS: I do try to give myself some buffer time. I stack meetings but not so much so that they're just back to back. So I'll stack them like Wednesdays are a meeting-heavy day for me. That's intentional just to be like, all right, I know that my day is going to get chopped up. So let's just really lean into that, chop the heck out of Wednesday afternoons, and then the rest of the week can hopefully have slightly longer deep work-type sessions. 
And, yeah, in general, I try and have like a little gap in between them. But often what I'll do for that is I'll stagger the start of the next meeting to be rather than on the hour or the half-hour, I start it on the 15th minute. And so then it's sort of I now have these little 15-minute gaps in my workflow, which is enough time to do one or two small things or to go get a drink or whatever it is or if things do run over. Like, again, I feel what you're saying of like, I don't necessarily want to constrain a meeting. Or I also don't necessarily want to go into the habit of often over-running. I think it's good to be intentional. Start meetings on time, end meetings on time. If there's a great conversation that's happening, maybe there's another follow-up meeting that should happen or something like that. But for as nonsensical of a human as I believe myself to be, I am rather rigid about meetings. I try very hard to be on time. I try very hard to wrap them up on time to make sure I go to the next one. And so with that, the 15-minute staggering is what I've found works for me. STEPH: Yeah, that makes sense. One-on-ones feels special to me because I wholeheartedly agree with being very diligent about like, hey, this is our meeting time. Let's do a time check. Someone says that at the end, and then that way, everybody can move on. But one on ones are, there's more open discussion space, and I hate cutting people off, especially because it might not be until the last 15 minutes that you really got into the meat of the conversation. Or you really got somewhere that's a little bit more personal or things that you want to talk about. So if someone's like, "Yeah, let me tell you about my life goals," and you're like, "Oh, no, wait, sorry. We're out of time." That feels terrible and tragic to do. So I struggle with that part of it. CHRIS: I will say actually, on that note, I'm now thinking through, but I believe this to be true. Everyone that reports to me I have a 45-minute one-on-one with, and then my CEO I set up the one-on-one. So I also made that one a 45-minute one-on-one. And that has worked out really well. Typically, I try and structure it and reiterate this from time to time of, like, hey, this is your space, not mine. So let's have whatever conversation fits in here. And it's fine if we don't need to use the whole time, but I want to make sure that we have it and that we protect it. Because I often find much like retro, I don't know; I think everything's fine. And then suddenly the conversation starts, and you're like, you know what? Actually, I'm really concerned now that you mentioned it. And you need that sort of empty space that then the reality sort of pop up into. And so with one on one, I try and make sure that there is that space, but I'm fine with being like, we can cut this short. We can move on from one-on-one topics to more of status updates; let's talk about the work. But I want to make sure that we lead with is there anything deeper, any concerns, anything you want to talk through? And sort of having the space and time for that. STEPH: I like that. And I also think it speaks more directly to the problem I'm having because I'm saying that we keep running over a couple of minutes, and so someone else is waiting. So rather than shorten it, which is where I'm already feeling some pain...although I still think that's a good idea to have a default of 25-minute meetings so then that way, there is a break versus the full 30. 
So if people want to have back-to-back meetings, they still have a little bit of time in between. But for one on ones specifically, upping it to 45 minutes feels nice because then you've got that 15-minute buffer likely. I mean, maybe you schedule a meeting, but, I don't know, that's funky. But likely, you've got a 15-minute buffer until your next one. And then that's also an area that I feel comfortable in sharing with folks and saying, "Hey, I've booked this whole 45 minutes. But if we don't need the whole time, that's fine." I'm comfortable saying, "Hey, we can end early, and you can get more of your time back to focus on some other areas." It's more the cutting someone off when they're talking because I have to hop to the next thing. I absolutely hate that feeling. So thanks, I think I'll give that a go. I think I'll try actually bumping it up to 45 minutes, presuming that other people like that strategy too, since they're opting in [laughs] to the 45 minutes structure. But that sounds like a nice solution. CHRIS: Well yeah, happy to share it. Actually, one interesting thing that I'm realizing, having been a manager at thoughtbot and then now being a manager within Sagewell, the nature of the interactions are very different. With thoughtbot, I was often on other projects. I was not working with my team day to day in any real capacity. So it was once every two weeks, I would have this moment to reconnect with them. And there was some amount of just catching up. Ideally, not like status update, low-level sort of thing, but sort of just like hey, what have you been working on? What have you been struggling with? What have you been enjoying? There was more like I needed bigger space, I would say for that, or it's not surprising to me that you're bumping into 30 minutes not being quite long enough. Whereas regularly, in the one on ones that I have now, we end up cutting them short or shifting out of true one-on-one mode into more general conversation and chatting about Raycast or other tools or whatever it is because we are working together daily. And we're pairing very regularly, and we're all on the same project and all sorts of in sync and know what's going on. And we're having retro together. We have plenty of places to have the conversation. So the one-on-one again, still, I keep the same cadence and the same time structure just because I want to make sure we have the space for any day that we really need that. But in general, we don't. Whereas when I was at thoughtbot, it was all the more necessary. And I think for folks listening; I could imagine if you're in a team lead position and if you're working very closely with folks, then you may be on the one side of things versus if you're a little bit more at a distance from the work that they're doing day to day. That's probably an interesting question to ask, and think about how you want to structure it. STEPH: Yeah, I think that's an excellent point. Because you're right; I don't see these individuals. We may not have really gotten to interact, except for our daily syncs outside of that. So then yeah, there's always like a good first 10 minutes of where we're just chatting about life and catching up on how things are going before then we dive into some other things. So I think that's a really good point. Cool, solving management problems on the mic. I dig it. In slightly different news, I've joined a book club, which I'm excited about. This book club is about Ruby. 
It's specifically reading the book Ruby Science, which is a book that was written and published by thoughtbot. And it requires zero homework, which is my favorite type of book club. Because I have found I always want to be part of book clubs. I'm always interested in them, but then I'm not great at budgeting the time to make sure I read everything I'm supposed to read. And so then it comes time for folks to get together. And I'm like, well, I didn't do my homework, so I can't join it. But for this one, it's being led by Joël, and the goal is that you don't have to do the homework. And they're just really short sections. So whoever's in charge of leading that particular session of the book club they're going to provide an overview of what's covered in whatever the reading material that we're supposed to read, whatever topic we're covering that day. They're going to provide an overview of it, an example of it, so then we can all talk about it together. So if you read it, that's wonderful. You're a bit ahead and could even join the meeting like five minutes late. Or, if you haven't read it, then you could join and then get that update. So I'm very excited about it. And this was one of those books that I'd forgotten that thoughtbot had written, and it's one that I've never read. And it's public for anybody that's interested in it. So to cover a little bit of details about it, so it talks about code smells, ways to refactor code, and then also common patterns that you can use to solve some issues. So there's a lot of really just great content that's in it. And I'll be sure to include a link in the show notes for anyone else that's interested. CHRIS: And again, to reiterate, this book is free at this point. Previously, in the past, it was available for purchase. But at one point a number of years ago, thoughtbot set all of the books free. And so now that along with a handful of other books like...what's Edward's DNS book? Domain Name Sanity, I believe, is Edward's book name that Edward Loveall wrote when he was not a thoughtboter, [laughs] and then later joined as a thoughtboter, and then we made the book free. But on the specific topic of Ruby Science, that is a book that I will never forget. And the reason I will never forget it is that book was written by the one and only CTO Joe Ferris, who is an incredibly talented developer. And when I was interviewing with thoughtbot, I got down to the final day, which is a pairing session. You do a morning pairing session with one thoughtbot developer, and you do an afternoon pairing session with another thoughtbot developer. So in the morning, I was working with someone on actually a patch to Rails which was pretty cool. I'd never really done that, so that was exciting. And that went fine with the exception that I kept turning on Caps Lock on their keyboard because I was used to Caps Lock being CTRL, and then Vim was going real weird for me. But otherwise, that went really well. But then, in the afternoon, I was paired with the one and only CTO Joe Ferris, who was writing the book Ruby Science at that time. And the nature of the book is like, here's a code sample, and then here's that code sample improved, just a lot of sort of side-by-side comparisons of code. And I forget the exact way that this went, but I just remember being terrified because Joe would put some code up on the screen and be like, "What do you think?" And I was like, oh, is this the good code or the bad code? I feel like I should know. I do not know. I'm not sure. 
It worked out fine, I guess. I made it through. But I just remember being so terrified at that point. I was just like, oh no, this is how it ends for me. It's been a good run. STEPH: [laughs] CHRIS: I made it this far. I would have loved to work for this nice thoughtbot company, but here we are. But yeah, I made it through. [laughs] STEPH: There are so many layers to that too where it's like, well if I say it's terrible, are you going to be offended? Like, how's this going to go for me if I speak my truths? Or what am I going to miss? Yeah, that seems very interesting (I kind of like it.) but also a terrifying pairing session. CHRIS: I think it went well because I think the code...I'd been following thoughtbot's work, and I knew who Joe was and had heard him on podcasts and things. And I kind of knew roughly where things were, and I was like, that code looks messy. And so I think I mostly got it right, but just the openness of the question of like, what do you think? I was like, oh God. [laughs] So yeah, that book will always be in my memories, is how I would describe it. STEPH: Well, I'm glad it worked out so we could be here today recording a podcast together. [laughs] CHRIS: Recording a podcast together. Now that I say all that, though, it's been a long time since I've read the book. So maybe I'll take a revisit. And definitely interested to hear more about your book club and how that goes. But shifting ever so slightly (I don't have a lot to say on this topic.) but there's a new framework technology thing out there that has caught my attention. And this hasn't happened for a while, so it's kind of novel for me. So I tend to try and keep my eye on where is the sort of trend of web development going? And I found Inertia a while ago, and I've been very, very happy with that as sort of this is the default answer as to how I build websites. To be clear, Inertia is still the answer as to how I build websites. I love Inertia. I love what it represents. But I'm seeing some stuff that's really interesting that is different. Specifically, Remix.run is the thing that I'm seeing. I mentioned it, I think, in the last episode talking about there was some stuff that they were doing with data loading and async versus synchronous, and do you wait on it or? They had built some really nice levers and trade-offs into the framework. And there's a really great talk that Ryan Florence, one of the creators of Remix.run, gave about that and showed what they were building. I've been exploring it a little bit more in-depth now. And there is some really, really interesting stuff in Remix. In particular, it's a meta-framework, I think, is the nonsense phrase that we use to describe it. But it's built on top of React. That won't be true for forever. I think it's actually they would say it's more built on top of React Router. But it is very similar to Next.js for folks that have seen that. But it's got a little bit more thought around data loading. How do we change data? How do we revalidate data after? There's a ton of stuff that, having worked in many React client-side API-heavy apps that there's so much pain, cache invalidation. How do you think about the cache? When do you fetch from the network? How do you avoid showing 19 different loading spinners on the page? And Remix as a framework has some really, I think, robust and well-thought-out answers to a lot of that. So I am super-duper intrigued by what they're doing over there. There's a particular video that I think shows off what Remix represents really well. 
It's Ryan Florence, that same individual, the creator of Remix, building just a newsletter signup page. But he goes through like, let's start from the bare bones, simplest thing. It's just an input, and a form submits to the server. That's it. And so we're starting from web 2.0, long, long ago, sort of ideas, and then he gradually enhances it with animations and transitions and error states. And even at the end, goes through an accessibility audit using the screen reader to say, "Look, Remix helps you get really close because you're just using web fundamentals." But then goes a couple of steps further and actually makes it work really, really well for a screen reader. And, yeah, overall, I'm just super impressed by the project, really, really intrigued by the work that they're doing. And frankly, I see a couple of different projects that are sort of in this space. So yeah, again, very early but excited. STEPH: On their website...I'm checking it out as you're walking me through it, and on their website, they have "Say goodbye to Spinnageddon." And that's very cute. [laughs] CHRIS: There's some fundamental stuff that I think we've just kind of as a web community, we made some trade-offs that I personally really don't like. And that idea of just spinners everywhere just sending down a ball of application logic and a giant JavaScript file turning it on on someone's computer. And then immediately, it has to fetch back to the server. There are just trade-offs there that are not great. I love that Remix is sort of flipping that around. I will say, just to sort of couch the excitement that I'm expressing right now, that Remix exists in a certain place. It helps with building complex UIs. But it doesn't have anything in the data layer. So you have to bring your own data layer and figure out what that means. We have ActiveRecord within Rails, and it's deeply integrated. And so you would need to bring a Prisma or some other database connection or whatever it is. And it also doesn't have more sort of full-featured framework things. Like with Rails, it's very easy to get started with a background job system. Remix has no answer to that because they're like, no, no, this is what we're doing over here. But similarly, security is probably the one that concerns me the most. There's an open conversation in their discussion portal about CSRF protection and a back and forth of whether or not Remix should have that out of the box or not. And there are trade-offs because there are different adapters that you can use for auth. And each would require their own CSRF mitigation. But to me, that is the sort of thing that I would want a framework to have. Or I'd be interested in a framework that continues to build on top of Remix that adds in background jobs and databases and all that kind of stuff as a complete solution, something more akin to a Rails or a Laravel where it's like, here we go. This is everything. But again, having some of these more advanced concepts and patterns to build really, really delightful UIs without having to change out the fundamental way that you're building things. STEPH: Interesting. Yeah, I think you've answered a couple of questions that I had about it. I am curious as to how it fits into your current tech stack. So you've mentioned that you're excited and that it's helpful. But given that you already have Rails, and Inertia, and Svelte, does it plug and play with the other libraries or the other frameworks that you have? 
Are you going to have to replace something to then take advantage of Remix? What does that roadmap look like? CHRIS: Oh yeah, I don't expect to be using Remix anytime soon. I'm just keeping an eye on it. I think it would be a pretty fundamental shift because it ends up being the server layer. So it would replace Rails. It would replace the Inertia within the stack that I'm using. This is why as I started, I was like, Inertia is still my answer. Because Inertia integrates really well with Rails and allows me to do the sort of it's not progressive enhancement, but it's like, I want fancy UI, and I don't want to give up on Rails. And so, Inertia is a great answer for that. Remix does not quite fit in the same way. Remix will own all of the request-response lifecycle. And so, if I were to use it, I would need to build out the rest of that myself. So I would need to figure out the data layer. I would need to figure out other things. I wouldn't be using Rails. I'm sure there's a way to shoehorn the technologies together, but I think it sort of architecturally would be misaligned. And so my sense is that folks out there are building...they're sort of piecing together parts of the stack to fill out the rest. And Remix is a really fantastic controller and view, from there on down: the experience and the routing layer. So it's routing, controller, view I would say Remix has a really great answer to, but it doesn't have as much of the other stuff. Whereas in my case, Inertia and Rails come together and give me a great answer to the whole story. STEPH: Got it. Okay, that's super helpful. CHRIS: But yeah, again, I'm in very much the exploratory phase. I'm super intrigued by a lot of what I've seen of it and also just sort of the mindset, the ethos of the project as it were. That sounds fancy as I say it, but it's what I mean. I think they want to build from web fundamentals and then enhance the experience on top of that, and I think that's a really great way to go. It means that links will work. It means that routing and URLs will work by default. It means that you won't have loading spinner Armageddon, and these are core fundamentals that I believe make for good websites and web applications. So super interested to see where they go with it. But again, for me, I'm still very much in the Rails Inertia camp. Certainly, I mean, I've built Sagewell on top of it, so I'm going to be hanging out with it for a while, but also, it would still be my answer if I were starting something new right now. I'm just really intrigued by there's a new example out there in the world, this Remix thing that's pushing the envelope in a way that I think is really great. But with that, my now…what was that? My second or my third rave? Also called the positive rant, as we call it. But yeah, I think on that note, what do you think? Should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
343: Opt-In To Oversharing

The Bike Shed

Play Episode Listen Later Jun 21, 2022 30:31


Chris is weathering through a slight lull, a holding period, where his team waits for new features to become available with some of the platforms they integrate with, and as they think out new facets of the platform they're building. Steph has been thinking recently about working in isolation. It's a topic that Joël Quenneville pointed out to her and mentioned. Can engineers work in isolation and be successful? Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Always be singing. STEPH: I can't remember if I've shared the story with you. But I had a beautiful little human moment with someone at airport security. Because when I travel with my mic, I always get stopped because there's the middle long, thin piece that looks like what you would screw on to a gun like for a silencer. And so [laughs] as he was going through, the person was looking at it, and then he called over a buddy. And then they called over another buddy, and there's like three TSA agents all looking at the X-ray screen. And finally, they're like, "Yeah, we need to flag it." So they moved it over. And then he was digging through, and he pulled out the big metal piece. And I said, "It's for a microphone." And he's like, "Okay," and he kept looking, and then he finally found the microphone. And he lit up because I guess he wasn't really sure to believe me at first when I said it. But he lit up, and he was like, "Karaoke?" [laughs] I was like, "No, it's for podcasting." CHRIS: But not 100% no because we do sing plenty on this show, so... STEPH: I think that's what made me think of it. It was your singing. [laughs] CHRIS: Yep. My wonderful, wonderful singing. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly Podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris? What's new in your world? CHRIS: What's new in my world? We are in sort of a...what's the word? There's a bit of a lull right now, not like a big lull, but we had a bunch of clear work that came into the team, did a bunch of iterations, some testing, built some new features, et cetera. And now there's a small holding period basically where we wait for some new features to become available with some of the platforms that we integrate with and also as we think out some new facets of the platform that we're building. So we've got this little bit of time here where we're not necessarily building out as many new novel features. But instead, as a dev team, we're taking this moment to be like, oh, cool, let's tie down. I want to make a sailing analogy here, but I don't know sailing. It's like tie down the somethings and batten the hatches, maybe. That sounds like a thing. [chuckles] But so we have a couple of projects going right now. We want to really accept the truth and lean into Sidekiq. So right now, we have a mix of ActiveJob and Sidekiq jobs. And they're confusing, and et cetera, et cetera. So we want to kind of lean into that, upgrade dependencies, that sort of thing. We are, again, doing a little bit of work on the observability foundation of our system. so how do we know what's going on at runtime? Also, working on just some core features and functionality. We have done a little bit of an exploration into the event processing stuff, some of that that I've been talking about. It's actually been very interesting. 
So we're working with Customer.io as a platform, which is omnichannel communication behavior-based messaging sort of thing. So when a user does X, send them an email and then wait three days. And if they haven't responded, then do this other thing. And I think I've said this in previous episodes; I'm so wildly impressed with that platform. They have done such a good job. And I know that good software doesn't happen in a vacuum. In fact, if we're being honest, a lot of the software out there is not very good. And not only do they do a good job, but it's across...there's a ton of functionality in Customer.io. And it's interesting because we're finding ourselves leaning into it even more because it is such a solid platform and because it connects into our event system. Like, it's a segment destination, so all of our analytics events get piped into Customer.io, and then we can action on any of them. And the actions can be quite complicated. And this is where we're getting into good idea, terrible idea space. And to be clear, this is still just an exploration. But we basically wanted a way to do more. There are a bunch of different actions that you can take so, like send an email, send an SMS, or there are a couple of other slightly fancier ones. You can trigger an event within the Customer.io system. You can actually do an arbitrary HTTP POST, PUT, PATCH, whatever, any web requests you want to make. So if you want to integrate with essentially anything else out there, you can do that. You can send some structured data over the wire. And so we've now been like, okay, what if, and stay with me here, what if we use our analytic system and we send events whenever a user does something, and then that event eventually trickles down to Customer.io? Within that, we allow ourselves to respond to that event by emitting a different event within the system, within Customer.io. And then, via the webhook functionality, we fire that back to the Rails application. And then there we can do whatever we want. And in a way, that sounds absurd because we're starting from our app, and then we're sending some events down, processing them in certain ways, sending it back to the app, and then maybe doing something. In particular, one of the things we want to do is richly formatted Slack alerts. And Customer.io has a Slack alert functionality, but they can't have any of the fancy stuff. They can't link to our customer in the admin dashboard. So we found that that functionality is particularly useful for our admin team. And so we're like, ah, this feels weird. But if we were to do this loop out and back, then ideally, we get the power of Customer.io for non-technical users or non-engineering team users to configure workflows and to say, "When a user does this, I actually want to alert the admin team via Slack." And we want it to be rich and have buttons that you can click and all that kind of stuff. And although the thing that I just described seems complicated, is a word that I'll use for it, confusing at times, it isn't...like, I don't want to do all of that in the app. I don't want the app to have to think about how do I wait three days? We technically can do that with Sidekiq, but it gets us in trouble and whatnot, whereas Customer.io that's a core concept for them. And so, again, very much exploration. This will probably be a future good idea, terrible idea segment. But that's been an interesting one to explore. 
STEPH: You have quite a talent for you preface something as a bad idea, and you do a very good job of making it sound reasonable and good. [laughs] So it's interesting to be on that side of like, good idea, bad idea. It's like, I'm looking for the bad. And I have questions, but overall, [chuckles] you do a very good job of being very thoughtful and walking through why it makes sense or what are the benefits of it. So you answered some of my questions around why still send it to Customer.io versus just having it all in-house. So the fact that the admin team has access to it makes a lot of sense. I want to clarify one point. So when you send it to Customer.io, Customer.io then needs to send a message back to your application. And then that's when you customize the Slack message. Do you need Customer.io to send that message, or could you just fire off an event to Customer.io to say, "Hey, capture this, but don't do anything with this. And then we're going to send the Slack message because we want to customize it."? CHRIS: I think the key is that we want to leverage the fact that Customer.io is the platform that our operations team really is now becoming comfortable with and using for this behavioral automation workflow type logic. So that idea of when this event, you know, when this triggering event happens, if this condition is true, then respond in this way. And so because Customer.io is the platform that A, is quite good at that and B, is where our admin team is now thinking about doing that, one thing that we might do let's say a user completes some action within the application. So they fill out a form to submit their interest in some new platform feature. Initially, what we might want to do there is alert ourselves to say, "Hey, this happened. Take some action." And then eventually, we may want to instead switch that over and send an email to the customer with the next steps that they need to do. And the ability to gradually transition across that spectrum is really interesting to me, and again, Customer.io being the platform, sort of the hub for how we respond to these events. At the same time, I know that this feels like a generic message processing system that might be a Kafka queue somewhere else. And so I've got that in the back of my head of like, is this weird? I think it's a little weird. But it also, thus far as we're exploring it, is very approachable for the admin team, very familiar for them, and reasonably powerful. And also, there's a drag and drop editor for the events and the payloads. And it knows for this event, here's the stuff that's available to you. And so the ability for our admin team to interact with that interface is really great. And we don't have to build it. We don't have to think about it. But I will say I've worked at so many different companies that have their ad hoc system that makes it easy to do generic X, Y, and Z. And it's bad, and it falls down. And it's impossible to know when anything happens. And so, I've got a lot of concerns in the back of my head, which I will want to at least think through and understand the trade-offs that we're making if we pursue this path, but it is very interesting to me. So right now, a lot of this logic does live in the app. But it means that it requires a code change for anything that we want to do like this. We want to have a Slack alert whenever X happens. Now, the developers are in the loop for all of that. And really, it's the operations team that owns the decisioning on that. 
And so if they can also self-serve and instrument the action, the alert, the follow-up, the whatever it is, if we can give them those primitives in a platform that they already understand, that sounds nice. I'm intrigued, is what I'll say. So anyway, while we're in this lull period, we are trying out some fun stuff like that and exploring those sorts of things. STEPH: I like that perspective that you're putting on it, or at least the one that's standing out to me is the concept of ownership is like who gets to own these actions. But then beyond that, that's the part where I feel a little squirmy is, so we are using this third-party tool because it makes life easier. But then, at what point when we start building software around this third-party tool to then customize it back on our own side. Then if someone is in Customer.io, so if an admin user is in there and then they trigger an event, is there going to be confusion as to what's going to happen? And can they retry an event? Because I'm realizing my initial suggestion where it was like, hey, notify Customer.io that this is there but then also manage sending the Slack message that would prevent them from being able to have that retry capability. And that may be very much worth preserving. So then it's understood that hey, if you want to manage this, we are giving you full access to manage this work. We may customize it, but this is still the interface in which you go through to have three tries or to manage that workflow or these actions that get sent to users. CHRIS: Yeah. I think you've perfectly highlighted the why this might not be a great idea or at least the concerns to explore before adopting this more thoroughly. And even just the idea of adopting it more thoroughly, like, how tied into the system are we? How business-critical does this new external piece of software become? Because I've seen that to be really problematic where there are organizations that I've worked with that are like, "Oh God, we would love to move off of system X. But unfortunately, it's basically the one thing holding this business up." And I'm like, yeah, I get that. And that happens. So yeah, being really intentional with that. And that's why we're very much in an exploration place. But we have a bunch of stuff that we've done that required engineering work. And we're now seeing like, actually, could we map this into this other tool? And can we build the set of primitives in that space that now this team can own that whole experience? And then critically, can they debug it? Will we know when something goes wrong, et cetera? Those are always parts. At this point, I don't think I can just imagine a happy path. And I hope this isn't true for the rest of my life. But the work as a software developer, especially after having done a couple of rounds of it and as a consultant, I just imagine failure modes. It's all I do. I'll be like, okay, we just need to wire X up to Z, and then we need to fire off a request. And then, once we get the message back, then we can process them. I'm like, right. You just described 13 things that can go wrong. Now let's imagine each of the different failure states because that's all I'm going to do. Who cares about the happy path? Those are easy. Those write themselves. It's all of the failure modes that I need to think about. And someday, when I retire, and I go to a log cabin in the woods, and I don't talk to people for a while, maybe I'll go back to a place of only happy paths. But that is not my truth right now. 
STEPH: I can't tell you how many people in my personal life I have annoyed so much [laughs] because all I see are failure modes. And one, that's a delightful t-shirt. [laughs] I'd love to have that. And then yeah, I feel you because there are so many times where someone is...like, I'm with someone who's like a big idea person. And so they're just launching into what-ifs, and we did this. And I can't help it, and I have learned to help it. But it has been a struggle with some strong feedback from family and friends to reel it in. Because then I will start to think through okay, well, what's the details? And I have some questions. What happens when this happens? And yeah, all I see are failure modes. [laughs] It is very true for me too, and not always...not so great. So I, too, shall get a log cabin one day and try to forget all of that. CHRIS: I will say I painted that as a particularly glib version of myself. But some of what I'm doing right now, particularly joining an early-stage startup and taking the role of CTO, was very much to try and intentionally resist that. Because right now, I have to be really careful with how much of the potential edge cases and whatnot. I'm considering exactly how robust of a platform are we building? Very is the answer. But what about extremely? Because extremely is an option but extremely costs four times as much. Mostly in time being the critical element there. And so part of the work that I'm doing now is just trying to push on those edges, push on those boundaries, find the places where we can move quickly, and still build a robust platform because frankly, we're building...Sagewell is a financial platform under the hood, and I can't be flippant with that. We as a team have to be really careful with the thing that we're building. But we also have to move quickly. We have to be able to iterate. We have to be able to build something and try it out and see if it works. And then, if it doesn't, maybe shelve it and pull it out of the codebase. And it has been a real challenge, but it was the challenge that I wanted here. And so I've been enjoying that work, but it has been a stretch, a growth moment, let's call it. STEPH: I don't know if you've shared that particular goal with me in transitioning to a CTO role, but I really, really like it. One, it's very aligned with who you are. You're very thoughtful, and you look for areas to push and ways to do that. And then I also struggle in those areas, and thoughtbot specifically and consulting has helped push me in directions, push me out of my comfort zones but still in a safe space where I have other people to talk to as I'm making those decisions and pushing past the comfort areas that I have. But one of them is that I will initially think things have to be perfect or really planned. And I had a really nice conversation with Chad Pytel, who is one of the Founders of thoughtbot and also COO and host of the Giant Robots Smashing Into Other Giant Robots Podcast. And we were chatting about a new offering that thoughtbot is bringing to the market. And it's one that I've been involved with. And I started getting really in the weeds of like, but we really have to plan out how this is going to look and all the actions that need to take place before then we can really sell this type of engagement to a new client. And as I was going through this list of worries, when I was done, he mentioned he's like, "All of those are valid and something to consider." He's like, "But we don't have any customers yet." 
So the first part is we feel that we are in a space that we have enough of information to get started. And it's something that we've done before. And then, we'd like to see where customers align with us on this need because we're going to end up shaping this work in response to what their needs are. And so, we can't really begin that shaping until we understand more of what people are looking for. I was like, oh yeah, that's such a nice point. It just reminded me in regard to pushing those boundaries of yes, we need planning upfront, and we look for failure modes. But then there's also an important aspect of then finding ways to keep moving forward and getting more feedback and then balancing those two. CHRIS: Yeah, I think that's definitely right the as always, anchoring it to the customer. What is it that they need? How do we connect with them and hear from them? And ideally, keep those feedback loops as short as possible. That's the game, and everything else fits around that. But yeah, so we're trying some stuff. We'll see how it goes. I will certainly report back, depending on how it plays out. But that's a little bit of what's up in my world. What's up in your world? STEPH: I have been thinking recently about working in isolation. It's a topic that Joël Quenneville, who's another thoughtboter and has been on the show a number of times, it was a topic that he'd actually pointed out to me and mentioned. And so, I wanted to bring that here and share it with you because I'd love to get some of your thoughts on this as well. But I've typically had the viewpoint that when developers are sent off to work on a large, nebulous task, that it's a recipe for disaster, and almost everyone's going to lose in that scenario. And it tends to be a combination of isolation, very distant due dates, and loosely defined scope that leads to those really poor results. However, as developers, it's not inconceivable for us to land in that position. And it's very similar to my current project, who I'm working with Joël on, where we were given a very fuzzy project with some really aggressive goals, and the engagement is going really well. So that led Joël and I to wonder why is this working? This is the thing that we said that people should never do, but it's actually going quite well for us. So reflecting upon some of the things that are working well for us, even though we are in more of an isolated state than we would typically work, some of the things that I've been reflecting on or some of the strategies I should say that we've applied to this situation is number one, we did work hard to plug into an existing team. So when we joined, we joined more of an ad hoc volunteer team. And in everybody's spare time, those individuals were then contributing to the CI process in terms of trying to speed things up and improve things for the rest of the team. But otherwise, there wasn't really a team. There wasn't much structure to it. So it felt like everybody was very much off in their own world doing their own thing, occasionally putting up some code changes for review. And then you had to gain a lot of context to understand what it was that they were doing. So one of the things that I advocated for early on that I thought was more of just my personal preference but I think has actually worked well in regards to the success of the project as well is to plug into an existing team. 
So even if you are not working with that team on their day-to-day tasks, but you want to have more people to interact with and more people to share your context with. So you are essentially reducing the isolation of you're no longer these two people who are off in a corner working on something, and nobody has any idea what you're doing, and only one person is getting a status update. There is now a whole channel or team of people that have some insight as to what's going on. And they can also really unblock you for when you get stuck because then if you do have a question, but there's that one person who has been like your go-to person for this whole project, if they're out on vacation, or if they leave, or just something happens, you're suddenly blocked. And you don't know who to go to because you've been part of this larger company, but you haven't interacted with anybody outside of that one person. So at least if you're plugged into another team, you've immediately got some friends or some other people to go to and say, "Hey, I'm not sure who can help me with this, but I have this problem." And then, from there, you can get more help. CHRIS: This is super interesting. To start, I really like that you're framing this in terms of this is a thing that we often recommend against or see as an anti-pattern, and yet in this particular case, it's working. Let's look at that. Because I think the things that you're like, huh, that's interesting. That phrase "Huh, that's interesting" is very interesting. It often highlights like oh, something is behaving counter to how we would expect it to, so let's dig in and explore that. And so I love that that was the reaction and then sort of the conversation that spilled out of that. I'm also not super surprised that the combination of you and Joël were able to find a way to make this successful because you are two of the most capable developers that I've worked with but also particularly excellent communicators and advocates for the work that you're doing and the way that one should do the work. So the idea that there's a situation that may not be the ideal mode of working and that you're able to take that and say, "What if we shift it just a little bit and make it a little bit more manageable and whatnot?" So unsurprised, frankly, that you found a way collectively to make this a little bit better. And then I think yeah, it sounds like you're doing the things...so just like, we're in isolation, hmm, that doesn't seem great. Let's unisolate and connect to some people, and that just feels so true. I'm very interested to hear, though. I'm guessing there's more to this story or other things that you've done. Are there other tactics or ways that you've shifted this around? STEPH: Yeah, there's a couple more. So this is one that (And thank you for the kind words.) this was one that I think Joël is really exceptional at. So Joël is really good at building diagrams and graphs and then sharing that with the team as sort of like we've spent a couple of days understanding this big, messy concept. Here's a nice condensed graph that shows how we went about understanding this. And then here's the big overall picture of what we've learned from this, which has been wonderful for so many reasons. And every time that we share something with the team, one, it just helps build camaraderie, especially in remote days, it just builds camaraderie on hey, we're all online. And we're working. 
And here's the thing that I'm working through or struggling with or something that I learned. I often do that, especially when I get frustrated and something goes wrong. I love to share the I did this today. It went terribly. [laughs] Let me tell you about it, so you're aware of it in case it helps you. And specifically, the diagrams are really nice because then other people can just see and appreciate it, or they can point something out that we didn't know. Or they'll see a different angle because they're more familiar with the system. So they can say, "Oh yeah, that totally makes sense," or "I had no idea that was happening." So that's been a really nice way to engage with the team. And so, essentially, the little title for that strategy is just overshare. Just share all the things that you're doing and find ways to make it digestible for the team so then they can go along on this big, nebulous journey with you. And you can also put it in threads so that way, you're not flooding a channel, but then people can opt-in to that oversharing if they would like more insight into the work that you're doing. CHRIS: Opt-in to that oversharing. [laughs] STEPH: Exactly. I mean, it's not forced oversharing; it's just it's here if people would like it. That was a really nice compliment that some other thoughtboters received from their client team is someone had mentioned that there's so much information that's getting shared from the thoughtboters that they had trouble keeping up. And they really liked that. They really appreciated that they could then go check out this channel or these threads and see exactly the type of work that was happening and the outcomes of it. And then they could just check it maybe beginning of the day, end of the day and get that knowledge dump. Some of the other strategies that we've used are giving ourselves mini-goals to accomplish as part of the larger, more nebulous task. So as we have this very large goal in mind, it's like, where's the small piece? Where's an entry point? What's a task or a goal that we can define? And then we want to break that down into what questions do we need to ask? How can we start moving in this direction? And we want to find something that has an answer. So each time that we start researching once we've gotten to that point...and this is hard. I feel like people may know that, but I should just say that this is hard to take something nebulous and then find the entry point and break down some goals. And that has been one of the wonderful parts of then having a buddy for this type of project because then we can bounce ideas off of each other. And we can also help the other person not go too deep into an area. Because I have definitely had moments where I've been very passionate about like, "We need to do this," and Joël is just like, do we? And I'm like, "Yeah." And he's like, "Do we though?" And I'm like, "I guess not. I just really, really want to." [laughs] It's been very helpful to have a partner balance some of those feelings. And once you can break down some of that amorphous problem into those smaller goals, then you can also create tickets, which is also a really nice way to then surface the work that you're doing. You can document how you're researching, document the question. And then once you have that question of what you're in search of, it's so nice because then once you find the answer, that's immediately a good moment to pause and reflect. 
So I think in a recent episode, we were chatting about this where Joël and I were trying to understand why the tests weren't being balanced properly across each process that was available. And we found the answer, and we started immediately digging into fixing it or solutions. And then it took us a moment to go back and say, "Actually, this ticket is really just about understanding the problem, not fixing the problem." And so that was a nice; now that we understand the problem, let's go back high-level to define our next goal from this big, nebulous task because maybe fixing that balancing is the right thing to do, but maybe not, and we just need to reconsider. So for that portion of breaking down a big, nebulous task and then identifying smaller tasks that you can achieve, time-boxing has been huge for us in regards of what's something that we can accomplish this week, or what's something we can accomplish today that will then move us forward? And then making sure that we are setting deadlines for ourselves. So normally, this is another area where it's like, huh, that's interesting. I'm a big believer in deadlines. But I do think self-imposed deadlines are really helpful. CHRIS: I'm intrigued to hear you say that you're not a big fan of deadlines because I assume we're actually more aligned on this. But deadlines that are arbitrary and also come with fixed scope and other immovable things, yes, those are the worst in the world. But deadlines that we set for ourselves, and then we use that as a mechanism to hone and refine the scope that we're going to get out the door by that deadline, I find those incredibly useful. And that sounds like that's the same sort of thing you have going on here is like by saying we're willing to expend this much to get a result, that defines the work going into it. STEPH: Yeah, that's fair. Everything that you said is true, too; in regards to, I'm realizing I default that when I hear the word deadline, I'm so used to teams having deadlines that are defined by other individuals that are not part of the work. And as you said, the scope has already been defined, and it can't be changed. And it's all of the bad things that then go with it. So when I think of deadlines, I immediately think of that type of deadline versus the more self-imposed, yes, we can revisit, yes, the team has bought in and understands why this is important. Those types of deadlines are very helpful. It's that first part that I default to that I think of immediately, and I need some reassurance that that is not the type of deadline that I'm looking at or being forced to meet. I have a very similar feeling for estimates. Like, those both fall in the same category for me is; as soon as I hear estimation and deadline, I get nervous. And then I just need to understand the purpose of both and who is setting both of those and the communication around them. And then what does that failure mode look like, the one that we're always looking for? So yeah, deadlines and estimations fit into that. Initially, I'm very hesitant and cautious, but I think they're both very good tools. CHRIS: Yeah, I feel like those are very closely related. And they're definitely tools that can be used for great good or for great evil. And so, ideally, we advocate for the great good usage. But more generally, I love, again, the sharing around the process and what's worked for you in this less typical or often somewhat problematic workflow. 
I will say, again, so I gave you the series of compliments earlier, and I stand by those compliments for you and Joël. But I think also the sort of related aspect is that you two are both quite senior, very capable, very comfortable suggesting changes, suggesting workflows. So I think the potential dangers of isolation are still very much there. And the fact that the two of you have been able to find a way to work more effectively and perhaps change the terms of things just a little bit to make this effective is A, unsurprising but B, not something that I would expect of every team. I think you've described a wonderful list of the specifics as to how you did that. And ideally, if folks that are perhaps a little earlier on in their career are sent out for a month with a wild project, and they're sent to do it in isolation, hopefully, they can borrow from that list. But again, I do think this is a thing that, from an organizational perspective, we should be very careful with when we're imposing this isolation on it because it takes two fantastic folks like you and Joël to break out of the shackles of it. STEPH: The more we're talking about this, the more apparent it's also becoming that I started with this; how do you manage isolation? And my answer is you get out of it. [laughs] Get out of isolation as quickly as possible. Someone thought it was a good idea to put you there or a good idea to structure it that way. Or maybe they didn't mean it intentionally, but that's how things then shook out. So that's really what a lot of those strategies are about is, then how do I get myself out of this corner that you put me in? Because nobody put Stephanie in a corner. So it's essentially that's all the strategies are looking for ways to say, hey, I'm isolated, but I really don't want to be, and it's dangerous for me to be isolated in this way. Even as a more senior capable developer, it's more likely that things could go wrong, and miscommunications, misaligned expectations. So I need to find ways to then bring the work that I'm doing to make it more relevant to other people on the team. So then we can have more overlap, or at least I can share a lot of the work that's being done. CHRIS: Yeah, absolutely. I think with that wonderful summary and, frankly, utterly fantastic movie reference, what do you think? Should we wrap up? STEPH: Let's do it. Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
342: Sky Icing

The Bike Shed

Play Episode Listen Later Jun 14, 2022 43:42


Another toaster strudel debate?! Plus, the results are in for the most listened-to podcast in the RoR community! :: drum roll :: Steph has a "Dear Gerrit" message to share. Chris has a follow-up on mobile app strategy. The Bike Shed: 328: Terrible Simplicity (https://www.bikeshed.fm/328) When To Fetch: Remixing React Router - Ryan Florence (https://www.youtube.com/watch?v=95B8mnhzoCM) Virtual Event - Save Time & Money with Discovery Sprints (https://thoughtbot.com/events/save-time-money-with-discovery) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: thoughtbot's next virtual event "Save Time & Money with Discovery Sprints" is coming up on June 17th, from 2 - 3 PM Eastern. It's a discussion with team members from product management, design and development. From a developer perspective, topics will include how to plan a product's architecture, both the MVP and future version, how to lead tech spikes into integrations and conduct build vs. buy reviews of third-party providers. Head to thoughtbot.com/events to register; the event is June 17th, 2 - 3 PM ET. Even if you can't make it, registering will get you on the list for the recording. CHRIS: We're the second-best. We're the second-best. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: I'm very happy to report that I picked up a treat from the store recently. So while I was in Boston and we were hanging out in person, we talked about Pop-Tarts because that always comes up as a debate, as it should. And then also Toaster Strudels came up, so I now have a package of Toaster Strudels, and those are legit. Pop-Tart or Toaster Strudel, I am team Toaster Strudel, which I know you're going to ask me about icing and if I put it on there, so go ahead. I'm going to pause. [laughs] CHRIS: It sounds like I don't even need to say anything. But yes, inquiring minds want to know. STEPH: I think that's also my very defensive response because yes, I put icing on my Toaster Strudel. CHRIS: How interesting. [laughs] STEPH: But it feels like a whole different class of pastry. So I'm very defensive about my stance on Pop-Tarts with no icing but Strudel with icing. CHRIS: A whole different class of pastry. Got it. Noted. Understood. So did you travel? Like, were these in your luggage that you flew back with? STEPH: [laughs] Oh no. They would be all gooey and melty. No, we bought them when we got back to North Carolina. Oh, that'd be a pro move; just pack little individual Strudels as your airplane snack. Ooh, I might start doing that now. That sounds like a great airplane snack. CHRIS: You got to be careful though if the icing, you know, if it's pressurized from ground level and then you get up there, and it explodes. And you gotta be careful. Or is it the reverse? It's lower pressure up in the plane. So it might explode. STEPH: [laughs] Either way, it might explode. CHRIS: Well, yeah. If you somehow buy a packet of icing that is sky icing that is at that pressure, and you bring it down, then...but if you take it up and down, I think it's fine. If you open it at the top, you might be in danger. If you open icing under the ocean, I think nothing's going to happen. So these are the ranges that we're playing with.
STEPH: I will be very careful sky icing and probably pack two so that way I have a backup just in case. So if one explodes, we'll be like, all right, now I know what I'm working with and be more prepared for the next one. CHRIS: That's just smart. STEPH: I try to make smart travel decisions, Toaster Strudels on the go. Aside from travel treats and sky icing, I have some news regarding Planet Argon, who is a Ruby on Rails consultancy regarding their latest published this year's Ruby on Rails community survey results. And so they list a lot of fabulous different topics in there. And one of them includes a learning section that highlights most listened to podcasts in the Ruby on Rails community as well as blogs and some other resources. And Bike Shed is listed as the second most listened to podcast in the Ruby on Rails community, so whoo, golf clap. CHRIS: Fantastic. STEPH: And in addition to that, the thoughtbot blog got a really nice shout-out. So the thoughtbot blog is in the number two spot for the most visited blogs in the community. In the first spot is Ruby Weekly, which is like, you know, okay, that feels fair, that feels good. So it's really exciting for the thoughtbot blog because a lot of people work really hard on curating and creating that content. So that's wonderful that so many people are enjoying it. And then I should also highlight that for the podcast in first place is Remote Ruby, so congrats to Chris, Jason, and Andrew for grabbing that number one spot. And Brittany Martin, host of the Ruby on Rails Podcast, along with Brian Mariani, Jemma Issroff, and Nick Schwaderer, are in the number three spot. And some people say that Ruby is losing steam but look at all that content and all those highly ranked podcasts. I mean, we like Ruby so much we're spending time recording ourselves talking about it. So I say long live Ruby, long live Rails. CHRIS: Yes. Long live Ruby indeed. And yeah, it's definitely an honor to be on the list and to be amongst such other wonderful shows. Certainly big fans of the work of those other podcasts. We even did a joint adventure with them at one point, and that was a really wonderful experience, so yeah, honored to be on the list alongside them. And to have folks out there in the world listening to our tech talk and nonsense always nice to hear. STEPH: Yeah. You and I show up and say lots of silly things and technical things into the podcast. The true heroes are the ones that went and voted. So thank you to everybody who voted. That's greatly appreciated. It's really nice feedback. Because we get listener responses and questions, and those are wonderful because it lets us know that people are listening. But I have to say that having the survey results is also really nice. It lets us know people like the show. Oh, but I did go back and look at some of the previous stats because then I was like, huh, so I'm paying attention. I looked at this year's, and I was like, I wonder what last year's was or the year before that. And I think this survey comes out every two years because I didn't see one for 2021. But I did find the survey results for 2020, which we were in the number one spot for 2020, and Remote Ruby was in the second spot. So I feel like now we've got a really nice, healthy podcasting war situation going on to see who can grab the first spot. We've got two years, everybody, to see who [laughs] grabs the number one spot. That's a lot of prep time for a competition. 
CHRIS: Yeah, I feel like we should be like, I don't know, planning elaborate pranks on them or something like that now. Is that where this is at? It's something like that, I think. STEPH: I think so. I think this is where you put like sky frosting inside someone's suitcase, and that's the type of prank that you play. [laughs] CHRIS: The best of pranks. STEPH: We'll definitely put together a little task force. And we'll start thinking of pranks that we all need to start playing on each other for the podcasting wars that we're entering for the next few years. But anywho, what's going on in your world? CHRIS: Let's see, what's going on in my world? A fun thing happened recently. I had a chance to reflect back on some architectural choices that we've made in the Sagewell platform. And one of those specific choices is how we've approached building our native mobile apps. We made what some listeners may remember is an interesting set of choices. In particular, in Episode 328, which we'll include a link to in the show notes, I shared with you the approach that we're doing, which is basically like, Inertia is great, web user great. We like the web as a platform. What if we were to wrap it in a native shell and find this interesting and somewhat unique hybrid trade-off point? And so, at that point, we were building it. We had most of it built out, and things were going quite well. I think we maybe had the iOS app in the store and the Android app approaching the store or something like that. At this point, both apps have been released to the store, so they are live. Production users are signing in. It's wonderful. But I had a moment in the past couple of weeks to reassess or look at that set of choices and evaluate it. And thankfully, I'm happy with the choices that we've made. So that's good. But to get into the specifics, there were two things that happened that really, really framed the choice that we made, so one was we introduced a major new feature. We basically overhauled the first-run experience, the onboarding that users experience, and added a new, pretty fundamental facet to the platform. It's a bunch of new screens, and flows, and error states, and all of this complexity. And in the process, we iterated on it a bunch. Like, first, it looked like this, and then we changed the order of the screens and switched out the error messages, and et cetera, et cetera. And I'll be honest, we never even thought about the mobile apps. It just wasn't even a consideration. And interestingly, we did as a final check before going fully live and releasing this out to the full production audience; we did spot check it in the mobile apps, and it didn't work. But it didn't work for a very specific, boring, technical reason that we were able to resolve. It has to do with iframes and WebViews and embedded something, something. And we had to set a flag. Thankfully, it was solvable without a deploy of the native mobile apps. And otherwise, we never thought about the native apps. Specifically, we were able to add this fundamental set of features to our platform. And they just worked in native mobile. And they were the same as they roughly are if you're on a mobile WebView or if you're on a desktop web, you know, slightly different in terms of form factor. But the functionality was all the same. And critically, the error states and the edge cases and the flow, there's so much to think about when you're adding a nontrivial feature to an app. 
And the fact that we didn't have to consider it really spoke to the choice that we made here. And again, to name it, the choice that we made is we're basically just reusing the same WebViews, the same Rails controllers, and the same what are Svelte components under the hood but the same essentially view layer as well. And we are wrapping that in a native iOS. It's a Swift application shell, and on Android, it's a Kotlin application shell. But under the hood, it's the same web stuff. And that was really great. We just got these new features. And you know what? If we have to rip that whole set of functionality out, again, we won't need to deploy. We won't need to rethink it. Or, if we want to subtly tweak it, we can do that. If we want to think about feature flags or analytics, or error states or error reporting, all of this just naturally falls out of the approach that we took. And that was really wonderful. STEPH: That's super nice. I also love this saga of like, you made a choice, and then you're coming back to revisit and share how it's going. So as someone who's never done this before, in regards of wrapping an application in the manner that you have and then publishing it and distributing it that way, what does that process look like? Is this one of those like you run a command, and literally, it's going to wrap the application and then make it hostable on the different mobile app stores? Or what's that? Am I oversimplifying the process? What does that look like? CHRIS: I think there are a lot of platforms or frameworks I think would probably be the better word like Capacitor is something that comes to mind or Ionic or Expo. There are a handful of them that are a little more fully featured in what they provide. So you just point us at your React Views and whatnot, and we'll wrap that up, and it'll be great. But those are for, I may be overgeneralizing here, but my understanding is those are for more heavy client-side bundles that are talking to a common API. And so you're basically taking your same rich client-side application and bundling that up for reuse on the native app, the native app platforms. And so I think those do have some release to the store sort of thing. In our case, we went a little bit further with that integration wrapper thing that we built. So that is a thing that we maintain. We have a Sagewell iOS repo and a Sagewell Android repo. There's a bunch of Swift and Kotlin code, respectively, in each of them, and we deploy to the stores manually. We're doing that whole process. But critically, the code that is in each of those repositories is just the bridge glue code that says, oh, when this Inertia navigation event happens, I'm going to push a WebView to the navigation stack. And that's what that is. I'm going to render the tab bar of buttons at the bottom with the navigation elements that I get from the server. But it's very much server-driven UI, is the way that I would describe it. And it's wrapping WebViews versus actually having the whole client bundle wrapped up in the thing. It's unfortunately subtle to try and talk through on the radio, but yeah. [laughs] STEPH: You're doing great; this is helping. So if there's a change that you want to make, you go to the Rails application, and you make that change. And then do you need to update anything on that iOS repo? It sounds like you don't, which then you don't have to push a new update to the store. CHRIS: Correct. For the vast majority of things, we do not need to make any changes. 
It's very rare for us to deploy the iOS or the Android app is a different way to put it or to push new releases to the store. It happens we may want to add a new feature to the sort of bridge layer that we built, but increasingly, those are rare. And now it's basically like, yeah, we're just wrapping those WebViews, and it's going great. And again, to name it, it's a trade-off. It's an intentional trade-off that we've made. We're never going to have the richest, most deep platform integration, smooth experience. We are making a small trade-off on that front. But given where we're at as an organization, given how early we are, how much iteration and change, we chose an architecture that optimizes for that change. And so again, like what you just said, yeah, I can...you know how it's really nice to be able to deploy six times a day on a web app, and that's a very straightforward thing to do? It is not so straightforward in the native mobile world. And so, we now have afforded ourselves the ability to do that. But critically, and this is the fun part in my mind, have the trade-offs in the controls. So if we were just like, it's just a WebView, and that's it, and we put it in the stores, and we're done, that is too far of an extreme in my mind. I think the performance trade-offs, the experience trade-offs, it wouldn't feel like a native app like in a deep way, in a problematic way. And so as an example, we have a navigation bar at the top of our app, particularly on iOS, that is native iOS navigation. And we have a tab bar at the bottom, which is native tab UI element. I forget actually what it's called, but it's those elements. And we hide the web application navigation when we're in the mobile context. So we actually swap those out and say, like, let's actually promote these to formal native functionality. We also, within our UI on the web, have a persistent button in the top right corner of your screen that says, "Need help? Reach out to your retirement advocate." who is the person that you get to work with. You can send questions, et cetera, et cetera. It's this little help sidebar drawer thing that pops out. And we have that as a persistent HTML button in the top corner of the web frame. But when we're on native, we push that up as a distinct element in the native UI section. And then again, the bridge that I'm talking about allows for bi-directional communication between the JavaScript side and the native side or the native side and the JavaScript side. And so it's those sorts of pieces that have now afforded us all of the freedom to tinker, and we don't need to re-release where we're like, oh, we want to add a new weird button that does a thing in the WebView when you click on a button outside the WebView. We now just have that built-in. STEPH: Yeah, I really like the flexibility that you're describing. When you promoted those elements to be more native-friendly so, like the navigation or the footer or the little get help chat, is that something that then your team implemented in like the iOS or the Kotlin repo? Okay, I see you nodding, but other people can't see that, so...[laughs] CHRIS: Yeah. I was going to also say the words, but yes, those are now implemented as native parts. So the thing that we built isn't purely agnostic decoupled. It is Sagewell-specific; a lot of it is low-level. Like, let's say we want to wrap an Inertia app in a native mobile wrapper. Like, 90% of the code in it is that, but then there are little bits that are like, and put a button up there. 
And that button is the Sagewell button. And so it's not entirely decoupled from us. But it mostly is this agnostic bridge to connect things together. STEPH: Yeah, the way you're describing it sounds really nice in terms of you're able to get out the app quickly and have a mobile app quickly that works on both platforms, and then you're still able to deploy changes without having to push that. That was always my biggest mental, or emotional hurdle with the idea of mobile development was the concept of that you really had to batch everything together and then submit it for review and approval and then get it released. And then you got to hope people then upgrade and get the newest version. And it just felt like such a process, not that I ever did much of it. This was all just even watching like the mobile team and all the work that they had to do. And I had sympathy pains for them. But the fact that this approach allows you to avoid a lot of that but still have some nice, customized, more native elements. Yeah, I'm basically just recapping everything you said because I like all of it. CHRIS: Well, thank you, friend. Like I said, I've really enjoyed it, and similar to you, I'm addicted to the feedback loop of the web. It's beautiful. I can deploy ten times or however many I want. Anytime I want, I can push out a new version. And that ability to iterate, to test, to explore, to tweak, to not have to do as much formal testing upfront because I'm terrified that if a bug sneaks out, then, it'll take me two weeks to address it; it just is so, so freeing. And so to give that up moving into a native context. Perhaps I'm fighting too hard to hold on to my dream of the ability to rapidly iterate. But I really do believe in that and especially for where we're at as an organization right now. But, and a critical but here, again, it's a trade-off like anything else. And recently, I happened to be out about in the town, and I decided, oh, you know what? Let me open up the app. Let me see what it's like. And I wasn't on great internet. And so I open the app, and it loads because, you know, it's a native app, so it pops up. But then the thing that actually happened is a loading spinner in the middle of the screen and sort of a gray nothing for a little while until the server request to fetch the necessary UI elements to render the login screen appeared. And that experience was not great. In particular, that experience is core to the experience of using the app every single time. Every time you use it, you're going to have a bad time because we're re-downloading that UI element. And there's caching, and there's things that could happen there to help with that. But fundamentally, that experience is going to be a pretty common one. It's the first thing that you experience when you're opening the app. And so I noticed that and I chatted with the team, and I was like, hey, I feel like this is actually something that fixing this I think would really fundamentally move us along that spectrum of like, we've definitely made some trade-offs here. But overall, it feels snappy and like a native app. And so, we opted to prioritize work on a native login screen for both platforms. This also allows us to more deeply integrate. So particularly, we're going to get biometric logins like fingerprints or face scans, or whatever it is. But critically, it's that experience of like, I open the Sagewell native app on my iOS phone, and then it loads immediately. 
And then I show it my face like we do these days, and then it opens up and shows me everything that I want to see inside of it. And it's that first-run experience that feels worth the extra effort and the constraints. Because now that it's native mobile, that means in order to change it, we have to do a deploy, not a deploy, release; that's what they call it in the native world. [laughs] You can tell I'm well-versed in this ecosystem. But yeah, we're now choosing that trade-off. And what I really liked about this sort of set of things like the feature that we were able to just accidentally get for free on native because that's how this thing is built. And then likewise, the choice to opt into a fully native login screen like having that lever, having that control over I'm going to optimize for iteration generally, but where it's important, we want to optimize for performance and experience. And now we have this little slider that we can go back and forth. And frankly, we could choose to screen by screen just slowly replace everything in the app with true native views backed by APIs. And we could Ship of Theseus style replace every element of the app with true native mobile things until none of the old bridge code exists. And our users, in theory, would never know. Having that flexibility is really nice given the trade-off and the choice that we've made. STEPH: You said a word there that I missed. You said ship something style. CHRIS: Ship of Theseus. STEPH: What is that? CHRIS: It's like an old biblical story, I want to say, but it's basically the idea of, like, you have the ship. And then some boards start to rot out, so replace those boards. And then the mast breaks, you replace the mast. And slowly, you've replaced every element on the ship. Is it still the same ship at that point? And so it's sort of a philosophical question. So if we replace every single view in this app with a native view, is it still the same app? Philosophers will philosophize about it forever, but whatever. As long as we get to keep iterating and shipping software, then I'm happy. STEPH: [laughs] Y'all philosophize. That's that word, right? CHRIS: Yeah. STEPH: And do your philosopher thing. We'll just keep building and shipping. CHRIS: I don't know if I pronounced it right. It's like either Theseus or Theseus, and I'm sure I said the wrong one. And now that I've said the other, I'm sure both of them are wrong somehow. It's like a USB where there's up and down, and yet somehow it takes three tries. So anyway, I may have mispronounced it, and I may be misattributing it, but that's the idea I was going for. STEPH: Well, given I wasn't even familiar with the word until just now, I'm going to give both pronunciations a thumbs up. I also really like how you decided that for the login screen, that's the area that you don't want people to wait because I agree if you're opening an application or opening...maybe it's the first time, maybe it's the 100th time. Who knows? But that feels important. Like, that needs to be snappy. I need to know it's responsive. And it builds trust from the minute that I clicked on that application. And if it takes a long time, I just immediately I'm like, what are y'all doing? Are y'all real? Do you know what you're doing over there? So I like how you focused on that experience. But then once I log in, like if something is slow to log me in, I will make up excuses for the application all day where I'm like, well, you know, maybe it's my connection. It's fine.
I can wait for the next screen to load. That feels more reasonable. And it doesn't undermine my trust nearly as much as when I first click on the app. So that feels like a really nice trade-off as well, or at least a nice area that you've improved while still having those other trade-offs and benefits that you mentioned. CHRIS: To highlight it, you used a phrase there which I really liked. Like, it's building trust. If something's a little bit off in that first run experience every single time, then it kind of puts a question in the back of your head, maybe not even consciously. But you're just kind of looking at it, and you're like, what are you doing there? What are you up to, friend? Humans say to the apps they use on their phone. That's normal, right? When you talk... But to name it, we've also done a round of performance work throughout the app. And so there are a couple of layers to it. But it was work that we had planned for a while, but we kept deferring. But now that we're seeing more usage of the native apps, the native apps experience the same surface area of performance stuff but all the more so because they may be on degraded network connections, et cetera. And so this is another example where this whole thing kind of pays off. The performance work that we did affects everything. It affects the web. It's the same under the hood. It's let's reduce the network requests that we're making in the payloads that we're sending, particularly the network requests to upstream things, so like the banking partner that we're using and those APIs, like, collating all the data to then render the screen. Because of Inertia, we only have a single sort of back and forth conversation via the API as opposed to I think it's pretty common to have like seven different APIs and four different spinners on the screen. We're not doing that, none of that on my watch. [chuckles] But we minimize the background calls to the other parties that we're integrating with. And then, we reduce the payload of data that we're sending on each request. And each of those were like, we had to think about things and tweak and poke, but again it's uniform. So mobile web has that now, desktop web has that now. Android, iOS, they all just inherited it sort of that just happened one day without a deploy or release, without a release of either of the native mobile apps. We did deploy to the web to make that happen, but that's easy. I can do that a bunch of times a day. One last thing I want to share as we're on this topic of trade-offs and levers, there was a really great conference talk that I watched recently, which was Ryan Florence of remix.run also React Router fame if you're familiar with him from that. But he was talking about the most recent version of Remix, which is their meta framework on top of React. But they've done some really interesting stuff around processing data, fetching data, when and how to sequence that. And again, that thing that I talked about of nine different loading spinners on the screen, Remix is taking a very different approach but is targeting that same thing of like, that's not great for user experience. Cumulative layout shift being the actual number that you can monitor for this. But in that talk, there are features that they've added to Remix as a framework where you can just decide, like, do we wait for this or do we not? Do we make sure we have all of the data, or do we say, you know what? Actually, this is going to be below the fold. 
So it's okay to defer loading this until after we send down the first payload. And then we'll kick in, and we'll do it from the client-side. But it's this wonderful feature of the framework that they're adding in where there's basically just a keyword that you can add to sort of toggle that behavior. And again, it's this idea of like trade-offs. Are we okay with more layout shift, or are we okay with more waiting? Which is it that we're going to optimize for? And I really love that idea of putting that power very simply in the hands of the developers to make those trade-off decisions and optimize over time for what's important. So we'll share a link to that talk in the show notes as well. But it was very much in the same space of like, how do I have the power to decide and to change my mind over time? That's what I want. But yeah, with that, I think that's enough of me updating on the mobile app. I'll continue to share as new things happen. But again, I'm at this point very happy with where we're at. So yeah, it's been fun. But yeah, what else is up in your world? STEPH: I have a dear Gerrit message that I wrote earlier, so I want to share that with you. Gerrit is the system that we're using for when we push up code changes that then manages very similar in the competitive space of like GitHub and GitLab, and Bitbucket. And so the team that I'm working with we are using Gerrit. And Gerrit and I, you know, we get along for the most part. We've managed to have a working relationship. [chuckles] But this week, I wrote my dear Gerrit letter is that I really miss being able to tell a story with my commit messages. That is the biggest pain that I'm feeling right now. So for anyone that's less familiar or if you already are familiar with Gerrit, each change that Gerrit shows represents a single commit that's under review. And each change is identified by a Change-Id. So the basic concept of Gerrit is that you only have one commit per review. So if you were to translate that to GitHub terminology, every pull request is only going to have one commit, and so you really can't push up multiple. And so, where that has been causing me the most pain is I miss being able to tell a story. So like even simple stories that are like, hey, I removed something that's not used. I love separating that type of stuff into its own commit just so then people can see that as they're going through review. Now, before I merge, I'm likely to squash, and that doesn't feel important that it needs to be its own commit. That's really just for the reviewer so they can follow along for the changes. But the other one, I can slowly get over that one. Because essentially, the way I get around that is then when I do push up my code for review, is I then go through my change request, and then I just add comments. So I will highlight that line and say, "Hey, I'm removing this because it's not in use." And so, I found a workaround for that one. But the one I haven't found a workaround for is that I don't push up my local work very often because I love having lots of local, tiny, green commits so that way I can know the progress that I'm at. I know where I'm headed. Also, I have a safe space to roll back to, but then that means that I may have five or six commits that I have locally, but I haven't pushed up somewhere. And that is bothering me more and more hour by hour the more I think about it that I can't push stuff up because it makes me nervous. 
Because, I mean, usually, at least by the end of the day, I push everything up, so it's stored somewhere. And I don't have to worry about that work disappearing. Now I am working on a dev machine. So there is that aspect of it's technically...it's not even on my local machine. It is stored somewhere that I should still be able to access. CHRIS: What's a dev machine? The way you're saying it, it sounds like it's a virtual machine, not like a laptop. But what's a dev machine? STEPH: Good question. So the dev machine is a remote server or remote machine that then I am accessing, and then that's where I'm performing. That's where I'm writing all of my work. And then that's also kind of the benefit is everything is not local; it's controlled by the team. So then that also means that other teams, other individuals can help set up these environments for future developers. So then you have that consistency across everyone's working with the same Rails version, or gems, or has access to the same tools. So in that sense, my work isn't just on my laptop because then that would really worry me because then I've got nowhere...it's not backed up anywhere. So at least it is somewhere it's being stored that then could be accessed by someone. So actually, now, as I'm talking this through, that does help alleviate my concern about this a bit. [laughs] But I still miss it; I still miss being able to just push up my work and then have multiple commits. And I looked into it because I was like, well, maybe I'm misunderstanding something about Gerrit, and there's a way around this. And that's still always a chance. But from the research that I've done, it doesn't seem to be. And there are actually two very fiery takes that I saw that I have to share because they made me laugh. When I was Googling, the question of like, "Can I push up multiple commits to one single Gerrit CR? Or is there just a way to, like, can I have this concept of like a branch and then I have many commits, but then I turn it into one CR? Whatever the world would give me. What do they have? [laughs] I'm laughing just looking at this now. One of the responses was, have you tried squashing your commits into one commit? And I was like, [laughs] "Yeah, that's not what I had in mind, but sure." And then the other one, this is the more fiery take. They were very defensive about Gerrit, and they wrote that "People who don't like Gerrit usually just hack shit together. They cut corners and love squashing commits or throwing away history. And those people hate Gerrit. Developers who care love it. It's definitely possible and easy to produce agile software." And I just...that made me laugh. I was like, cool, I'm a developer that cuts corners and loves squashing commits. [laughs] CHRIS: So you don't care is what that take says. STEPH: I'm a developer who does not care. CHRIS: You know, Steph, I've worked with you for a while. And I've been looking for the opportunity to have this hard conversation with you. But I just wish you cared a little more about the software that you're writing, about the people that you're working with, about the commits that you're authoring. I just see it in every facet of your work. You just don't care. To be very clear for anyone listening at home, that is the deepest of sarcasm that I can make. Steph cares so very much. It's one of the things that I really enjoy about you. STEPH: I mean, we had the episode about toxic traits. 
This would have been the perfect time to confront me about my lack of caring about software and the processes that we have. So winding down on that saga, it seems to be the answer is no, friend; I cannot push up multiple commits. Oh, I tried to hack it. I am someone that tries to hack shit together because I tried to get around it just to see what would happen. [laughs] Because the docs had suggested that each change is identified by a Change-Id. And I was like, hmm, so what if there were two commits that had the same Change-Id, would Gerrit treat those as patch sets? Because right now, when you push up a change, you can see all the different patch sets, so that's nice. So that's a nice feature of Gerrit as you can see the history of, like, someone pushed up this change. They took in some feedback. They pushed up a new change. And so that history is there for each push that someone has provided. And I wondered maybe if they had the same Change-Id that then the patch sets would show the first commit and then the second commit. And so I manually altered the commits two of them to reference the same Change-Id. And I have to say, Gerrit was on to me because they gave me a very nice error message that said, "Same Change-Id and multiple changes. Squash the commits with the same Change-Ids or ensure Change-Ids are unique for each commit. And I thought, dang, Gerrit, you saw me coming. [laughs] So that didn't work either. I'm still in a world of where I now wait. I wait until I'm ready for someone to review stuff, and I have to squash everything, and then I go comment on my CRs to help out reviewers. CHRIS: I really like the emotional backdrop that you provided here where you're spending a minute; you're like, you know what? Maybe it's me. And there's the classic Seymour Skinner principle from The Simpsons. Am I out of touch? No, it's the children who are wrong. [laughs] And I liked that you took us on a whole tour of that. You're like, maybe it's me. I'll maybe read up. Nope, nope. So yeah, that's rough. There's a really interesting thing of tools constraining you. And then sometimes being like, I'm just going to yield control and back away and accept this thing that doesn't feel right to me. Like, Prettier does a bunch of stuff that I really don't like. It shapes code in a way, and I'm just like, no, that's not...nope, you know what? I've chosen to never care about this again. And there's so much utility in that choice. And so I've had that work out really well. Like with Prettier, that's a great example whereby yielding control over to this tool and just saying, you know what? Whatever you produce, that is our format; I don't care. And we're not going to talk about it, and that's that. That's been really useful for myself and for the teams that I'm on to just all kind of adopt that mindset and be like, yeah, no, it may not be what I would choose but whatever. And then we have nice formatted code; it's great. It happens automatically, love it. But then there are those times where I'm like; I tried to do that because I've had success with that mindset of being like, I know my natural thing is to try and micromanage and control every little bit of this code. But remember that time where it worked out really well for me to be like, I don't care, I'm just going to not care about this thing? And I try to not care about some stuff, which it sounds like that's what you're doing right here. [laughs] And you're like, I tried to not care, but I care. I care so much. 
And now you're in that [chuckles] complicated space. So I feel for you, Steph. I'm sorry you're in that complicated space of caring so much and not being able to turn that off [laughs] nor configure the software to do the thing you want. STEPH: I appreciate it. I should also share that the team that I'm working with they also don't love this. Like, they don't love Gerrit. So when I shared in the Slack channel my dear Gerrit message, they're both like, "Yeah, we feel you. [laughs] Like, we're in the same spot," which was also helpful because I just wanted to validate like, this is the pain I'm feeling. Is someone else doing something clever or different that I just don't know about? And so that was very helpful for them to say, "Nope, we feel you. We're in the same spot. And this is just the state that we're in." I think they have started transitioning some other repos over to GitLab and have several repos in Gitlab, but this one is still currently using Gerrit. So they very much commiserate with some of the things that I'm feeling and understand. And this does feel like one of those areas where I do care deeply. And frankly, this is one of those spaces that I do care about, but it's also like, I can work around it. There are some reasonable things that I can do, and it's fine as we just talked through. Like, the fact that my commits are not just locally on my machine already makes me feel better now that I've really processed that. So there are lower risks. It is more of just like a workflow. It's just, you know, it's crushing my work vibe. CHRIS: Harshing your buzz. STEPH: In the great words of Queen Elsa, I gotta let it go. This is the thing I'm letting go. So that's kind of what's going on in my world. What else is going on in your world? CHRIS: Well, first and foremost, fantastic reference and segue. I really liked that. But yeah, let's see, [laughs] what else is going on in my world? We had an interesting thing happen last week. So we had an outage on the platform last week. And then we had an incident review today, so a formal sort of post-mortem incident review. There are a couple of different names that folks have given to these. But this is a practice that we want to build within our engineering culture is when stuff goes wrong, we want to make sure that we have meaningful conversations around to try to address the root causes. Ideally, blameless is a word that gets used often in this context. And I've heard folks sort of take either side of that. Like, it's critical that it's blameless so that it doesn't feel like it's an attack. But also, like, I don't know, if one person did something, we should say that. So finding that gentle middle ground of having honest, real conversations but in a context of safety. Like, we're all going to make mistakes. We're all going to ship bugs; let's be clear about that. And so it's okay to sort of...anyway, that's about the process. We had an outage. The specific outage was that we have introduced a new process. This is a Sidekiq process to work off a specific queue. So we wanted that to have discrete treatment. That had been running, and then it stopped running; we still don't know why. So we never got to the root-root cause. Well, we know what the mechanism was, which was the dyno count for that process was at zero. And so, eventually, we found a bunch of jobs backed up in the Sidekiq admin. We're like, that's weird. And then, we went over to Heroku's configuration dashboard. And we saw, huh, that's weird. There are zero dynos processing this. 
That wasn't true yesterday. But unfortunately, Heroku doesn't log or have an audit trail around changes to those process counts. It's just not available. So that's unfortunate. And then the actual question of like, how did this happen? It probably had to be someone on the team. So there is like, someone did a thing. But that is almost immaterial because, again, people are going to do things, bugs will get shipped, et cetera. So the conversation very quickly turned to observability and understanding. I think we've done a pretty good job of instrumenting error reporting and being quite responsive to that, making sure the signal-to-noise ratio is very actionable. So if we see a bug or a Sentry alert come through, we're able to triage that pretty quickly, act on it where it is a real bug, understand where it's a bit of noise in the system, that sort of thing. But in this case, there were no errors. There was no Sentry. There was nothing; there was the absence of something. And so it was this really interesting case of that's where observability, I think, can really come in and help. So the idea of what can we do here? Well, we can monitor the count of jobs backed up in Sidekiq queue. That's one option. We could do some threshold alerting around the throughput of processed events coming from this other backend. There are a bunch of different ways, but it basically pushed us in the direction of doubling down and reinforcing the foundation of our observability within the platform. So we're just kicking that mini-project off now, but it is something we're like, yeah, we feel like we could add some here. In particular, we recently added Datadog to the stack. So we now have Datadog to aggregate our logs and ideally do some metric analysis, those sort of things, build some dashboards, et cetera. I haven't explored Datadog much thus far. But my sense is they've got the whiz-bang things that we need here. But yeah, it was an interesting outage. That wasn't fun. The incident conversation was actually a good conversation as a team. And then the outcome of like, how do we double down on observability? I'm actually quite excited for. STEPH: This is a fun moment for me because I have either joined teams that didn't have Datadog or have any of that sort of observability built into their system or that sort of dashboard that people go to. Or I've joined teams, and they already have it, and then nobody or people rarely look at it. And so I'm always intrigued between like what's that catalyst that then sparked a team to then go ahead and add this? And so I'm excited to hear you're in that moment of like, we need more observability. How do we go about this? And as soon as you said Datadog, I was like, yeah, that sounds nice because then it sounds like a place that you can check on to make sure that everything is still running. But then there's still also that manual process where I'm presuming unless there's something else you have in mind. There's still that manual process of someone has to check the dashboard; someone then has to understand if there's no count, no squiggly lines, that's a bad thing and to raise a concern. So I'm intrigued with my own initial reaction of, like, yeah, that sounds great. But now I'm also thinking about it still adds a lot of...the responsibility is still on a human to think of this thing and to go check it. Versus if there's something that gets sent to someone to alert you and say like, "Hey, this queue hasn't been processed in 48 hours. 
There may be a concern that actually feels nicer." It feels safer. CHRIS: Oh yeah, definitely. I think observability is this category of tools and workflows and whatnot. But I think what you're describing of proactive alerting that's the ideal. And so it would be wonderful if I never had to look at any of these tools ever. And I just knew if I got, let's say, it's PagerDuty connected up whatever, and I got a push notification from PagerDuty saying, "Hey, go look at this thing." That's all I ever need to think about. It's like, well, I haven't gotten a PagerDuty in a while, so everything must be fine, and having a deep trust in that. Similar to like, if we have a great test suite and it's green, I feel confident deploying the sort of absence of an alert being the thing that I can trust. But right now, we're early enough in this journey that I think what we need to do is stand up a bunch of these different graphs and charts and metric analysis and aggregations and whatnot, and then start to squint at it for a while and be like, which of these would I be really concerned if it started to wibble? And then you can figure the alerting around said wibble rate. And that's the dream. That's where we want to get to, but I think we've got to crawl, walk, run on this. So it'll be an adventure. This is very much the like; we're starting a thing. I'll tell you about it more when we've done it. But what you're describing is exactly what we want to get to. STEPH: I love wibble rate. That's my new measurement I'm going to start using for everything. It's funny, as you're bringing this up, it's making me think about the past week that Joël Quenneville and I have had with our client work. Because a somewhat similar situation came up in regards where something happened, and something was broken. And it seemed it was hard to define exactly what moment caused that to break and what was going on. But it had a big impact on the team because it essentially meant none of the bills were going through. And so that's a big situation when you got 100-plus people that are pushing up code and expecting some of the build processes to run. But it was one of those that the more we dug into it, the more it seemed very rare that it would happen. So, in this case, as a sort of a juxtaposition to your scenario, we actually took the opposite approach of where we're like; this is rare. But we did load up a lot of contexts. Actually, I was thinking back to the advice that you gave me in a previous episode where I was talking about at what point do you dig in versus try to stay at surface level? And this was one of those, like, we've spent a couple of days on getting context for this and understanding. So it felt really important and worthwhile to then invest a little bit more time to then document it. But then we still went with the simplest approach of like, this is weird. It shouldn't happen again. We think we understand it but then let's add a little bit of documentation or wiki page around like, hey, if you do run into this, here are some steps that will fix everything. And then, if you need to use this, let somebody know because this is so odd it shouldn't happen. So we took that approach in this case where we didn't increase the observability. It was more like we provided a fire extinguisher very close to the location in case it happens. And so that way, it's there should the need arise, but we're hoping it just never gets used. We're also in the process of changing how a lot of that logic works. 
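As a sketch of the Sidekiq-side monitoring Chris mentions, counting the jobs backed up in a queue and pushing that number somewhere that can alert, a small periodic job is one possible shape. This assumes the dogstatsd-ruby gem and standard Sidekiq APIs and is purely illustrative, not what either team actually built.

```ruby
require "sidekiq/api"
require "datadog/statsd"

# Illustrative only: report each queue's depth and latency so a Datadog
# monitor can alert when a queue quietly stops being worked, for example
# when its process count drops to zero and nothing ever raises an error.
class QueueDepthReporterJob
  include Sidekiq::Job

  def perform
    statsd = Datadog::Statsd.new("localhost", 8125)

    Sidekiq::Queue.all.each do |queue|
      tags = ["queue:#{queue.name}"]
      statsd.gauge("sidekiq.queue.size", queue.size, tags: tags)
      statsd.gauge("sidekiq.queue.latency_seconds", queue.latency, tags: tags)
    end

    statsd.close
  end
end
```

Scheduled every minute or so (sidekiq-cron is one option), the metrics double as a heartbeat: stale data points are themselves a signal worth alerting on.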
So we didn't really want to optimize for observability into a system that is actively being changed because it should look very different in upcoming months. But overall, I love the conversations that you bring about observability, and I'm excited to hear about what wibble rates you decide to add to your Datadog dashboard. CHRIS: There's a delicate art and science to the selection of the wibble rates. So I will certainly report back as we get into that work. But with that, shall we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
341: Fundamentals and Weird Stuff

The Bike Shed

Play Episode Listen Later Jun 7, 2022 35:27


Steph and Chris are recording together! Like, in the same room, physically together. Chris talks about slowly evolving the architecture in an app they're working on and settling on directory structure. Steph's still working on migrating unit tests over to RSpec. They answer a listener question: "As senior-level developers, how do you set goals to ensure that you keep growing?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Faking External Services In Tests With Adapters (https://thoughtbot.com/blog/faking-external-services-in-tests-with-adapters) Testing Third-Party Interactions (https://thoughtbot.com/blog/testing-third-party-interactions) Jen Dary - On Future Goals (https://www.beplucky.com/on-future-goals/) Charity Majors - The Engineer Manager Pendulum (https://charity.wtf/2017/05/11/the-engineer-manager-pendulum/) Charity Majors Bike Shed Episode (https://www.bikeshed.fm/302) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What is new in my world? Actually, this episode feels different. There's something different about it. I can't quite put my finger on it. I think it may be that we're actually physically in the same room recording for the first time in two years and a little bit more, which is wild. STEPH: I can't believe it's been that long. I feel like it wasn't that long ago that we were in The Bike Shed...oh, I said The Bike Shed studio. I'm being very biased. Our recording studio [laughs] is the more proper description for it. Yeah, two and a half years. And we tried to make this happen a couple of months ago when I was visiting Boston, and then it just didn't work out. But today, we made it. CHRIS: Today, we made it. Here we are. So hopefully, the audio sounds great, and we get all that more richness in conversation because of the physical in-person manner. We're trying it out. It'll be fun. But let's see, to the normal tech talk and nonsense, what's new in my world? So we've been slowly evolving the architecture in the app that we're working on. And we settled on something that I kind of like, so I wanted to talk about it, directory structure, probably the most interesting topic in the world. I think there's some good stuff in here. So we have the normal stuff. There are app models, app controllers, all those; those make sense. We have app jobs which right now, I would say, is in a state of flux. We're in the sad place where some things are application records, and some things are Sidekiq workers. We have made the decision to consolidate everything onto Sidekiq workers, which is just strictly more powerful as to the direction we're going to go. But for right now, I'm not super happy with the state of app jobs, but whatever, we have that. But the things that I like so we have app commands; I've talked about app commands before. Those are command objects. They use dry-rb do notation, and they allow us to sequence a bunch of things that all may fail, and we can process them all in a much more reasonable way. It's been really interesting exploring that, building on it, introducing it to new developers who haven't worked in that mode before.
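For anyone who hasn't seen the dry-rb do notation style of command object being described here, a rough sketch with made-up names follows. It assumes the dry-monads gem and is only meant to show the shape, not the actual code from the app.

```ruby
require "dry/monads"

# Each step returns Success(...) or Failure(...); `yield` unwraps a Success
# and short-circuits the whole command on the first Failure it hits.
class OpenAccountCommand
  include Dry::Monads[:result, :do]

  def call(params)
    user    = yield create_user(params)
    account = yield create_account(user)

    Success(account)
  end

  private

  def create_user(params)
    user = User.new(params)
    user.save ? Success(user) : Failure([:invalid_user, user.errors])
  end

  def create_account(user)
    account = user.build_account
    account.save ? Success(account) : Failure([:invalid_account, account.errors])
  end
end
```

The caller gets back a single Success or the first Failure and can branch on that, which is the "sequence a bunch of things that all may fail" part.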
And everyone who's come into the project has both picked it up very quickly and enjoyed it, and found it to be a nice expressive mode. So app commands very happy with that. App queries is another one that we have. We've talked about this before, query objects. I know we're a big fan. [laughs] I got a golf clap across the room here, which I could see live in person. It was amazing. I could feel the wind wafting across the room from the golf clap. [chuckles] But yeah, query objects, they're fantastic. They take a relation, they return a relation, but they allow us to build more complex queries outside of our models. The new one, here we go. So this stuff would all normally fall into app services, which services don't mean anything. So we do not have an app services directory in our application. But the new one that we have is app clients. So these are all of our HTTP clients wrapping external third parties that we're interacting with. But with each of them, we've taken a particular structure, a particular approach. So for each of them, we're using the adapter pattern. There's a blog post on the Giant Robots blog that I can point to that sort of speaks to the adapter pattern that we're using here. But basically, in production mode, there is an HTTP backend that actually makes the real requests and does all that stuff. And in test mode, there is a test backend for each of these clients that allows us to build up a pretty representative fake, and so we're faking it up before the HTTP layer. But we found that that's a good trade-off for us. And then we can say, like, if this fake backend gets a request to /users, then we can respond in whatever way that we want. And overall, we found that pattern to be really fantastic. We've been very happy with it. So it's one more thing. All of them were just gathering in-app models. And so it was only very recently that we said no, no, these deserve their own name. They are a pattern. We've repeated this pattern a bunch. We like this pattern. We want to even embrace this pattern more, so long live app clients. STEPH: I love it. I love app clients. It's been a while since I've been on a project that had that directory. But there was a greenfield project that I was working on. I think it might have been I was working with Boston.rb and working on giving them a new site or something like that and introduced app clients. And what you just said is perfect in terms of you've identified a pattern, and then you captured that and gave it its own directory to say, "Hey, this is our pattern. We've established it, and we really like it." That sounds awesome. It's also really nice as someone who's new to a codebase; if I jump in and if I look at app clients, I can immediately see what are the third parties that we're working with? And that feels really nice. So yeah, that sounds great. I'm into it. CHRIS: Yeah, I think it really was the question of like, is this a pattern we want to embrace and highlight within the codebase, or is this sort of a duplication but irrelevant like not really that important? And we decided no, this is a thing that matters. We currently have 17 of these clients, so 17 different third-party external things that we're integrating with. So for someone who doesn't really like service-oriented architecture, I do seem to have found myself in a place. But here we are, you know, we do what we have to with what we're given. But yes, 17 and growing our app clients. STEPH: That is a lot. [laughs] My eyes widened a bit when you said 17. 
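A rough sketch of what one of those app/clients entries could look like with the adapter pattern, loosely in the spirit of the Giant Robots post mentioned above; the client name, the endpoints, and the Faraday usage are all illustrative assumptions rather than the real code.

```ruby
require "faraday"
require "json"

# The client's public interface stays the same everywhere; only the backend
# it delegates to differs between production and test.
class BankingClient
  class HttpBackend
    def get(path)
      # Real HTTP call in production.
      JSON.parse(Faraday.get("https://api.example-bank.test#{path}").body)
    end
  end

  class TestBackend
    def initialize
      @stubs = {}
    end

    # Specs register canned responses above the HTTP layer, e.g.
    #   backend.stub("/users", [{ "id" => 1 }])
    def stub(path, response)
      @stubs[path] = response
    end

    def get(path)
      @stubs.fetch(path) { raise "no stub registered for #{path}" }
    end
  end

  class << self
    attr_writer :backend

    def backend
      @backend ||= HttpBackend.new
    end

    def users
      backend.get("/users")
    end
  end
end
```

A test suite would swap in the fake with BankingClient.backend = BankingClient::TestBackend.new, which is the "faking it up before the HTTP layer" trade-off described above.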
I'm curious because you highlighted that app services that's not really a thing. Like, it doesn't mean anything. It doesn't have the same meaning of the app queries directory or app commands or app clients where it's like, this is a pattern we've identified, and named, and want to propagate. For app services, I agree; it's that junk drawer. But I guess in some ways...well, I'm going to say something, and then I'm going to decide how I feel about it. That feels useful because then, if you have something but you haven't established a pattern for it, you need a place for it to go. It still needs to live somewhere. And you don't necessarily want to put it in app models. So I'm curious, where do you put stuff that doesn't have an established pattern yet? CHRIS: It's a good question. I think it's probably app models is our current answer. Like, these are things that model stuff. And I'm a big believer in the it doesn't need to be an application record-backed object to go on app models. But slowly, we've been taking stuff out. I think it'd be very common for what we talk about as query objects to just be methods in the respective application record. So the user record, as a great example, has all of these methods for doing any sort of query that you might want to do. And I'm a fan of extracting that out into this very specific place called app queries. Commands are now another thing that I think very typically would fall into the app services place. Jobs naming that is something different. Clients we've got serializers is another one that we have at the top level, so those are four. We use Blueprinter within the app. And again, it's sort of weird. We don't really have an API. We're using Inertia. So we are still serializing to JSON across the boundary. And we found it was useful to encapsulate that. And so we have serializers as a directory, but they just do that. We do have policies. We're using Pundit for authorization, so that's another one that we have. But yeah, I think the junk drawerness probably most goes to app models. But at this point, more and more, I feel like we have a place to put things. It's relatively clear should this be in a controller, or should this be in a query object, or should this be in a command? I think I'm finding a place of happiness that, frankly, I've been searching for for a long time. You could say my whole life I've been searching for this contented state of I think I know where stuff goes in the app, mostly, most of the time. I'm just going to say this, and now that you've asked the excellent question of like, yeah, but no, where are you hiding some stuff? I'm going to open up models. Next week I'm going to be like, oh, I forgot about all of that nonsense. But the things that we have defined I'm very happy with. STEPH: That feels really fair for app models. Because like you said, I agree that it doesn't need to be ActiveRecord-backed to go on app models. And so, if it needs to live somewhere, do you add a junk drawer, or do you just create app models and reuse that? And I think it makes a lot of sense to repurpose app models or to let things slide in there until you can extract them and let them live there until there's a pattern that you see. CHRIS: We do. There's one more that I find hilarious, which is app lib, which my understanding...I remember at one point having one of those afternoons where I'm just like, I thought stuff works, but stuff doesn't seem to work. I thought lib was a directory in Rails apps. 
And it was like, oh no, now we autoload only under the app. So you should put lib under app. And I was just like, okay, whatever. So we have app lib with very little in it. [laughs] But that isn't so much a junk drawer as it is stuff that's like, this doesn't feel specific to us. This goes somewhere else. This could be extracted from the app. But I just find it funny that we have an app lib. It just seems wrong. STEPH: That feels like one of those directories that I've just accepted. Like, it's everywhere. It's like in all the apps that I work in. And so I've become very accustomed to it, and I haven't given it the same thoughtfulness that I think you have. I'm just like, yeah, it's another place to look. It's another place to go find some stuff. And then if I'm adding to it, yeah, I don't think I've been as thoughtful about it. But that makes sense that it's kind of silly that we have it, and that becomes like the junk drawer. If you're not careful with it, that's where you stick things. CHRIS: I appreciate you're describing my point of view as thoughtfulness. I feel like I may actually be burdened with historical knowledge here because I worked on Rails apps long, long ago when lib didn't go in-app, and now it does. And I'm like, wait a minute, but like, no, no, it's fine. These are the libraries within your app. I can tell that story. So, again, thank you for saying that I was being thoughtful. I think I was just being persnickety, and get off my lawn is probably where I was at. STEPH: Oh, full persnicketiness. Ooh, that's tough to say. [laughs] CHRIS: But yeah, I just wanted to share that little summary, particularly the app clients is an interesting one. And again, I'll share the adapter pattern blog post because I think it's worked really well for us. And it's allowed us to slowly build up a more robust test suite. And so now our feature specs do a very good job of simulating the reality of the world while also dealing with the fact that we have these 17 external situations that we have to interact with. And so, how do you balance that VCR versus other things? We've talked about this a bunch of times on different episodes. But app clients has worked great with the adapter pattern, so once more, rounding out our organizational approach. But yeah, that's what's up in my world. What's up in your world? STEPH: So I have a small update to give. But before I do, you just made me think of something in regards to that article that talks about the adapter pattern. And there's also another article that's by Joël Quenneville that's testing third-party interactions. And he made me reflect on a time where I was giving the RSpec course, and we were talking about different ways to test third-party interactions. And there are a couple of different ways that are mentioned in this article. There are stub methods on adapter, stub HTTP request, stub request to fake adapter, and stub HTTP request to fake service. All that sounds like a lot. But if you read through the article, then it gives an example of each one. But I've found it really helpful that if you're in a space that you still don't feel great about testing third-party interactions and you're not sure which approach to take; if you work with one API and apply all four different strategies, it really helps cement how to work through that process and the different benefits of each approach and the trade-offs. And we did that during the RSpec course, and I found it really helpful just from the teacher perspective to go through each one. 
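To give a flavor of the first of those four strategies, stubbing methods on the adapter looks roughly like this in RSpec. The Newsletter and adapter names are hypothetical; the linked article works through all four variants against a real API.

```ruby
RSpec.describe "subscribing to the newsletter" do
  it "subscribes through the adapter without touching HTTP" do
    # Stub the adapter method the application code calls, so the spec
    # exercises our code while the third party stays completely faked.
    allow(Newsletter.adapter).to receive(:subscribe).and_return(true)

    Newsletter.subscribe("user@example.com")

    expect(Newsletter.adapter).to have_received(:subscribe)
      .with("user@example.com")
  end
end
```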
And there were some great questions and discussions that came out of it. So I wanted to put that plug out there in the world that if you're struggling testing third-party interactions, we'll include a link to this article. But I think that's a really solid way to build a great foundation of, like, I know how to test a third-party app. Let me choose which strategy that I want to use. Circling back to what's going on in my world, I am still working on migrating unit tests over to RSpec. It's a thing. It's part of the work that I do. [laughs] I can't say it's particularly enjoyable, but it will have a positive payoff. And along that journey, I've learned some things or rediscovered some things. One of them is read expectations very carefully. So when I was migrating a test over to RSpec, I read it as where we expected a record to exist. The test was actually testing that a record did not exist. And so I probably spent an hour understanding, going through the code being like, why isn't this record getting created like I expect it to? And I finally went back, and I took a break, and I went back. And I was like, oh, crap, I read the expectation wrong. So read expectations very carefully. The other one...this one's not learned, but it is reinforced. Mystery Guests are awful. So as I've been porting over the behavior over to RSpec, the other tests are using fixtures, and I'm moving that over to use factorybot instead. And at first, I was trying to be minimal with the data that I was bringing over. That failed pretty spectacularly. So I have learned now that I have to go and copy everything that's in the fixtures, and then I move it over to factorybot. And it's painful, but at least then I'm doing that thing that we talked about before where I'm trying to load as little context as possible for each test. But then once I do have a green test, I'm going back through it, and I'm like, okay, we probably don't care about when you were created. We probably don't care when you're updated because every field is set for every single record. So I am going back and then playing a game of if I remove this line, does the test still pass? And if I remove that line, does the test still pass? And so far, that has been painful, but it does have the benefit of then I'm removing some of the setup. So Mystery Guests are very painful. I've also discovered that custom error messages can be tricky because I brought over some tests. And some of these, I'm realizing, are more user error than anything else. Anywho, I added one of the custom error messages that you can add, and I added it over to RSpec. But I had written an incorrect, invalid statement in RSpec where I was looking...I was expecting for a record to exist. But I was using the find by instead of where. So you can call exist on the ActiveRecord relation but not on the actual record that gets returned. But I had the custom error message that was popping up and saying, "Oh, your record wasn't found," and I just kept getting that. So I was then diving through the code to understand again why my record wasn't found. And once I removed that custom error message, I realized that it was actually because of how I'd written the RSpec assertion that was wrong because then RSpec gave me a wonderful message that was like, hey, you're trying to call exist on this record, and you can't do that. Instead, you need to call it on a relation. 
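Spelled out in code, the mistake and the fix look something like this, with a hypothetical model and column name:

```ruby
RSpec.describe "redeeming a code" do
  it "finds the persisted code" do
    RedemptionCode.create!(code: "SAVE20") # create(:redemption_code, ...) with factory_bot

    # What I had: find_by returns a single record (or nil), which doesn't
    # respond to exists?, so the exist matcher can't do anything with it.
    #   expect(RedemptionCode.find_by(code: "SAVE20")).to exist
    #
    # What works: a `where` relation responds to exists?, which is what
    # RSpec's exist matcher calls under the hood.
    expect(RedemptionCode.where(code: "SAVE20")).to exist
  end
end
```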
So I've also learned don't bring over custom error messages until you have a green test, and even then, consider if it's helpful because, frankly, the custom error message wasn't that helpful. It was very similar to what RSpec was going to tell us in general. So there was really no need to add that custom step to it. For the final bit that I've learned or the hurdle that I've been facing is that migrating tests descriptions are hard unless they map over. So RSpec has the context and then a description for it that goes with the test. Test::Unit has methods like method names instead. So it may be something like test redemption codes, and then it runs through the code. Well, as I'm trying to migrate that knowledge over to RSpec, it's not clear to me what we're testing. Okay, we're testing redemption codes. What about them? Should they pass? Should they fail? Should they change? What are they redeeming? There's very little context. So a lot of my tests are copying that method name, so I know which tests I'm focused on, and I'm bringing over. And then in the description, it's like, Steph needs help adding a test description, and then I'm pushing that up and then going to the team for help. So they can help me look through to understand, like, what is it that this test is doing? What's important about this domain? What sort of terminology should I include? And that has been working, but I didn't see that coming as part of this whole migrating stuff over. I really thought this might be a little bit more of a copypasta job. And I have learned some trickery is afoot. And it's been more complicated than I thought it was going to be. CHRIS: Well, at a minimum, I can say thank you for sharing all of your hard-learned lessons throughout this process. This does sound arduous, but hopefully, at the end of it, there will be a lot of value and a cleaned-up test suite and all of those sorts of things. But yeah, it's been an adventure you've been on. So on behalf of the people who you are sharing all of these things with, thank you. STEPH: Well, thank you. Yeah, I'm hoping this is very niche knowledge that there aren't many people in the world that are doing this exact work that this happens to be what this team needs. So yeah, it's been an adventure. I've certainly learned some things from it, and I still have more to go. So not there yet, but I'm also excited for when we can actually then delete this portion of the build process. And then also, I think, get rid of fixtures because I didn't think about that from the beginning either. But now that I'm realizing that's how those tests are working, I suspect we'll be able to delete those. And that'll be really nice because now we also have another single source of truth in factory_bot as to how valid records are being built. Mid-Roll Ad: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start? 
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/ bikeshed. That's buildpulse.io/bikeshed. Pivoting just a bit, there's a listener question that I'm really excited for us to dive into. And this listener question comes from Joël Quenneville. Hey, Joël. All right, so Joël writes in, "As senior-level developers, how do you set goals to ensure that you keep growing? How do you know what are high-value areas for you to improve? How do you stay sharp? Do you just keep adding new languages to your tool belt? Do you pull back and try to dig into more theoretical concepts? Do you feel like you have enough tech skills and pivot to other things like communication or management skills? At the start of a dev career, there's an overwhelming list of things that it feels like you need to know all at once. Eventually, there comes a point where you no longer feel like you're drowning under the list of things that you need to learn. You're at least moderately competent in all the core concepts. So what's next?" This is a big, fun, scary question. I really like this question. Thank you, Joël, for sending it in because I think there's so much here that can be discussed. I can kick us off with a few thoughts. I want to first highlight one of the things that...or one of the things that resonates with me from this question is how Joël describes going through and reaching senior status how it can really feel like working through a backlog of features. So as a developer, I want to understand this particular framework, or as a developer, I want to be able to write clear and fast tests, or as a developer, I want to contribute to an open-source project. But now that that backlog is empty, you're wondering what's next on your roadmap, which is where I think that sort of big, fun scariness comes into play. So the first idea is to take a moment and embrace that success. You have probably worked really hard to get where you're at in your career. And there's nothing wrong with taking a pause and enjoying the view and just being appreciative of the fact that you are able to get your work done quickly or that you feel very confident in the work that you're doing. Growth is often very important to our careers, but I also think it's important to recognize when you've achieved certain growth and then, if you want to, just enjoy that and pause. And you're not constantly pushing yourself to the next level. I think that is a totally reasonable and healthy thing to do. 
The second thing that comes to mind is that you're in Choose Your Own Adventure mode now. I would encourage folks, once you've reached this stage, to reflect on where you're at and consider what is your dream? What are your aspirations? Maybe they're related to tech; maybe they're not. And consider where is it that you want to go next? And then, what are the concrete steps that will help you achieve those goals? So there's a really great article by Jen Dary, who's a career coach and owner of Plucky Manager Training, that describes this process, and I'll be sure to include a link to it in the show notes. She has a couple of great questions that will then help you identify, like, what are my goals? Some of those questions are, "If I could do anything and money wasn't an object, I'd spend my time doing dot dot dot." And that doesn't necessarily mean sitting on a beach with your toes in the sand all day. I mean, it could, but then that probably just means you need a vacation. So take the vacation. And then, once you start to get bored, where does your mind start to wander? What are the things that you want to do? Where are you interested in spending your time? And then, once you have an idea of how you'd like to spend your time, you can consider what actions you could take next that will point you in that direction. There's also the benefit that by this point, you probably have an idea of the type of things that you like to do and where you like to spend your time. And so you can figure out which areas of expertise you want to invest in. So do you like more greenfield projects? Do you like architecture discussions? Do you like giving talks? Do you like teaching? Or maybe you're interested in management. I think there's also a more concrete approach that you can take: you can just talk to the managers on your team and say, "Hey, what big, hard problems are you looking to solve?" And then, you can get some inspiration from them and see if their problems align with your interests. Maybe it's not even your own team, but you can talk to other companies and see what other problems industries are trying to solve. That might be an area that then spurs some curiosity or some interest. And then, where do you feel underutilized? So with your current day-to-day, are there areas where you feel that you wish you had more responsibilities or more opportunities, but you feel like you don't have access to those opportunities? Maybe that's an area to explore as well. This feels like a wonderful coaching question in terms of you have done it; congratulations. You've reached a really great spot in your career. And so now you're figuring out that big next step. This is going to be highly customized to each person, figuring out what it is that's going to help you feel fulfilled over the next five years, ten years, however long you want to project out. Those are some of my thoughts. How about you? What do you think? CHRIS: Well, first, those are some great thoughts. I appreciate that I get to follow them now. It's going to be a hard act to follow. But yeah, I think Joël has asked a fantastic question. And coming from Joël, I know how intentional and thoughtful a learner, and sharer, and teacher he is, all of those things. So it's all the more sort of framed in that for me, knowing Joël personally.
I think to start, the kernel of the question is as senior developers, that's the like...or senior level developers is the way Joël phrased it, but it treats it as sort of this discrete moment in time, which I think there's maybe even something to unpack. And I think we've probably talked about this in previous episodes, but like, what does that even mean? And I think part of the story here is going from reactive where it's like, I don't know how anything works. I know a little bit. I can code some. And every day, I'm presented with new problems that I just don't understand. And I'm trying to build up that base of knowledge. Slowly, you know, you start, and it's like 95% of the time you feel like that. And slowly, the dial switches over, and maybe it's only 25% of the time you feel like that. Somewhere along that spectrum is the line of senior developer. I don't actually know where it is, but it's somewhere in there. And so I think it's that space where you can move from reactive learning things as necessitated by the work that's coming at you to I want to proactively choose the things that I want to be learning to try and expand the stuff that I know, and the ways that I can think about the work without being in direct response to a piece of work coming at me. So with that in mind, what do you do with this proactive space? And I think the way Joël frames the question, again, to what I know of Joël, he's such an intentional person. And I wouldn't be surprised if Joël is very purposeful and thinks about this and has approached it as a specific thing that he's doing. I have certainly been in more of “I'll figure it out when I get there.” I'll explore. Or actually, probably the most pointed thing that I did was I joined a consultancy. And that was a very purposeful choice early on in my career because I'm like, I think I know a little bit. I don't think I know a lot. I would like to know a lot. That seems fun. So what do I think is the best way to do that? My guess, and it turned out to be very much true, is if I join a consultancy, I'm going to see a bunch of different projects, different types of technologies, organizations, communication structure, stuff that works, stuff that doesn't work. And to be honest, I actually thought I would try out the consultancy thing for a little while, like a year or two, and then go on to my next adventure. Spoiler alert: I stayed for seven years. It was one of the best periods of my professional life. And I found it to be a much deeper well than I expected it to be. But for anyone that's looking for, like, how can I structure my career in a way that will just automatically provide the sort of novelty and space to grow? I would highly recommend a consultancy like thoughtbot. I wonder if they're hiring. STEPH: Well, yes, we are hiring. That was a perfect plug that I wasn't expecting for that to come. But yes, thoughtbot is totally hiring. We'll include a link in the show notes to all the jobs. [laughs] CHRIS: Sounds fantastic. But very sincerely, that was the best choice that I could have made and was a way to flip the situation around such that I don't have to be thinking about what I want to be learning. The learning will come to me. But even within that, I still tried to be intentional from time to time. And I would say, again, I don't have a holistic theory of how to improve. I just have some stuff that's kind of worked out well. 
One thing is focusing on fundamentals wherever I can, or a different way to put it is giving myself permission to spend a little bit more time whenever my work brushes up against what I would consider fundamentals. So things that are in that space are like SQL. Every time I'm working on something, I'm like, ah, I could use like a CROSS JOIN here, but I don't know what that is. Maybe I'll spend an extra 30 minutes Googling around and trying to figure out what a CROSS JOIN is. Is that a thing? Is a CROSS JOIN a real thing? I may be making it up. [laughs] A window function, I know that those are real. Maybe I'll learn what a window function is. I think a CROSS JOIN is a real thing. A LEFT OUTER JOIN that's a cool thing. And so, each time I've had that, SQL has been something that expanding my knowledge; I've continually felt like that was a good investment. Or fundamentals of HTTP, that's another one that really has served me well. With Ruby being the primary language that I program in, deeply understanding the language and the fundamentals and the semantics of it that's another place that has been a good investment. But by contrast, I would say I probably haven't gone as deep on the frameworks that I work with. So Rails is maybe a little bit different just because, like many people, I came to Ruby through Rails, and I've learned a lot of Rails. But like in JavaScript, I've worked with many different JavaScript frameworks. And I have been a little more intentional with how much time I invest into furthering my skills in them because I've seen them change and evolve enough times. And if anything, I'm trying to look for ones that are like, what if it's less about the framework and more about JavaScript and web fundamentals underneath? Thus, I've found myself in Svelte land. But I think it's that choice of trying to anchor to fundamentals wherever possible. And then I would say the other thing that's been really beneficial for me is what can I do that's wildly outside the stuff that I already know? And so probably the most pointed example I have of this is learning Elm. So I previously spent most of my career working in Ruby and JavaScript, so primarily object-oriented languages without a strong type system. And then, I was able to go over and experience this whole different paradigm way of working, way of structuring programs, feedback loop. There was so much about it that was really, really interesting. And even though I don't get to work in Elm, frankly, as much as I would want at this point or really at all, it informs everything that I do moving forward. And I think that falls out of the fact that it was so different than what I was doing. So if I were to do that again, probably the next type of language I would learn is Lisp because those are like, well, that's a whole other category of thing that I've heard about. People say some fun stuff about them, but I don't really know. So it's that fundamentals and weird stuff is how I would describe it. And by weird, I mean outside of the core base of knowledge that I have. STEPH: I love that categorization, and I'll stick with it, fundamentals and weird stuff, to stretch and grow and find some other areas. I also really like your framing, the reactive versus proactive. I think that's a really nice way to put it because so much of your career is you are just learning what your company needs you to learn, or you're learning what you need to keep progressing and to feel more competent with the types of features or the work that you're handling. 
And I think that's why Jen Dary's blog post resonates with me so much because it's probably...up until now, a lot of someone's career, maybe not Joël's particularly, but I know probably for my career, a lot of it has been reactive in terms of what are the things that I need to learn? And so then once you reach that point of like, okay, I feel competent and reasonably good at all the things that I needed to know, where do I want to go next? And rather than focus on necessarily the plans that are laid out in front of me, I can then go wide and think about what are some of the bigger things that I want to tackle? What are the things that are meaningful to me? Because then I can now push forward to this bigger goal versus achieving a particular salary band or title or things like that. But I can focus on something else that I really want to contribute to. And there do seem to be two common paths. So once you reach that level, either you typically go into management, or you become that more like principal and then onward and upward, whatever is after principal. I don't even know what's after that, [laughs] but the titles that come after principal. So there's management, and then I've seen the other very strong contributors, so Aaron Patterson comes to mind. And I feel like those people then typically will migrate to places where they get to contribute to a language or to a framework. And I think it comes down to the idea of impact because both of those provide a greater impact. So if you go into management, you can influence and affect a team of individuals, and you can increase the value created by that team. Then you've likely exceeded the value that you would have created as your own individual contributor. Or, if you contribute to a language or a framework, then your technical decisions impact a larger community. So I think that would be another good thing to reflect on is what type of impact am I looking for? What type of communities do I want to have a positive impact on? And that may spur some inspiration around where you want to go next, the things that you want to focus on. CHRIS: Yeah, I think one of the things we're picking up in that that Joël mentioned in his question is the idea of there is the individual contributor path. But then there's also the management path, which is the typical sort of that's the progression. And I think, for one, naming that the individual contributor path and the idea of going to principal dev and those sorts of outcomes is a fantastic path in and of itself. I think often it's like, well, you know, you go along for a certain amount of time, and then you become a manager. It's like, those are actually distinctly different things. And people have different levels of interest and aptitude in them, and I think recognizing that is critically important. And so, not expecting that management just comes after individual contributor is an important thing. The other thing I'll say is Charity Majors, who we had on the podcast a bit back, has a wonderful blog post about the pendulum swing called The Engineer/Manager Pendulum. And so in it, she talks about folks that have taken an exploration over into manager land and then come back to the individual contributor path or vice versa sort of being able to move between them, treating them as two potentially parallel career tracks but ones that we can move between. 
And her assertion is that often folks that are particularly strong have spent some time in both camps because then you gain this empathy, this understanding of what's the whole picture? How are we doing all of this? How do we think about communication, et cetera? So, again, to name it, like, I think it's totally fine to stay on one of those tracks to really know which of those tracks speaks to you or to even move between them a little bit and to explore it and to find out what's true. So we'll include a link to that in the show notes. And we'll also include a link to the previous episode a while back when we had Charity on. But yeah, those I think are some critical thoughts as well because those are different areas that we can grow as developers and expand on our impact within the team. And so, we want to make sure we have those options on the table as well. STEPH: Absolutely. I love where teams will support individuals to feel comfortable shifting between experiences like that because it does make you a more well-rounded contributor to that product team, not just as an engineer, but then you will also understand what everybody else is working on and be able to have more meaningful conversations with them about the company goals and then the type of work that's being done. So yeah, I love it. If you're in a place that you can maybe fail a little bit, hopefully not in a too painful way, but you can take a risk and say, "Hey, I want to try something and see if I like it," then I think that's wonderful. And take the risk and see how it goes. And just know that you have an exit strategy should you decide that you don't like that work or that type of work isn't for you. But at least now you know a little bit more about yourself. Overall, I want to respond directly to something that Joël highlighted around how do you know what are high-value areas for you to improve? And I think there are two definitions there because you can either let the people around you and your team define that high value for you, and maybe that really resonates with you, and it's something that you enjoy. And so you can go to your manager and say, "Hey, what are some high-value areas where I can make an impact for the team?" Or it could be very personal. And what are the high-value areas for you? Maybe there's a particular industry that you want to work on. Maybe you want to hit the public speaking circuit. And so you define more intrinsically what are those high-value areas for you? And I think that's a good place to start collecting feedback and start looking at what's high value for you personally and then what's high value to the company and see if there's any overlap there. With that, I think we've covered a good variety of things to explore and then highlighted some of the different ways that you and I have also considered this question. I think it's a fabulous question. Also, I think it's one, even if you're not at that senior level, to ask this question. Like, go ahead and start asking it early and often and revisit your answers and see how they change. I think that would be a really powerful habit to establish early in your career and then could help guide you along, and then you can reflect on some of your earlier choices. So, Joël, thank you so much for that question. STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. 
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
340: Solving People Problems with Rob Whittaker

The Bike Shed

Play Episode Listen Later May 31, 2022 50:36


Steph is joined by a very special guest and fellow thoughtboter, Rob Whittaker. ngrok (https://ngrok.com/) Time Off Book (https://www.timeoffbook.com/) Rob's Codespace Setup (https://github.com/purinkle/codespace) Rob Whittaker on Twitter (https://twitter.com/purinkle) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And today, I'm joined by a very special guest and fellow thoughtboter, Rob Whittaker. Rob has been in the software business for the past 15 years and spent the last five and a half years at thoughtbot. Rob is the Director of Software Development for our Europe, Middle East, and Africa team and, in his spare time, likes to hunt down delicious beers and coffee. Rob, welcome to The Bike Shed. It's so lovely to have you on the show today. ROB: Thank you for having me. It's a pleasure to be here. Yeah, thank you for that lovely introduction and my far too complicated job title. It sounds more serious than it actually is. STEPH: Well, you do have a fancy job title, yeah, Director of Software Development. [laughs] ROB: Yeah, it's the added on bit where it's Europe, Middle East, and Africa where I feel like there's about 20 of us maximum. But that sounds more grandiose than it actually is. STEPH: Yeah, that's something that Chris and I haven't dug into too much on previous episodes are all the different teams that we have at thoughtbot. So the shorter way of saying that is Launchpad II, but not everybody knows that. But I'm going to circle back to that because I would love to talk a bit more about that specific team and the dynamic. But before we do that, I'm realizing I'm not familiar with your origin story as to how you came to thoughtbot and then how you became this very fancy grand title of Director of Software Development for Europe, Middle East, and Africa team. ROB: Yeah, there's a bit of history about thoughtbot London as well that kind of ties into this. So before thoughtbot Launchpad II, it was thoughtbot London before we went remote. And initially, we had the plan of setting up a new studio in London to help expand thoughtbot outside of the Americas, but that plan fell through. But he knew some people from another agency called New Bamboo, and so we merged with or acquired that agency, and that agency then became the thoughtbot London team. I'm actually the first hire or...not the first hire, that's not true, the first development hire for the thoughtbot London team that would then become launchpad II. I was at the Bath Ruby Conference six years ago, I guess. And there was just an advert up on the hiring board that Nick Charlton, who's a Senior Developer and Development Team Lead at Launchpad II now, had put up. And I saw it, and I was talking to somebody who was my mentor at the time that I'd worked with at a previous job at onthebeach.co.uk, a guy called Matt Valentine-House who now works at Shopify who, actually, fun fact, his face appears at the top of Ruby Weekly this week. If you open up this week's Ruby Weekly, you can see Matt Valentine-House, who said to me, "Yeah, apply for it, why not? You see what happens." And I was like, "Okay," and just kind of took the leap. So I thought, thoughtbot, why would thoughtbot want me? Which is something I think a lot of people think when they want to join thoughtbot. They think, well, I can't do that. 
But I would implore people to apply. And so, from there, I never really wanted to move to London. I'd always lived in the North West of the UK. I made that leap to London because I wanted to work at thoughtbot. And then, gradually, over time, the London team expanded, and we needed to split out the management roles, and the development director role came up. And I've always enjoyed the coaching side of software development. It seems that you gain more experience as you help people with less experience, and I've always enjoyed coaching. And that was a big part of the role for me. So I was fortunate enough to be allowed to do it. And then, from there, things have grown. Yeah, so it's been a really interesting journey as a development director. The London studio went through a pretty tough time at one point where not long after I became development director that two-thirds of the team, in the space of two weeks, decided to hand their notice in and unbeknownst to each other. And so, all of a sudden, we didn't have a very big team. We didn't have very many prospects, and so it was a tough time. And so it's really nice to look back on the last three years and go, okay, we came through that. We're now one of the stronger teams at thoughtbot. And somebody actually asked me in an interview the other day, somebody we actually hired, not just based on this question, but he said, "What is your proudest moment of working at thoughtbot?" And I was like, that's one of the best questions I've heard from a candidate. And I said, "Hmm, that's interesting." It's not anything development-related, but it's that I can now look back on this team and say this is the team that I have grown in my image and all these people apart from Nick, who was the person who put the advert of it at Bath Ruby. I've hired all these people, and so the buck stops with me really because if anybody isn't able to perform, then it's kind of my fault because these are the people that I want to grow into being the team and see be a successful product design team or product development team, which brings us to modern-day I guess. So yeah, that was a long origin story. That's pretty much my whole thoughtbot biography. And I apologize. STEPH: That was perfect. I thoroughly enjoyed hearing it. And yeah, that's an awesome question. What's your proudest moment, like, part of a team? That can yield so many insights. I love that question. And I love your answer as well in terms of this is the team. We've pulled through a hard time. And then we've built everybody to the point that they are now, which kind of leads in perfectly to my next question. So being the software development director, could you walk us through a little bit of like, that's one of those titles I feel like a lot of companies have, but they can be very different from company to company. Would you mind walking us through a bit of the day-to-day in the life of being a development director? ROB: Yeah, sure. It's one of those things where I think this is something that I'm not sure if it's unique to thoughtbot, but you end up taking on a lot of hats at thoughtbot. So I know you're a team lead. So you have to balance your responsibilities as an individual contributor, which is a term I don't like, but I haven't got a better way to say it yet, and your development team lead roles. And I have similar sort of responsibilities where I have to do my individual contributor work. I have to do my director work. I'm also on our DEI Council. 
So I have to add that work in too, and make sure it's balanced out. So the start of my day is very much about prioritizing things. I know you and Chris, a few episodes ago, had quite lengthy discussions about productivity systems and what tools Chris wants to use. And I'm a big fan of Things, and I've been using it for maybe ten years, if not more, so I've now got my system down to the point that I'm able to prioritize things in a way where I can pick up the right task at the right time. So a big part of my day-to-day is figuring out what is the most important thing to work on? So I have my client work, and then it's about supporting the team from that point. And the big part of my idea of what a manager is is that my job isn't to tell you what to do; my job is to find out what you want to do and direct you in a place where you can find the answer. Or I can give you some guidance about where to find the answer. And I feel like I'm doing a bad job as a manager if I have to act as a middle person. Because if somebody comes to me and says, "Oh, I want to do this thing," and I say, "Well, I'll talk to that person for you," and then come back, I have failed. And my job is to say, "Oh, you should talk to that person about this." And to some extent, it's about being lazy. I don't want to be doing too much stuff because I have other things to do. But I want to make sure that those people have the right frameworks and guidelines in place so that I can point them in the right direction. STEPH: I think the fancy term for that is just delegating. [laughs] ROB: Yes, thank you. [laughs] STEPH: But I like lazy. [laughter] I like that one as well. I love that framing of a manager where you're not telling someone what to do; your job is helping that person figure out what they want to do and then supporting them. I've been chatting with Chris recently and some others because I've been reading the book Resilient Management by Lara Hogan. And it's really helped me cement the difference between mentorship, coaching, and sponsorship. And I realized that I'm already falling a lot into the coaching and sponsorship because mentorship can be wonderful, but it is more directive of like, this is what I've done. And this is what has worked for me, and you should do this too. Versus coaching and sponsorship, which I think align far more perfectly with what you described as management, where it is my job to figure out what brings you joy, what brings you energy, and then how to help you progress to your next goals and your next steps in your career. ROB: Yeah, I think Lara Hogan is a great resource, like her blog posts and books. I haven't read Resilient Management. But I know that the team leads on my team have been on her training courses, and they say how great it is. And there's also a blog post of hers that's about managing in tough times. It has a much better title than that. But it's about how do we be good managers in such uncertain times when there are a lot of things going on around the world right now that we all have to deal with? And helping people deal with those situations. Because at the end of the day, work isn't the most important thing; the most important thing is living. And it's something I say to my team, especially when people feel like...it's something that I say to my team when they're not feeling well. The most important thing is that you get better. And thoughtbot is still going to be here.
The most important thing is how you live your life and how you look after yourself, and everything else is secondary. STEPH: Absolutely. Well, and everybody needs something different from work too. Some people may be in a state where they really need more stability and predictability from their work. And some people may be in a space where everything else outside of work is very stable and calm, and then they want work to bring the challenge and the volatility and the variety to life. So I remind myself very often that not everybody wants the same thing from work and to figure out what it is that someone wants from work. And then your seasons change. You may be in a season where you want stability, or you may be in a season of like, I'm ready to grow and push and take some risks. So helping someone identify which season of work they're in. ROB: Yeah, I 100% agree. What people can't see is me nodding vigorously on the other side of this call. It's very much about understanding because everybody is different. And that's what we want from a good team; it's understanding everybody's different approach to things. And so sometimes people want the distraction of work because they don't want the time off to think about other things. They want to be able to sit and concentrate on something. And it's understanding different people. STEPH: Yeah, that's a great point. I'm curious; you mentioned that as part of being development director, in addition to managing the team and being part of DEI, there's also your day-to-day client work. I think you've started a new client recently. Could you tell me more about that? ROB: Yeah, I'd recently been working for a client for two and a half years, which is a very long time to be working with one client at thoughtbot. And it came to the time where I was ready for a new challenge, and it was stable enough for me to move on. So I've been working for a company in the UK. They allow customers to buy and sell cars, not between customers the way companies like Auto Trader do, but between customers and dealers, and back and forth. And primarily, they worked with buying cars. And they've launched a product in the UK where people can sell their cars as well because they found that 70% of people who are buying cars also want to sell their cars. And from there, they're now looking to expand into Germany and Spain, so we are helping them to do that. And it's an interesting project, not necessarily from a technical point of view, though I might come back to that, but definitely from a cultural point of view. The product at the moment allows you to put in a license plate or a registration plate for a car. And there's then a service in the UK that will allow you to pull up the make and model and the service history of that car. But you can't do that in Germany because it's against the privacy laws to find something from registration plates. And so it's interesting, these different cultural aspects that you have to take into account when expanding into other countries that you aren't from and that you have less knowledge about. Because I'm also aware that credit cards aren't a big thing in Germany either. So you have to think about how they pay for things in different countries. And the previous company I was working for, they're based in the Middle East.
And so we had to take into account how we would do right to left design in a mobile app, which is really interesting from a western point of view that you get so used to swiping through an experience from left to right. But then it's not just the screen that's right to left. The journey moves from right to left. So you have to get used to the transitions of the screen going the other way and not thinking of that as going backwards. It's one of the best things about working in this region is that we get to deal with so many different cultures and how they expect to use applications. It's really satisfying. STEPH: That's fascinating. Yeah, I haven't gotten to work on a project like that that has those types of considerations. I think the most relatable experience I have is more working in healthcare because that's one of those areas that I'm certainly not proficient. I've become more proficient because of the type of projects that I've worked on. But I'm curious, for expanding into other regions and cultures, do those teams typically have an expert on their team that then helps guide the development process? Or, as you mentioned, the process of buying a car could be very different in some of the legal aspects that you're up against. Is there someone that you can turn to that's then helping mentor or be aware of that process? ROB: Yes, the current client they have a team based in Germany, people who are from Germany that are advising us on different cultural aspects or legislative things. They are doing a lot of data analysis for us because we need a new service that we can use for looking up car details. Because there is a service that you give different information to to get information about the car back from. So yeah, we do have that team there. But that's not always the case because every client is different. The company that we're working for in the Middle East didn't have a team. They had two developers who were helping us. But we have to figure things out just from their cultural background to ask them questions about things and allow them to advise us, but nobody who was really a specialist. But that's an interesting thing as well, not just the cultural aspects of the customers but the cultural aspects of the company that you work for. We definitely found that the company in the Middle East was more hierarchical. And so that's another challenge that you have to work with because we tend to work in quite a flat way where we tend to default as on thoughtbot projects, of not having a point person on a project. Everybody is there to answer the questions. But some teams or clients want that point person. And so, we adapt and change to allow for that to happen and work in that way. But it is interesting to work in different companies as well as working as an agency. STEPH: Yeah, you bring up a really good point of something that I don't reflect on very often, but it's something that I really appreciate about our thoughtbot culture is that we do try to strive for a very flat hierarchy. But also in working with clients, we purposely will avoid like, if there are two or more thoughtboters on a project, we don't want one person that is then the primary contact between the client and the thoughtbot team. The goal is that everybody shows up. Everybody is part of the process; everybody is part of meetings. And we do have an advisor for projects, but otherwise, we work very hard to make sure that there's not just one person that's then responsible for communication. 
We want everybody to have opportunities to be part of meetings, to lead meetings, to take on initiatives versus having that one person. That is something that I really appreciate that we do. ROB: Yeah. And it's more noticeable when you go to places where that isn't the norm, and you appreciate it more. And I think a big part of that is how much we are trusted. And we trust people to trust us, I guess. STEPH: Yeah. And I think it fits in nicely with circling back to the management conversation is that when people have access to those opportunities, that makes my job so much easier as a team lead where then there are more opportunities to sponsor someone or to coach someone as to how they can then be the person that then takes on a project or if they want to lead a particular meeting, or if they want to help a team introduce retrospectives into their process. So it gives more opportunities for me to then coach someone into expanding their skill set in those ways. ROB: Yeah, that's interesting to think about, allowing yourself to coach other people in that role. Because as we gain more experience and become senior developers, we naturally fall into that role of taking the lead on projects, even when we're not asked to. But then, when you gain other responsibilities in the management track, so you as a team lead and me as a team lead and a development director, it could be better for you to not take that role and allow somebody else to come into that role so you can coach them. That's been playing on my mind the last couple of days. Josh Clayton, who's the Managing Director for one of our teams in the Americas, raised it on our pull request in our handbook where we were talking about team leads having a dedicated day to concentrate on team lead things. It's one of those things where somebody says something, and it's like, oh yeah, that really clicks. Maybe that's why we have been having certain struggles on projects where we need to rearrange things and learn from that and so we can be better on projects in the future. So that's something that really resonated with me, and it's flying around in the back of my mind at the moment. STEPH: Yeah, that really resonates with me because while the predominant part of being a team lead at thoughtbot is having one-on-ones with folks, I find that when I have more time, a lot of the work also falls outside of that one-on-one where it's following up on conversations around hey, this person mentioned they're really interested in growing their skill. How can I help them? How can I help find opportunities? Or I know that they're currently stretching their skill set right now. If I have some extra time, then I can check in with them. I can pair with them. I can see how things are going. So I find that while the one-on-ones are the staple thing that happens every two weeks, there's a lot of other behind-the-scenes work that's going on as well to make sure that that person is growing and feeling really fulfilled by their work. ROB: I know we've spoken a lot about the product side and the client side of working on the new project that I'm working on. There are some interesting technical sides to it as well. The client has found that they have had some issues with Haskell and running on M1 Macs. And so, they've decided to take the leap and use GitHub Codespaces as their primary development environment, which has been interesting. I had heard about it but only in the background. I hadn't read anything about it or hadn't had any direct conversations. 
I just heard that there was a thing. So it's been quite interesting to play with that. It's interesting the way the client is using it as well because they're using a Dockerized environment effectively inside Docker by using Codespaces. So you start the Codespace, which very basically is a Docker instance somewhere on GitHub's infrastructure. It's built very much for Visual Studio Code, and so you can just directly attach your Visual Studio Code session to the Codespace and go from there, but I'm a Vim user. I've started to feel like a bit of an old guard or a curmudgeon recently where I've been like; maybe I need to use Visual Studio Code. Maybe I should just unlearn my Vim key bindings and learn the Visual Studio ones. And people say, "Oh, you could just use The Vim key bindings in VS Code." I'm like, that's cheating. I spent the time to learn the key bindings for Vim. I will take the time to learn the key bindings for Visual Studio Code and use it for the way it's intended. So it's been interesting to understand how Codespaces works, not necessarily in the way, it's intended. So you can still SSH into a Codespace session, but then you lose all the lovely setup stuff that you might have on your local machine. So I did spend half a day porting my dotfiles which are based off thoughtbot's dotfiles, into something that Codespaces can use and made it publicly available. So if you go to github.com/purinkle/codespace, you can see what I use to set up my Codespace environment. And once that's set up, it becomes a bit easier because then you have all the things that you're used to running locally. It is very much early days for how the client is using it. And so they're really open to saying like, okay, let's find out what's not working, and let's work and figure out how to get it up and running properly. So one of the things we do find is that Codespaces do timeout after a while. And then you might lose, like, even if I've created a tmux session, that tmux session disappears. And so I have to go in and create it again. I'm not sure what the timeouts are. I haven't had time to look into what those timeouts are yet. But that's definitely the main pain point at the moment of it being used as a development environment. It's been interesting. It's been kicking around in the back of my head like the difference between developing locally and deploying locally. And it's something that I wanted to talk to people at thoughtbot and outside of thoughtbot as well to understand that more. Because I don't think you need everything running to develop locally, but you might need it to deploy locally. It's interesting to me to understand how different companies work on their products from that point of view. STEPH: Yeah, I'm selfishly excited that you are using Codespaces for a client project because I have kept an eye on it, and I'm very intrigued by it. But I also haven't used it for a project. And it sounds really neat. I'm curious, have you found that it has helped them with onboarding or if you need to switch from working on one application to another? Have you found that it has helped them with some of those? I'm guessing that's the problem that they're optimizing to solve is how do we help people run everything quickly without having to set it up locally? ROB: It's an interesting question because I don't have the comparison of trying to set up the environment as it was before. It was smoother. 
The main thing with access tokens because once you can set up your SSH keys and your GitHub tokens, it's just a case of running a script and letting it run. So yes, from that point of view, I can imagine if I tried to set up their previous environment, that it'd be a lot more challenging because they were using Vagrant and running things that way, which I know from experience would not be fun. And I know that my Mac fans would just be spinning all the time. It would be like an aeroplane was trying to take off. So I'm thankful for that, that I don't have that experience anymore that my machine is going to slow down all the time. We've had on a previous client who had a Dockerized environment, but you have to have it all running on your machine. There are pros and cons to everything with these things. And it's like you said, what is the problem they're trying to solve with introducing this setup? STEPH: Yeah, I can't decide if this is a good thing or a bad thing. But I'm also intrigued by the idea that if a team is using Codespaces, then that means everybody else is using VS Code. And you can still customize it so you can still have your own preferences. But that does set a standard, so everybody is using the same editor. There's a lot of cross-collaboration in terms of if you do run into an issue, then you can help each other out. Versus when I join other teams, everybody's using their preferred editor, and then there you may have a day where someone's like, "Oh, I'm really stuck because my particular editor is suddenly having a problem and can't connect." And then you have less people that are able to help them if they're not using that same editor. And I can't decide if I like that or if I hate it [laughs] in terms of taking away people's ability to pick and choose their editor. But then the gains of everybody is using the same thing which is nice and would be really great for pairing too. ROB: Yeah, that's an interesting point. I was talking to...I have a management coach. He's a PHP developer, and I'm a Rails developer. And we were talking about the homogenization of things nowadays. And is that good, or is that bad to use with stuff like RuboCop that lints everything, so it's exactly the same? Does that stifle creativity? But then, at the same time, the thing I like about Codespaces is I think we're biased coming at it from the point of view of Rails developers. And if you look at how you can use Codespaces in the browser directly from GitHub, that's quite interesting because now you're lowering the barrier to entry to get started and saying you don't need to have an editor. You don't have to set up everything. You can just do it from your browser. A few years ago, I used to volunteer or coach at an organization called codebar. They help people who are less represented in the tech community get represented in the tech community. And we would see a lot of people coming for sessions using...I forgot what it's called. What was it called? Cloud 66 or something. There was some remote development environment that people would come and say, "Oh, I've been using this," because they didn't know how to set up the necessary infrastructure to just get a Rails server going or things like that or didn't know how to set up Sublime or Atom or editor of choice. And it's really interesting if you remove your bias of 15 years of professional software development and go okay, if I were starting today, what would the environment look like, and how would I get started? 
I'm lucky enough that I've grown up with the web and seen how web development has changed and been able to gain more knowledge as it's appeared. I don't envy anybody who has to come into the industry now and suddenly have to drink from this firehose of all these different frameworks, all these different technologies. Yeah, I started off by just right-clicking and viewing source on HTML files back in 1998 or something ridiculous like that. And CSS didn't even exist or wasn't used. And so it's a much different world than 24 years ago. STEPH: That is something that Chris and I have mentioned on previous episodes where people are coming into software development, and as much as we love Vim and it sounds like you love Vim, our advice is don't start with Vim. Don't start there. You've got so much to learn. Start with something like VS Code that's going to help you out. And you make such a great point in regards to this lowering the barrier to entry. Because I have been part of a number of classes where you have people coming in with Macs or with a Windows machine, and then you're trying to get everybody set up. You want them to use the same browser for testing. And we spend like a whole class just getting everybody on the same page and making sure their machines are working or then troubleshooting if something's not. But if they can just go to GitHub and then they can run things seamlessly there, that's a total game-changer in terms of how I would teach a class, and it would just be far easier. So I hadn't even considered the benefits that would have for teachers or just for onboarding teams as well. But yeah, specifically for leading a class, I think that is a huge benefit. GitHub did some pretty cool stuff around when they were launching that as well because I went back and watched some of their GitHub Universe sessions that they had where they were talking about Codespaces. And one of the things that they did that I really appreciated was how they went about launching Codespaces. So initially, it was how fast can this be? Or what's our proof of concept? And I think when they were building this, they found it took about 45 minutes if they wanted to spin up an application and then provide you a development environment. And they're like, okay, cool, like, we can do this, but it's 45 minutes, and that's not going to work. And so then their next iteration, they got it down to 25 minutes, and then they got it down to 5 minutes. And now they've got it to the point that it's instantaneous because they're building stuff in the background overnight. And so then that way, when you click on it, it's just all ready for you. But I loved that cycle, that process that they went through of can we even do this? And then let's see, slowly, incrementally, how fast can we get it? And then, to get feedback, instead of transitioning their own internal teams to it right away, they created this more public club. I think they called it The Computer Club, something like that. And they're like, hey, if you want to be part of Codespaces or try out this new feature that we have, delete all the source and the things that you need locally, and then just commit to using Codespaces. And then, if you are stuck or if you have trouble, then your job is to let us know so then we can iterate, and we can fix it. I really liked that approach that they took to launching this product and then getting feedback from everyone and then improving upon it. 
ROB: Yeah, that sounds like an Agile developer's dream where you just put something out there that's the bare bones, and you're given license to learn from that experience and how people are actually using that tool. That's something we've actually tried to do on the client project at the moment is adding all the...now that there's a different flow in Germany, there are different questions we need to ask. And so that could be quite a complex thing to put into place. So what we said is what we're going to do is just put in the different screens, and all you have is one option to click. So you click that option, you go to the next one, go to the next one, go to the next one. Then we have something that the customer can click on and play with and understand, and then we can iterate on top of that. But it also allows us to identify areas of risk because you can go; oh, where does this information come from? But now we need to get this from a third-party service. So that's the riskiest thing we've got to work on here, where this other thing is just a hard-coded list of three-door or five-door cars. And so that's an easier problem to solve. So allowing yourself to put something that could be quite complex like GitHub Codespaces and go okay, we're going to put something out there. It takes 45 minutes to run-up. But we're telling you it takes 45 minutes to run it. We're not happy with it, but we want to learn how you're using it so that we can then improve it but improve it in the right direction. Because it might be that we get it to 20 minutes to start up, but you need it in half a second. That's a ridiculous example. Or it might be that you need to be able to use RubyMine with it instead of VS Code, and that's where the market isn't. That's the thing that you can't learn in isolation that you have to put something out there for people to use and play with. STEPH: There's one other cool feature I want to highlight that I realized that they offer as well. So in the past, I've used a tool called ngrok, which then you can make your localhost public so other people can access. You can literally demo what you're working on locally, and someone else can access it. And I think that it's very cool. It's come in handy a number of times. And my understanding is that Codespaces has that feature where they can make your localhost accessible. So your work in progress you can then share with someone, and I love that. ROB: Oh, that's really interesting. I didn't know you could do that. I know you could forward ports from your local machine to that. But I didn't know you could share it externally. That'd be really cool. I can see how that can be really helpful in demos and pairing. And it makes sense because it's not running on your computer. It's running on some remote architecture somewhere. That's interesting. STEPH: Well, that's the dream I've been sold from what I've been reading about GitHub Codespaces. So if I'm telling lies, you let me know [laughs] as you're working further in it than I am. But yeah, that was one of the features that I read, and I was like, yeah, that's great because I love ngrok for that purpose. And it would be really cool if that's already built into Codespaces as well. ROB: ngrok is really interesting with things like trying to get third-party services to work. So from, the previous client, they wanted an Alexa Skill. And so, if you're trying to work with an Alexa Skill, you have to sign in from Amazon's architecture onto your local machine. You have to use ngrok as the tool there. 
So I wonder if that could potentially solve a problem where, if there are three developers trying to develop on this, you could point to one Codespace that you're all working on rather than... Because the problem we had was if me or Fritz or Rakesh was working on this, we'd have to go and then change the settings on the Amazon Alexa Skill to point to a different machine. Whereas I wonder if Codespaces allows you to have this entry point, you could point to like thoughtbot.codespace.github.com or something like that, that would then allow you to share that instance. That's something interesting that I think about now. I wonder if you could share Codespace instances amongst each other. I don't know. STEPH: Yeah, I'm intrigued too. That sounds like it'd be really helpful. So circling back just a bit to where we were talking about wearing different hats in terms of working on client work, and then also working on the team, and then also potentially some sales work as well, I'm curious, how do you balance that transition? How do you balance solving hard problems in a codebase and then also transition to solving hard problems in the management space? How do you make all of that fit cohesively in your day or your week? ROB: The main thing that somebody said to me recently is that you can only do so much in a day, and it's about the order that you approach those things. And just be content with the fact that you're not going to get everything done. But you have to make sure that you work on things in the right order and just take your time and then work through them. I read a really good book recently that was recommended to me by my coach called Time Off. And it's all about finding your rest ethic, which sounds a bit abstract and a bit weird. But all it is is understanding that you can't be working 100% all the time. It's not possible. As developers, sometimes we can forget that we're creative people, and creativity comes from a part of your brain that works subconsciously. So it's important for you to take breaks throughout the day and kind of go, okay. I use the Pomodoro Technique. So I have an app that runs, and every 25 minutes, I just take a little break. I don't use it in the way that it's supposed to be used. I just use it to give me a trigger to have a break every 25 minutes. And so in that time, I'll just step away from my computer. I'll walk to the kitchen, grab a glass of water. I usually have a magazine or a book next to my table. So I have a magazine here at the moment. I'll just read a page of that just to kind of rest my eyes, so they focus at a different level but also just to get my brain thinking about something else. And it seems counterproductive that like, oh, you're stepping out of what you were doing. But then I find like, oh, I suddenly have a little refresher to like, oh, I need to get back into what I was doing. I know where I've got to go. That thing that I was thinking about now makes a little bit more sense. And even if it's a bigger break, give yourself the license to go for a walk and just kind of clear your head. And a big thing about going for a walk is not to concentrate on completing the task of walking but to concentrate on the walk itself and taking in the things that are happening around you. And let your mind just kind of...you'll sometimes notice that oh, I can hear a bird. But that bird's been chirping for five minutes, and you didn't notice because your mind's kind of going.
And if you concentrate on, I just want to complete this walk, that's what I'm out here to do, then you lose that ability to let your mind reset. That's a big thing that I'm working on personally, to concentrate on the doing rather than the getting done. And it ties into the craft of being a software developer because if you concentrate on the actual writing of the code and the best practices that we all believe in, you end up with something better that you don't then have to revisit at a later time. Whereas if you just try and get something done, you're just going to end up having to come back to it or revisit it in some other way. I've actually got a blog post coming out soon about notifications on phones. I'm a big believer that your phone belongs to you and that if your work wants you to have work notifications on your phone, then they could buy you a phone just for that purpose. The only thing where I kind of draw the line is I have notifications for meetings on my phone because I can't think of another way to get those things to ping up at me. And I understand that there are jobs where you do need to have those sorts of notifications, especially things like where you're on call; it's a big thing. But when it comes to things where a manager wants to get a hold of you straight away, from a trust point of view, that's where I think things fall down. And you're questioning, like, okay, why does this person need to get hold of me at 7:00, 8:00, 9:00, 10:00 o'clock at night? And should I be available? We bill by the day at thoughtbot. And so when I find, not when I find but when I talk to people, and they say, "Oh, I was still working at 7:30, 8:00 o'clock," I will say, "Why? You're devaluing your own time at that point because we're not billing any extra for that time. So you're making your craft and your skill...you're cheapening it." And I want them to relish the skills and competencies that they have. That's a big thing for me. We're very lucky at thoughtbot that we can draw a boundary at the end of the day and go, okay, that's it. There's no expectation for me. It is much more difficult at product companies. But yeah, I think it's something that as an industry, and it's a bigger thing as a society, especially with younger people coming into the industry who have never worked in an office and may never work in an office, that idea of where is the cutoff? For so much of the pandemic, the people I would get concerned about the most are the people whose beds I could see behind them because I'm thinking to myself, you spend at least 16 hours a day in that same room. And that's going to become the norm for people. And if people don't have those rest periods and those breaks and aren't given the opportunities to do that by their managers, then it's not going to end well. And happy people and fulfilled people do the best jobs from a business point of view. But that's never the way I approach it, but that's what I say to people. STEPH: I think that's one of the biggest mistakes that I made early on in my career, and even now, I still have to coach myself through it. It's like you said, we are creative people, people in software and in general, not just developers; it's a creative craft. And I wouldn't step away to take breaks. I just thought if I pushed hard enough, I would figure it out, and then I could get done with my work because I was so focused on getting it done versus the doing, as you'd highlighted earlier.
I haven't really thought about it in that particular light of focusing on, this is the thing that I'm working on. And yes, I do want to get it done, but let's also focus on the doing portion of it. And so I wouldn't step away for walks. I wouldn't step away for breaks. And that is something that I have learned the hard way: when I actually gave myself that time to breathe, if I gave myself a moment to relax, then I would come back refreshed and ready to tackle whatever challenge was in front of me. And same for keeping a magazine that's near my desk; I have found that if I keep a book or something that I enjoy...because, at some point, my brain is going to look for some rest, like, it happens. That's when we flip open Twitter or Instagram or emails or something because our brain is looking for something easy and maybe a little bit of brain candy, something to give us a little hit. And I have found that if I keep something else more intentional by my desk, something that I want to read or that I'm enjoying, then when I am looking for something short that I can look at, I feel more relaxed and fulfilled from that versus if I go to Twitter, and then I see a bunch of stuff I don't like, and then I go back to work. [laughs] And it has the opposite effect of what I actually wanted to do with my downtime. I love the sound of this book. We'll be sure to include a link in the show notes because it sounds like a really good book to read. And I've also worked on improving the setup with my phone and notifications, where I have compartmentalized all the work-related apps into one folder, and then I keep it on the third screen of my phone. So if I want to see something that's work-related, it's very intentional: I have to scroll past all of the stuff that matters to me outside of work, get to that work section, and then click in that folder to see, okay, this is where I have Slack, and Gmail, and Basecamp, and all the other things that I might need for work. And I have found that has really helped me because I do still have the notifications on my phone, but at least putting it on its own screen further away from the home screen has been really helpful. ROB: Do you find that you still get distracted by that, though, when you're in the flow of doing something else? STEPH: I don't with my phone. I am a person who ignores my phone really well. I don't know if that's a good thing or a bad thing, [laughs] but it is a truth of who I am that I'm pretty good at ignoring my phone. ROB: That's a good skill to have. If there's any phone in the room and a notification goes off, my head swivels, and I pivot, and I'm like, oh, yeah, there's some dopamine hit over there that I can get from looking at somebody else's notification. STEPH: I have noticed that in the other people that I'm around. Yeah, it's that sound that just triggers people, like, oh, I've got to look. And even if you know it's not your phone, like you heard someone else's phone ding, it still makes you check your phone, even though there's probably a part of your brain that recognizes, like, that wasn't mine, but I'm still going to check anyway. And I have worked hard to fight that, where even if I hear my phone go off, I'm like, okay, cool, I'll get to it. I'll check it when I need to. And I'm that person that, whenever apps ask me, "Can we send you notifications?" I'm like, no, you may not send me notifications.
[laughs] Something else you said that I haven't thought about until just now is the idea that there are some people who have never worked in an office or may never work in an office because we are leaning into more remote jobs. And that is fascinating to me to think about that someone won't have had that experience. But you make such a good point that we need to start thinking about these boundaries now and how we manage our remote work and our home life because this is, going forward, going to be the new norm for a number of people. So how do we go ahead and start putting good practices in place for those future workers? ROB: One of the things, as we've hired people from a remote point of view who've only worked with thoughtbot remotely, is the idea of visibility. And I don't mean the visibility of I want to see when somebody's working but maybe the invisibility of people. Because you can't see when people are taking breaks, you assume that everybody is working all the time, and so then you don't take those breaks. And so this is something we saw with people who we hired in the first six months of being remote. And they were burning out because they didn't realize that other people were taking breaks. Because they didn't know about the cultural norms of how we worked at thoughtbot. But people who had worked in the studio would know that people would get up and have breaks. People would get up and go get a coffee from a coffee shop and then have a walk around. They didn't know that that was the culture because they bring the culture from other places with them. But then it's much harder to get people to understand your way of working and how we think that we should approach things when you are sat in isolation in a room with a screen. And that's something that we've had to say to people to break that down. And even things that we took for granted when we worked in a studio where somebody would get up and ask somebody if they could pair with them even if they weren't on the same project. Somebody might have more Elm knowledge or React Native knowledge, or Elixir knowledge. And you'd get up and say, "Hey, can I borrow some of your time just to go over this thing, to pair?" And everybody would say, "Yeah, yeah, I can find some time. If not now, we can do it later." And recently, we've had people saying, "Oh, is it okay if we pair across projects? Is it okay if we pair with other people?" It's like, "Yeah, pair." One of the big things we say is that we have this vast amount of knowledge across thoughtbot, across the world that we can tap into and that you can use. And that's just one example of how do you get those core things that you take for granted and help people understand them? Because you don't know what people don't know. And it's all about that implied knowledge. So that's something that we learned. And we try and say to people and instill in them about yeah, take breaks. You can pair with people. There are people who bring in culture from other places with them. But then, to go back to where you started, how do you start with people who have no culture with them or have the culture of coming from maybe from school, or university, or from a different industry? How do you help those people add to your culture but also learn from your culture at the same time? Big people problems. STEPH: Have you found any helpful strategies to normalize that take a break culture? 
ROB: One thing we tried, but it didn't last very long because people are lazy, was putting it in Slack, saying, "I'm going for a break." And you can do that, but it's so artificial. After a week or two, people just stopped doing it. It was through conversation. We have a regular retrospective as the Launchpad II team where we talk about what is working and what isn't working. And we have such a trusting environment where people will say things along the lines of, this isn't working for me, or I feel like I'm burning out. Then we will talk to each other about it and figure out where it comes from. And it's a good point to raise that I don't think we have explicitly addressed it. But it is something that we will address. I'm not going to say could address; we will address it. I will talk to our latest hire, Dorian, who I have a one-on-one with next week, and kind of talk to him about it. And we should maybe try and codify that in our handbook somewhere so everybody can learn from it, or at least start a strategy and a conversation. Because I don't think it is something that we do talk about. It's the problem of being siloed and being remote, and time zones as well. A lot of stuff that Launchpad I knows Launchpad II doesn't necessarily know because we only have maybe three hours where we overlap if people are based on the East Coast. I have meetings with Geronda, who's our DEI Program Manager, and she lives in Seattle. And so sometimes I'll talk to her at 5:00 o'clock, and it's 7:00 o'clock in the morning for her. And you have different energy levels. But yeah, so we spend time trying to figure out how we work together. STEPH: Yeah, I like that idea of highlighting somewhere that we take breaks as part of the expectations of your role. Like, this is an expectation of your role; you're going to take breaks. You're going to step away for lunch. You're going to stick to a certain set of hours in terms of having, like, an eight-hour workday with a healthy lunch break in there. I think that's a really good idea. On the Boost team, I have found that people have adopted the habit of not always but typically sharing, like, "Hey, I'm stepping away for a coffee break," or "I'm having lunch. Maybe like a late lunch, but I'm taking it," or "I am stepping away for a walk." You often see later in the afternoon that there are a number of people saying, "Hey, I'm going for a walk." And I feel that definitely helps me when I see it every day to reinforce, like, yes, I should do this too, because I already admitted I'm bad at this. So it helps reinforce it for me when I see other people saying that as well. But then I can see that it takes time to build that into a team's culture or to find easy ways to share that. So just putting it upfront, like in a role expectation, also feels like a really good place to highlight it and then to reinforce it as people set that example. ROB: One thing that Nick Charlton tried to introduce was a Strava group. There's a thoughtbot Strava group. So if people are members of it, you can see that they've been walking and things like that. It was quite an interesting way to automate it. I think it fell off a cliff. But it was something that we did try to answer: how can we make the visibility of this a little bit easier? But yeah, the best thing I've seen is, like you say, having that notification in Slack or somewhere where you can see that other people are stepping away from their keyboards.
STEPH: Well, as you mentioned, solving people problems is totally easy, you know. It's a totally trivial task, although I'm sure we could spend too many hours talking about it. All right, so I do have one more very important question for you, Rob. And this goes back to a debate that Chris and I are having, and I'd love to get you to weigh in on it. So there are Pop-Tarts, these things called Pop-Tarts in the world. And I don't know if you're a fan, but if you were given the option to eat a Pop-Tart with frosting or a Pop-Tart without frosting, which one do you think you would choose? ROB: That's an interesting question. Is there a specific flavor? Because I think the Strawberry Pop-Tart I would have with frosting, but maybe the chocolate one I'd have without. I know there are all sorts of exotic flavors of Pop-Tarts. But I think I would edge towards with frosting as a default. That's my undiplomatic answer. STEPH: I like that nuanced answer. I also like how you refer to the flavors as exotic. I think that was very kind of you [laughs] versus other, like, melon-crushed or wild flavors that they have. Awesome. All right. Well, I think that's a perfect note for us to wrap up. Rob, thank you so much for coming on the show and for bringing up all of these wonderful ideas and topics and sharing your experience with Codespaces. For folks that are interested in following your work or interested in getting in touch with you, where's the best place for them to do that? ROB: Yeah, thank you so much for having me. It's been fantastic to have a chat. If people do want to find me, the best place would be on Twitter. So my handle on Twitter is @purinkle, which I understand may be hard for people to catch via a podcast, but we'll put a link in the show notes so people can find me more easily. And that's probably also a good time to say that I am actually trying to find a development team lead to join our Launchpad II team. So we are looking for somebody who lives in Europe, the Middle East, or Africa to join our team as a developer and manager of two to three people. There's more information on the thoughtbot website, and I do tweet about it very, very often. So feel free to reach out to me if that's of any interest to you. STEPH: Awesome. We'll be sure to include a link to that in the show notes as well. On that note, shall we wrap up? ROB: Yeah, let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.