Transcript: CHRIS NEWBOLD: Good afternoon, well-being friends. Welcome to the Path To Well-Being In Law, an initiative of the Institute for Well-Being in Law. I'm your co-host, Chris Newbold, executive vice president of ALPS Malpractice Insurance. Most of our listeners know why we're here. Our goal is to introduce you to thought leaders doing meaningful work in the well-being space within the legal profession. And in the process, we're working to build and nurture a national network of well-being advocates intent on creating a culture shift within the profession. Let me be the first to introduce my co-host, Bree Buchanan. Bree, how are you? BREE BUCHANAN: I'm doing great, Chris, thank you. I am so excited, you know, about this episode because we have just increased our staff at IWIL. I'll let you finish, but I'm just excited. CHRIS: Well, I was going to say that there are a couple of notable things about this, right? Bree, you and I have been at this for well beyond five years now, but a couple of things I think are really unique about this particular podcast: first of all, this is our 25th podcast. I'm totally excited about just the incredible people that we have met on this journey. It's a reflection point, so to speak. I just think it's been a great ride for us as we've introduced people from around the country and welcomed new listeners to the podcast. BREE: Absolutely, absolutely. Yeah, it's been a lot of fun. CHRIS: It has. And then I think the big point, and maybe a little bit of historical perspective is good to share with the listeners today, is that the Institute for Well-Being in Law started just over 18 months ago. Really the intent, as a natural outgrowth of the National Task Force on Well-Being in Law, was that we wanted to look toward a greater level of sustainability for the movement. Bree and I and many other leaders in the movement got together, and we ultimately decided that the creation of the institute as a national think tank to be able to work and lead efforts on a national basis was the move. CHRIS: A lot of that was with the intent of being able to hire a full-time professional staff that could work on this issue, not just for the short term but for the long term. Again, without further ado, we are super excited about today's guest, our friend Jennifer DiSanza, who is the first executive director of the Institute for Well-Being in Law. I know that we are really excited to introduce her to our listeners, talk about the vision, talk about where the organization's going, talk about her own personal journey as it relates to well-being. CHRIS: Bree, why don't I kick it to you for an introduction of somebody who I think will be a pivotal leader and spokesperson. I know she's thoughtful. Again, we're just super excited to have Jennifer on board. BREE: I'm going to let Jennifer talk about her background, but by way of introduction I'm going to talk about how we got her to us. Like you said, there was this whole plan of how we were forming IWIL and then would be able to fundraise and then be able to hire staff, and Jennifer's the first one of that. We went out and did a national search, really cast the net wide and far. We had over 80 applications to the position. It took us a good number of months to go through all of those, many interviews. Ultimately, I'd say it was at least a six-month search process, and we found our Jennifer DiSanza. Jennifer, we're finally going to let you talk now. 
JENNIFER DISANZA: I have to say, after that introduction, I feel like an athlete, like I should have had play-on music or I should have some theme music, because that was quite the introduction. Thank you both. BREE: It was really a buildup. What I was thinking is that in the old radio shows, they had the button you could hit for the applause. All that too. JENNIFER: I heard it all in my head, Bree, so it was good. But no, I appreciate both of you so much. It was a long process, but one of the things that attracted me to this, and I've told this story, so those people who know me who are listening know this story, is that I was really looking for an opportunity to be entrepreneurial. But I didn't necessarily want to go out on my own. For those people who can work for themselves, that's great. When this position was posted, I had been following IWIL because my background is in legal education and nonprofit work. I'd been following the organization and just the wonderful things that I knew about it. JENNIFER: I had friends on the advisory board involved in different ways. I really believed in the mission. I have never felt so strongly about something as I did when I saw IWIL was hiring an executive director. What brought it home for me is that so many people sent it to me because they knew what I wanted to do. It was a confluence of events, I feel. I am so grateful to the search committee, to the board, because I really feel like I'm doing my life's work here at IWIL. BREE: Wonderful. Wonderful. Well, Jennifer, we're going to start you off the way we have started off all of our guests, which is to ask you: what are the experiences in your life that drive your passion for well-being in the legal profession? Clearly you have a passion. Tell us about that. JENNIFER: Again, it really comes from my background and experiences. A lot of times you go through your career and you're doing things that you're well-positioned for, that you're well-skilled for, but not necessarily something that drives you past just an everyday job. From a personal standpoint, it really came to me when I went to law school oh so many years ago. Actually, not to date myself, but this is our 20th anniversary of graduating. It's been 20 years since I was in law school. But from a personal standpoint, even before that, I struggled with depression and anxiety throughout my life, but law school was the point where it was at its most difficult. JENNIFER: I faced my most difficult challenges. I chose to go to school part-time while working full-time and getting my master's degree. There were a lot of different layers to that, but I really didn't have the resources. I actually didn't even have the language at that time. I was very much of the standpoint, "I got to get through it, I got to get through it," without really thinking about what toll it was taking on me mentally and on my health just in general. Well, I didn't realize the context. I was a first-gen student. I had never met anyone that went to law school. I really thought it was going to be like graduate school. JENNIFER: I'd go a couple nights a week. I'd do my homework on the weekends. After being in law school education for almost 20 years, I realize that's an impossible thought. I incorrectly assumed that. It's no secret to those people who know me that I really did not enjoy my law school experience. CHRIS: It's so interesting when you go back and you talk to folks who have gone through that experience. Some loved it; for some it was a terrible experience. 
That forms a lot of how you think about coming to the law and making some decisions about, "Boy, did I make the right decision here?" Jennifer, I think one of the things that's interesting, and that I know we on the hiring committee thought was really pertinent, was your career in legal education. Can you tell us a little bit more about your professional journey after graduating from law school that you think has prepared you for taking the leadership baton here and running with it? JENNIFER: Sure. I think most people are like, "Well, if you hated law school so much or didn't enjoy the experience, why did you stay there for an additional 20 years?" But the reality is that I was lucky enough to have someone at my law school who I could go talk to, and it made my experience better, and I realized that I could continue doing that. I could be that person for other people. I had been in human resources, in manufacturing, and well-being at that time, the late '90s and early 2000s, meant I had safety as part of my human resources responsibility, and it was really about physical safety. There was no holistic approach to employee well-being. JENNIFER: But I took what I learned as an HR manager to law school student affairs. I worked at three different ABA-approved schools. You get students. They come in as you find them, basically. Some of them have preexisting issues, whether it's mental health or substance use, whatever it is. But I knew once they got to law school, whatever it was either started in law school or became much more exacerbated while they were in law school. Really, over the years of working in legal education, I tried to focus on ways to make the experience better. JENNIFER: The majority of my time during those almost 20 years was counseling students or developing programs to support students in finding better ways to handle their stress or their anxiety, providing accommodations for students. But I also think one of the things that's sort of a catch-22 in the legal education world is that we're preparing people to have resilience. I was just having this conversation with a law firm well-being person last week. Resilience in itself says there's something you have to be resilient about. There's going to be something difficult in this process. I'm not saying law is easy or should be easy, but we're creating this expectation that they're already going to find difficulty in it. JENNIFER: Well, we had to, in the law school environment, create these programs to deal with life after law school. The reason I love IWIL is we want to fix those issues. We want to look at it and say, "What's causing the burnout? What's causing the turnover, so we can make it better?" Even in my last position for the last three years, where I was working on financial wellness issues with law students, it was also about better understanding the financial pressure students came in with, and why they went into maybe big law or a different world, different jobs. JENNIFER: They went to law school and they were willing to sacrifice their health because they wanted to make a lot of money or because they were hoping for public service loan forgiveness. It really is this confluence of events, like I said, that brings me here now. BREE: Jennifer, I think one of the things that was so attractive for me with you, among many things, was this law school background, because we really are there for the groups of law students, judges, and lawyers. 
And because law students, of course, it's corny, but they are the future of the profession, and we know that the youngest lawyers suffer the greatest level of behavioral health problems, it just seemed like a really great way for us to ensure that we're focusing on this critical group. Listen, I've got another question for you just about how you're doing and what's going on now. BREE: I'll date this episode. We're in August. You've been with us for two months now. You've been through a strategic planning session that we had in Chicago with the board a couple of weeks ago. Talk to us now about your priorities for IWIL over the next couple of years, which, to be fair, are not just your priorities, they're the board's too, but talk a little bit about that. JENNIFER: The strategic planning session was really eye-opening for me, not because there was a lot of new information, but just having this group of well-being advocates in the room committed to improving the profession. It was inspiring, actually. One of the things, probably the most important thing, that we focused on during that strategic planning is really where we can have the most impact. It's nothing new, but we helped articulate it. We're already doing education and awareness. JENNIFER: We have wonderful programming through our biennial conference and our Well-Being Week in Law. We are getting started with a research agenda that's very exciting. And our policy work. We have wonderful initiatives coming up in our policy work and our technical assistance. We work with state task forces, getting them up and running, supporting them, looking at opportunities to comment on policy change, that's really one of my priorities, and making sure we are involved in every conversation that impacts well-being in the legal profession. We need to be the thought leaders in this. I want to see those ongoing research projects. JENNIFER: I want to see those comments. I want to see us out in front of everything and being the thought leader in that. I also want to be the gathering space for well-being advocates. I want them to come to us for those questions on how we can support them. CHRIS: That was a great day for us, right? Because I think, from the listener's perspective, a lot of us... Obviously IWIL was formed during the pandemic, right? While we probably have spent hundreds of hours together on Zoom calls, the ability to be physically together and meet people that you feel like you know, but you never know people until you're physically with them, right? It just was a fantastic experience to bond with people in a physical setting. Again, Jennifer, I'll just come back to the notion of, I think it's fair to say that going into that retreat, your vision of where you thought the movement was heading was probably a little bit blurry. CHRIS: Coming out of that session, do you feel better about what that outlook looks like relative to where IWIL and other constituencies will be able to put their time, talent, bandwidth, and resources to advance the movement and to advance the culture shift in a more accelerated way? JENNIFER: Absolutely. As I have said over and over again, those first couple months, I really felt like I was drinking from a fire hose. And that's typical of any new job. You're getting up to speed and there are so many things. But I really feel good about where we landed, because one of the most common things I hear is, "What are you doing, or what are you going to support?" 
They want deliverables and they want action items. I feel like defining those pillars as we did and coming up with action items is something that is important. JENNIFER: It also helps to hold us accountable in what we do, and that is really important. We have sustaining donors that we need to be accountable to. We have the general public. We have our volunteers. We need to be held accountable, and I feel like we can do that. CHRIS: Jennifer, one of the things that I think is just really interesting about your role is that, in some respects, our "business model" is premised on the notion of effective volunteer management. Obviously, I mean, one of the things that I think has been one of the great accomplishments of IWIL thus far in its kind of short history has been the manner in which we've offered an on-ramp to people interested in this issue to become more involved. CHRIS: Whether it's through a committee structure, whether it's through service on a state task force and then connecting with IWIL through that, whether it's through participation in the annual conference or Well-Being Week in Law, we have created an opportunity for people to come together. I just would be curious on your opinion as to how it has been for you to meet them, that volunteer base, and how important that group obviously is to what we're trying to do relative to our mission. JENNIFER: Well, we do not exist without our volunteers. I mean, it's as plain and simple as that. I am one person, right? I am one paid staff member. But my listening tour in these first few months has been the best part of this job, because I've become connected to so many of these well-being advocates that are out there. Not only do they have their primary professional careers, but they have committed their time and talent to moving this shift forward. I'm just amazed by all the time and thoughtful comments and the way they have embraced me, I mean, it's just been phenomenal. JENNIFER: I'm so grateful to them. I would also like to thank those of you, those volunteers out there, who have been very transparent with me in saying they love being a part of IWIL, but they need more focus, which is one of the reasons why we needed a strategic plan. It really helped inform that. BREE: I think one of the things that seems so unique, you look at other think tanks, a couple of people had an idea that they wanted to dig into and they formed a think tank and went forward. We did this, but we also opened up the doors to bring everybody we could along. Just so people know, between the state task forces that we work with regularly and the committees, we have over 200 volunteers that are working with IWIL monthly. It's a very large, very active volunteer base. BREE: Jennifer, we're going to go ahead and take a break at this point to hear from one of our sponsors, and then we'll be back and continue our podcast. We'll talk a little bit about DEI, diversity, equity, inclusion, and belonging with Jennifer and how that fits into the whole well-being puzzle. — Advertisement: You expect most things to be easily available online. So why should your malpractice insurance be any different? Your job as an attorney is already hard enough. You deserve an application that's easy. With ALPS, you can apply, view rates, and accept your policy 100% online and all in about 20 minutes. Get back to your practice faster and add valuable time back to your day. Want to talk to a real person? Call, chat, email. ALPS is here for you. 
— BREE: Welcome back, everybody, and we have the most special guest today, our new executive director, Jennifer DiSanza. Jennifer, tell us about... The first thing that the IWIL board did was pass a resolution, a statement, a policy around diversity, equity, and inclusion and how imperative it is to be looking at those issues contemporaneously with the work that we're doing on well-being, because you can't really do one without the other. BREE: Could you talk a little bit about your views on that? I know that you've heard a lot of discussion about this at the board level. What are you thinking about the future of where we meld, raise awareness, et cetera, these two things of DEI and belonging, and well-being? JENNIFER: I think, most importantly, this has been something that was articulated to me very early in the interview process. I have seen it play out throughout my time here. It's great that we're including diverse points of view and supporting them, but you hit the nail on the head when you talked about belonging, right? There's enough room at the table for everyone. We need to make sure that we are not only including people, but that we have a place where there's psychological safety. We have a place where people feel comfortable and they feel belonging, because we know that's key to well-being, right? We know that belonging is important. JENNIFER: That is the premise of everything we do. We know that DEI and belonging has come to the forefront lately, but there was definitely a struggle to get there. But we have a unique opportunity as we build this movement and create it to really make it a foundational premise of every single thing that IWIL does, that we have an eye to ensuring this inclusivity and this belonging. Because without it, we're not serving all our stakeholders, we're not serving the profession, and we're not holding to the policy that we stated we would follow. We have to live it. BREE: Right. Well said. So well said. I accept that I am a privileged white woman, a cisgender lawyer, and I have to be continuously vigilant about these issues. It doesn't just happen without really paying consistent, close attention to it. I'm just thrilled that you are here to help us in that endeavor. I have no doubt that you will keep us on that path. JENNIFER: Thank you. I am excited about the opportunity, but I also am glad that we have such a wide variety of volunteers who can keep us accountable on this point too. CHRIS: The reality is, if you've met one lawyer, you've met one lawyer, right? We all come from perspectives that are unique, different, all across the spectrum. Again, this notion of how people struggle for inclusivity and belonging in our profession is something that just has to be at the forefront of everything that we do. I was proud, as part of our strategic planning process, that we continue to, again, ensure that we're looking through the right lens in our discussions. We always are striving to be a little bit better than we were previously, because even the most well-intentioned folks can sometimes have a few blind spots here and there, right? JENNIFER: Absolutely. I agree. CHRIS: Jennifer, one of your first achievements was a recently announced partnership with Thomson Reuters. Can you tell our listeners about what that entailed and how that came about? JENNIFER: Sure. 
This was actually very exciting for me, because I was pleasantly surprised when, within the first few weeks of my starting with IWIL, I was able to connect with Thomson Reuters. And more importantly, I was able to reconnect with somebody who I went to law school with. Bree had already established the relationship, but I was able to connect with Ina Camelo, who is leading the space in their global large law firm area. She and I went to law school together. She was a year or two behind me, but it was really nice to have this conversation with her about all the wonderful things that Thomson Reuters and IWIL can do together. JENNIFER: This is different from a traditional sponsorship. They have unique areas that we can leverage, whether it's their research, whether it's their Practical Law area, even marketing and technology. I believe that this partnership might be an example moving forward of some of the things that IWIL can do. I feel like the sky's the limit, and it's just harnessing all of that and figuring it out. We've been having continuing meetings with them about some of the work that we can do together. It's very exciting. BREE: Absolutely. Jennifer, what else interests you, excites you, and, I'll say, worries you? Of course, as an executive director, you do a lot of worrying about the future of the well-being in law movement. JENNIFER: Well, obviously being a startup has its pros and cons. As I talked about earlier with diversity, equity, inclusion, and belonging, we have the opportunity to build something from the ground up and think about all the pieces. We have the opportunity to be the preeminent think tank on well-being in the legal profession. We also have to expect there are going to be growing pains. As I talked about drinking out of a fire hose and trying to figure out where we're going to focus our energies, we only have that staff of one. JENNIFER: We do have dedicated board members, advisory board members, and volunteers, but we need to make sure IWIL is sustainable. It's an ongoing process to make sure we're thoughtful as we grow and in how we fund initiatives, to make sure that we're here for the long term. It's very exciting from a startup perspective, but we also have to be thoughtful about where we put our energies and time. CHRIS: Jennifer, I think one of the things that's always interesting about the roles that we have as leaders is obviously working to leave the profession a little bit better than we found it. As we think about your tenure and our mission, if we were to look forward a decade, and if we were to do a good job around changing attitudes, hearts, and minds, how will the legal profession be different? JENNIFER: In my dream scenario, I want legal employers to set the standard on employee well-being. I want to see organizations highlighted for their employee-first approach to the work environment. I really would love to see the profession be able to measure up against some of the other professions that are already doing a lot of work in well-being. I'd love to see those shifts. We're seeing these issues debated right now that were accelerated by the pandemic, remote work, vacation. But the truth is, the issues were coming regardless. I want to make sure the legal workforce is able to be agile as these things change and as generations change as they come up into the workforce. JENNIFER: I want the legal profession to be able to weather any future crises like a pandemic because their employees feel psychological safety. 
I also want law schools to embed well-being from day one. I don't want it to be an afterthought or something you try to fit in where you can. This is not a function that comes easily, because there are a lot of rules and regulations, but it needs to be inextricably tied to the curriculum and culture. Because, as we said earlier, law schools are preparing people for the practice of law, but we want that to be a holistic approach. JENNIFER: They're charged with preparing students for practice, but that includes not only doing the job of being a lawyer; it also involves informing that professional identity and understanding the culture of the legal profession. Wouldn't it be wonderful if that base, that foundational culture of the legal profession, now had well-being at the forefront? BREE: Absolutely. CHRIS: For sure. Jennifer, as we look to wrap up here, again, I think one of the things that's interesting as we think about the future is, how do we know whether we've made progress or not? Do you have any early inclinations? In the business world, they talk about key performance indicators. Do you have any early sense, as we talk about this phrase of engineering a culture shift, of how we might want to be thinking about the measurement of progress? JENNIFER: I think there are some standard measurements of progress: retention, burnout. I know there are law firms out there looking at their employees, and there are some larger-scale surveys, but we talk about different groups of lawyers leaving the profession or changing or moving areas of practice because of the type of work they're doing. If we see some of those things change, I think we'll be making progress. JENNIFER: If we see the path to partnership change in a way that allows more flexibility, if we see more alternate work environments, if we see some of those things become a standard, because there are firms that might be able to do it and employers that might have to do it, but when it becomes the standard, I think we will have made a difference. CHRIS: Awesome. Jennifer, again, on behalf of everyone who I know has labored on this particular issue and set us up, I mean, we're so excited that you're joining us as a leader in this journey. Our best days continue to be ahead of us. We know that there are some things going on both in society and generationally that might give us a little bit of tailwind for some of that acceleration of activity. I think one of the most important things about this podcast is: how can people reach you? Because you are now, in some respects, the face and the day-to-day operational execution of some of the mission. I would love it if you would let the listeners know how they can get a hold of you. JENNIFER: As I often say, I have a virtual open door. I've been taking meetings regularly, but you can reach me at my email address, jdisanza@lawyerwellbeing.net. Feel free to reach out. I have plenty of availability on my schedule if you just want to chat with me and talk about your thoughts about the Lawyer Well-Being movement or how you'd like to contribute to the Lawyer Well-Being movement. I look forward to talking to many more people. CHRIS: Again, Jennifer, thank you so much for joining us. I have a hunch that you will be on the podcast again at some point down the road. In fact, you could even probably be a guest host on the podcast in the event that Bree or I have to take a little bit of a leave or a vacation. 
Again, I know that for us, the labor on this issue is something that we've made part of our professional opportunity to give back. It's certainly refreshing to be able to have someone of your talent join our team. I know that you've been passionate about this issue from the start, but now you get to work on it day-to-day, and that's awesome for us and it's awesome for where this movement is ultimately going. BREE: Absolutely. Jennifer, we're so glad to have you. And, on a personal note, I love working with you. Delighted you're on board. JENNIFER: Thank you both so much. CHRIS: Well, again, thanks everyone for listening in. We'll be back probably within the next couple of weeks, two to three weeks, as we look forward into the fall. It's going to be a busy fall for both IWIL and well-being activities. We will see you down the road. Thanks for tuning in.
Chris Pennington is the Chief Customer Officer at SugarCRM. Creating the best experience for customers in their buying journey doesn't only fall on the one person facing them but on all the tools, strategies, and support behind the scenes. Chris talks about the roles and responsibilities of a Chief Customer Officer and how they measure customer experience. He shares his insight into the elements and overall process of building trust. By prioritizing trust, you're less likely to face customer churn and you also get proper feedback that enables you to deliver even better customer experiences. He also talks about how to use CRM most effectively to measure and improve the buyer's journey.

HIGHLIGHTS
- What is a Chief Customer Officer responsible for
- Measuring the customer experience in their buying journey
- The critical elements of building trust
- Avoid leaving yourself vulnerable to customer churn
- The value of technology that assists in delivering superior customer experiences

QUOTES
Using data and experience or intuition together - Chris: "It's very important to make sure you're applying some thought process to information that's being surfaced through data. Even though it may be dismissed, having an understanding of what the gut is saying and interpreting it is still a very valuable thing."
The importance of empathy in building trust - Chris: "It's important not to forget the human factor. We're on a customer relationship management basis. The relationship part is super important and empathy goes a long way to drive that. You need to be able to respond in a time frame that is appropriate. You need to demonstrate empathy, you can't be in a business of relationships without deeply understanding that."
Filling the gap between knowing something and understanding it - Andy: "We don't train sellers or support people in this idea of cognitive empathy which is really the form that's most useful to our customers. Because we go beyond just empathizing with how they feel but actually understand why they feel the way they do and that, then, gives us information to take action to solve the issues that have got them in pain."

Connect with Chris in the links below:
LinkedIn: https://www.linkedin.com/in/chrispennington88/
Website: https://www.sugarcrm.com/
Online Community: https://sugarclub.sugarcrm.com/

Sponsored by:
Revenue.io | Unlock exponential growth with an AI-powered RevOps platform | Revenue.io
Scratchpad | The fastest way to update Salesforce, take sales notes, and stay on top of to-dos | Scratchpad.com
Blueboard | World's leading experiential rewards & recognition platform | Blueboard.com

Explore the Revenue.io Podcast Universe:
Sales Enablement Podcast
RevOps Podcast
Selling with Purpose Podcast
Chris: What do you think some of the fundamental changes of IAM are from on-prem to cloud?
Chris: What are some of the key tradeoffs and considerations for using IDaaS offerings?
Nikki: There are a lot of solutions out there that discuss zero trust as a product or a service that can be leveraged to 'bake in' zero trust into an environment. But I'm curious on your perspective - do you think we need additional tools to configure zero trust principles, or leverage the technology at hand to implement zero trust?
Nikki: There's this move towards passwordless solutions - I can see that being a big boost to zero trust architectures, but I think we're still missing the need for trusted identities, whether it's passwords, pins, or tokens. How do you feel about the passwordless movement and do you think more products will move in that direction?
Chris: You've been a part of the FICAM group and efforts in the CIO Council. Can you tell us a bit about that and where it is headed?
Chris: It is said Identity is the new perimeter in the age of Zero Trust, why do you think this is and how can organizations address it?
Nikki: There was an interesting research publication I read, titled "Beyond zero trust: Trust is a vulnerability" by M. Campbell in the IEEE Computer journal. I like the idea of considering zero trust principles, like least privilege, or limited permissions, as potential vulnerabilities instead of security controls. Do you think the language is important when discussing vulnerabilities versus security controls?
Chris: What role do you think NPE's play in the modern threat landscape?
Chris: If people want to learn more about the Federal FICAM/ZT Strategies, where do you recommend they begin?
What you'll learn in this episode:
- How your location affects SEO, and why firms in major metros need to market differently than rural or suburban firms
- How traditional advertising and brand building can complement SEO
- What end-to-end SEO is, and why Chris' company does nothing but SEO
- How long you can expect to work with an SEO firm before seeing results
- Why it's better to not do SEO at all than do it halfheartedly

About Chris Dreyer
Chris Dreyer is the CEO and Founder of Rankings.io, an SEO agency that helps elite personal injury law firms land serious injury and auto accident cases through Google's organic search results. His company has the distinction of making the Inc. 5000 list four years in a row. Chris's journey in legal marketing has been a saga, to say the least. A world-ranked collectible card game player in his youth, Chris began his “grown up” career with a History Education degree and landed a job out of college as a detention room supervisor. The surplus of free time in that job allowed him to develop a side hustle in affiliate marketing, where (at his apex) he managed over 100 affiliate sites simultaneously, allowing him to turn his side gig into a full-time one. When his time in affiliate marketing came to an end, he segued into SEO for attorneys, while also having time to become a top-ranked online poker player. Today, Chris is the CEO and founder of Rankings.io, an SEO agency specializing in elite personal injury law firms and 4x consecutive member of the Inc. 5000. In addition to owning and operating Rankings, Chris is a real estate investor and podcast host, as well as a member of the Forbes Agency Council, the Rolling Stone Culture Council, Business Journals Leadership Trust, Fast Company Executive Board, and Newsweek Expert Forum. Chris's first book, Niching Up: The Narrower the Market, the Bigger the Prize, is slated for release in late 2022.

Additional Resources
Chris Dreyer LinkedIn
Rankings.io Twitter
Rankings.io Facebook
Rankings.io Instagram

Transcript:
SEO is a complicated beast. If you want to conquer it, you have to go in ready to swing, according to Chris Dreyer. As CEO of Rankings.io, Chris specializes in working with personal injury lawyers and law firms to get them on the first page of Google in competitive markets. He joined the Law Firm Marketing Catalyst Podcast to talk about how the “proximity factor” affects Google rankings; why your content is the first area to target if you want to improve your rankings; and how SEO, digital marketing and traditional advertising all work together to build your brand. Read the episode transcript here. Sharon: Welcome to The Law Firm Marketing Catalyst Podcast. Today, my guest is Chris Dreyer, CEO of Rankings.io. His firm specializes in working with elite personal injury firms, helping them to generate auto accident and other cases involving serious personal injury. He does this through Google's organic keyword search rankings which, to me, is quite a challenge. This is a very competitive market, and it's one that requires a very healthy budget if you're going to be successful. Today, Chris is going to tell us about his journey and some of what he's learned along the way. Chris, welcome to the program. Chris: Sharon, thanks so much for having me. Sharon: Great to have you. Tell us about your career path. You weren't five years old saying this is what you wanted to do. Chris: I've always been an entrepreneur. I saw my uncle. My uncle's a very successful business CEO for many organizations. 
He's had a really interesting career path. I told my parents before I went to college that no matter what I got a degree in, I was going to start and own my business at one point, and they were on the same page. I ended up getting a history education degree. I was a teacher, and I was working in a detention room when I typed in “how to make money online,” probably the worst query you could possibly type in. But I found a basic course that taught me the fundamentals of digital marketing and I pursued that. By the end of my second year teaching, I was making about four times as much from that as I got from teaching. So, I went all in and did some affiliate marketing. I had some ups and downs with that. Then I went and worked for another agency and rose to their lead consultant. Then I had an epiphany and thought, "I think I can do this myself. I think I can do it better," and that's what I did. That's when I started. At the time, it was attorney rankings. Sharon: Wow! Had you played around with attorney rankings before, when you were a teacher and just typing away? Chris: When I worked for this digital agency that's no longer in business, they were a generalist agency, but they worked with many law firms and attorneys. I was their lead account manager. I just enjoyed working with them. I enjoyed the competition and the satisfaction I would get from ranking a site in a more competitive vertical. That's how I chose legal. I wanted to look for something that had a longstanding business. I didn't want to jump into something fast or tech-related that could be changing all the time; I wanted something with a little bit more longevity. Sharon: Did you ever want to be a lawyer yourself? Chris: I ask myself that all the time. I think about it now, mainly because of all the relationships I have, how easy it could be for a referral practice. We have our own agency and I know how to generate leads now. So, I ask myself that a lot. That's a 2½ to 3-year commitment. You never know; I may end up getting my degree. Sharon: There are a lot of history majors who went into law and then probably decided they wanted to do something else, so that's a great combination you have. It's Rankings.io. What's the .io? Chris: There are these new top-level domain extensions. There are .org, .net, .com. Now you see stuff like .lawyer or .red. There are all kinds of different categories of those domains. Tech companies frequently use .io, standing for “input” and “output.” How I look at it, or how I make the justification for it, is that if you invest in us, you get cases—input/output. Sharon: Can you make up your own top level or is there a list somewhere? Chris: There's a big list. GoDaddy and NameSheet.com have many of them. In legal specifically, there's .law, there's .attorney, there's .lawyer, I believe even .legal. Most industries have their own top-level domain extension now. Sharon: I've seen .io, but I never knew what it stood for. You don't see it that often. I happened to be Googling somebody in Ireland the other day. Most of the places were using .com, but this was using .ie, and I thought, "What is .ie?" but it turns out it was Ireland. Tell us a little about your business. What kinds of clients do you have? Is there seasonality? Chris: We help personal injury attorneys. We primarily work with personal injury law firms that are midsize to large. 
Typically not solo practitioners and new firms, but more established firms trying to break into major markets in metropolitan areas, your Chicagos, your Philadelphias, your bigger cities that have a lot of competition. We've been around since 2013. We don't work with a high volume of clients because our investments are higher, because to rank in these big cities takes a lot of quantitative actions, a lot of production. We currently work with around 45 to 50 firms, and that's what we do. We do search engine optimization for personal injury law firms. Sharon: Search engine optimization for personal injury law firms. To me, that seems like a lot. It's great. Are these typically smaller firms that are in—I don't know—Podunk, Iowa, and they say, “I want to go to the big city”? Is that what happens? Chris: Typically, it's one of two things. It's either a TV, radio, traditional advertiser that wants to focus more on digital that has a larger investment. They have more capital to invest. Or, it's someone that wants to get creative and focus on digital to try to take market share away from the big TV advertisers. Most of the time it's individuals in big cities because there are tons of personal injury attorneys. Right now, I'm in Marion, Illinois. There's a handful of attorneys. Most of them aren't focused on marketing. Just by the nature of having a practice, they typically show up in the Map Pack. That's not the case in Chicago. You actually have to aggressively market to show up on the first page of Google. Sharon: If somebody's already spending a lot of money on TV or radio or billboards in Chicago, are your clients people who have turned around and said, “I can do better if I put this money all into digital and rankings.” Does that happen? Chris: I personally am not an “or.” I'm an “and.” You did TV? Well, let's also do SEO. Let's also do pay-per-click. I like the omnichannel approach. I think there are two types of marketing. There is lead generation and direct response. That's your pay-per-click, your SEO, things like that. Then there's demand generation and brand building. The thing about demand generation and brand building is they actually complement direct response, and you can get lower cost per acquisition. To give you an example, if you're a big TV advertiser and have an established brand, and someone types something into Google, you may capture that click because they recognize your company as opposed to someone that isn't as known. I think they all work together. Of course, we're always playing the attention arbitrage game. We want to go to the locations where our money can carry the most weight to get us the most attention. For example, right now, individuals are going to TikTok and Instagram Reels and YouTube Shorts because there isn't the same amount of competition there. That's where a lot of tension and competition are occurring. It's a constant game, and it's something to be apprised of and aware of what's going on. Sharon: Is that something you also do in terms of rankings? Do you do TikTok or Instagram or anything like that, or Google My Business? Is it all of those? Chris: We use that ourselves to market our business because we're omnichannel, but for our clients, we focus solely on design and SEO. That's simply because we have intense focus and expertise in those areas. We want to be the best in the world and really dialed in to all the fundamental changes that occur. 
But knowing that limitation, knowing that there is more effort and sacrifice if someone wants to come to us because we don't do everything, we like to be aware of who is providing services in those other areas. Who's the best at pay-per-click, who's the best at social media. We try to make it as easy as possible to get our clients help in those areas too. Sharon: How do you keep up with everything? There are so many different things. Chris: Obsession. I think of it as a game. I always tell people that running a digital agency is like a game that pays me. I truly believe that, because I enjoy what I do. I don't love the quote that if you love what you do, you never work a day in your life. I don't believe that's completely true, but I don't have the same stressors and I enjoy what I do. So, that's an obsession. Sharon: So that's dinner-table talk. Chris: Oh, yeah. Sharon: What keeps you attracted to attorneys? A lot of people say, “O.K., I've had it.” What keeps you attracted? Chris: I think they're providing a good service to the common individual and fighting against big insurance companies. Generally, they get a bad rap, particularly personal injury attorneys. They're referred to as ambulance chasers. Sometimes individuals get creative, and they refer to me and our agency as an ambulance-chaser chaser. But in general, they're the plaintiffs; they're trying to help individuals that have been injured. I think where they get a bad rap is sometimes people are banging down their doors and soliciting them right after they're injured or in the hospital bed. Other times, you'll see these big billboards where it's like, “How could you possibly put that up on a billboard?” There's a complete lack of EQ or empathy. It's like, “Congratulations. You just lost a leg. Contact us,” or “Congratulations. Someone's seriously hurt.” It's just the wrong messaging. That's where they get a bad rap, but the overwhelming majority are truly trying to provide value and help these injured victims. Sharon: Do you ever work with defense firms or law firms that aren't personal injury? Chris: That's a good question. Our focus and expertise is personal injury, and what I tell other businesses and my peers is that it gives us optionality. If I think we can help a law firm and we can serve them and continue to provide extreme value, we will selectively take those opportunities. Right now we have about 45 clients, and I think three of them aren't personal injury law firms. It just happened to be the perfect prospect for us. They were in competitive markets. They had these clearly defined goals and brands, and we wanted to help them. Sharon: How about other legal services, like—I forget; I think it's Legal Voice or something like that. If it's a graphics firm that does graphics for trials, do you work with that kind of firm? Chris: We've worked with some. I can't think of any specifically. I would say our business is more focused on the front end, the marketing and awareness side, and less on the sales intake or operations side. Operations would be your trials and customer service and things like that. At this point in time, we're focused solely on lead generation, and that's an issue upon itself. Our job is to overwhelm the sales department. Intake is a whole different ballgame. Sometimes intake has to be addressed, but it's not us. We have referrals that we give for that. Sharon: Do you work with only lawyers, or do you work with marketing directors at these firms? Who are you typically working with? 
Chris: Most of the time it's the lead attorney. There are some firms that have a CMO or a marketing manager, but I would say that's the minority. When they get a CMO, typically it's at your higher eight-figure or nine-figure firms, and they will start to bring these services in-house. So, most of the time it's still the lead attorney. Sharon: You used a term I hadn't heard before, end-to-end SEO. What does that mean? Chris: It's a great question. A lot of digital agencies that are full-service, they'll offer design and social and PPC. They have a very narrow span of control, meaning you get assigned an SEO specialist, and that SEO specialist is supposed to be able to write content, optimize your site, do your local SEO, do your link building. Look, I don't believe in unicorns. I don't think people have the skillset to do all of those. So, when I say end-to-end, we have a dedicated content department with writers; we have a dedicated, on-site SEO and technical department to optimize your site; we have a dedicated local department that only works with local maps and helps you on the Map Pack; we have a dedicated link-building department. It's the full spectrum of SEO as opposed to getting these generalists, where maybe they're good at one thing and not good at the other things. Sharon: Do you think your market understands the term end-to-end SEO? Chris: Probably not. I probably should work on the copywriting a little bit, but I do like to make that distinction. Even though we're specialists and do only SEO, you can take it a step farther. If you look at how we staff, everybody's an SEO specialist, as opposed to it being an add-on or backend service. Sharon: The Map Pack, is that where you have the top three local firms on a map near you, when you search "Starbucks near me" or "Personal injury firm near me"? I say Starbucks because we did that last weekend. I know things are always changing, but if it's a one- or two-person personal injury firm and they don't have the budget you're talking about, can they do anything themselves? What do you recommend? Chris: That's a good question. If you don't have a budget, try to scrape your budget together and get a website made the easiest way you can, whether it's a WordPress site or a template. That's your main conversion point. Try to get your practice area pages and your sales pages created as an outlet for conversions. If you don't have a big budget and you're in a metropolitan area, I would encourage you to look at other opportunities to generate business, potentially on-the-ground, grassroots business development practices where you're making relationships with other attorneys. That can carry a lot of weight and get you started. SEO is a zero-sum game. Either you rank in the top positions or you don't, and if you don't, you're not going to get the clicks. If you're on the second page of Google, you might as well be on the 90th page. No one goes to page two. So, if you're going to do SEO, you can't just dip your toe into it. You've got to go in ready to swing and ready to do the quantitative actions to get results. Otherwise, you might as well not do it at all. You might as well choose a different channel. Sharon: That's interesting. So, if you Google your firm and find you're on the second page, should you just give it up and say, "O.K., I'm not going to do anything in this area"? 
Chris: If you're working with an SEO agency and you're on the second page of Google, I would tell you to—well, first of all, depending upon the length of time you've been with them, if you've given them sufficient time, then I would say you probably need a different SEO agency. If you are on the second page of Google and you're not doing SEO, that's O.K. You could still rank for your brand, your firm name, particularly some of the attorney names, the name of their company. There are probably not going to be many of those. You're probably going to rank for that. I would find a different way to generate leads. It may even mean working for someone else to generate revenue before you go in and start your own practice. Sharon: So, being a lawyer in a law firm first and getting your feet wet that way. You mentioned something about the length of time. How long should you give a firm before you say, “O.K., thanks”? Chris: I'm going to give the lawyer answer here. It depends. If you've been doing SEO for a long time and you have a tremendous amount of links and content, it could be a technical SEO coding issue, maybe a site architecture issue. Maybe you need as little as 90 days to truly make a huge impact. We just took on a client in Florida that had a tremendous amount of links, a tremendous amount of content. We literally just unclogged the sink, so to speak, and they're skyrocketing in a short amount of time. If you're in a major market and you just got your website built and you don't have links, it's going to take some time. All of these SEO specialists will say it takes six months. That's completely untrue. It's based upon the gap. What are you benchmarking against? What does the data show? It could be nine months; it could be 14 months based upon the quantitative actions you're taking. If you don't take the correct quantitative actions, you could be treading water, too. So, it really depends. You can see results quickly. It just depends on where you're at in your state for your firm. Sharon: Since you work with attorneys, I'm sure more than once you've heard, “Chris, I've waited three months. What's going on? How long do I have to wait? We're pouring money into this.” What's your response? Chris: That's a great question. We try to set those expectations on the front end before we even sign them as a client, but occasionally those situations will slip through. Maybe we didn't have those conversations enough or they weren't clear enough. We have a series in our onboarding called “Teach Our Clients Not to Be Crazy.” I'm being really transparent here. Clients become crazy when expectations were not set. If they're set in the front end when we sign them and it's part of our onboarding processes, we say, “This is how long it's going to take to get results.” We're not three months down the road getting that, because we already told them on the front end this is how long it will take. The same for your operation processes like content or reporting. You report our meeting cadences, your communication preferences, all these things. We do that in our “Teach Our Clients Not to Be Crazy.” That's the biggest issue. Most individuals don't have those expectations set well enough on the front end. Sharon: So, you basically say, “It depends. I don't know. We'll have to see. We have to look at your website.” Do you start usually by looking at the site architecture? Do you change—I forget what you call it—the headings at the top of the page, things that are searchable? 
Chris: We have a very thorough diagnostic that uses a lot of data from different APIs, Ahrefs, and other tools that help us with benchmarking and setting these goals and KPIs. We look at three primary pillars. We'll look at their content to see if it's targeting keywords properly, if it's well-written, if they're missing content. We'll look at their architecture, like you said, to see if the information is easily accessible, if they can Google the website and the consumer can find the information they're looking for, if it loads quickly. Then we will look at their backlink profile to see if they have enough endorsements. If you're trying to win an election, you want to get as many votes as possible. If you're trying to win the first page of Google, you want to get as many high-quality links as possible. So, we'll take a look at that too. There are a lot of subcategories to those, but those are the big, top-level things we look at. Sharon: Of course, we're a PR firm and we do a lot of PR, a lot of article writing for the media. We've had SEO companies say, “I want to see the article before you post it. I want to pump it up, add words, delete words.” Do you do things like that, or is that more on the PR side? Chris: I'll be transparent. I don't love it because it hurts things from a throughput perspective and getting it to the end. It's a bottleneck. It delays things. We do heavy, up-front analysis of the content to try to identify voice and their style. We go through a style guide and try to identify their taglines. It's very cumbersome up front. Then we try to get their permission to not do the approval process. Not everyone will allow us to do that, but we like to say it delays us. If we're an SEO agency and we write 40 articles a month, and if the client takes a month to approve them, we don't have any content to market. So, we try to avoid that when we can. Sharon: Yeah, lawyers didn't go to law school for SEO; they went to be lawyers. Chris: And I think there's this perception where they think everybody in the world is going to see the content. We can publish the content then make edits post-production. I know that's a bit different from what you do, Sharon, with PR, but for us, we can control and make changes. You see something you don't like, we'll just change it. Sharon: How important is money? You emphasize that in your own marketing. There's always a debate with personal injury firms. Do people care about warm fuzzies, or do they care about your wins? What do you look at? Chris: That's a deep question. I'm a big fan of Naval Ravikant, and he talks about— Sharon: I'm sorry, who? Chris: Naval Ravikant. He talks about people's motivations based upon status or wealth. Status is a zero-sum game; there's a winner and a loser. A lot of attorneys love trial because there's a winner and a loser. Sports is a zero-sum game. So, there's status orientation. Then there's wealth. Wealth is not a zero-sum game. Many individuals can be wealthy. So, it depends on their demeanor. I think some of them are more status-oriented and want to be the heavy-hitting trial attorneys and peacock and be the man, but then there are others that don't care. They'll let the other individuals shine and they're more wealth oriented. You see this a lot in society. Individuals will choose to go against common things, but they're doing it because it's a status play. It brings them status to be against the big billionaires or whoever. 
That's a whole different conversation we'll probably want to avoid, but that's the way I see it. Sharon: Do you basically stick with the marketing they have? If they call you in to do SEO and you look at their website and messaging, do you stick with that or do you recommend a change? Chris: We absolutely will make recommendations if we see an opportunity to help them. Ultimately, if they're signing more cases, it helps us; we have more opportunities to do different SEO for different locations, for retention, for security. Individuals that are living and dying by each lead are the ones that are emailing you every single day, “Where are my leads? Where are my leads?” We just try to do the best. If we think we have excellent rankings, and maybe they don't have the correct copywriting or positioning conversion points, we'll absolutely make recommendations for branding or anything that can help them. Sharon: Have you ever let a client go because they were too anxious or they wouldn't listen to you, or you thought, “This is not going to work”? Chris: Yeah, I wouldn't say very frequently, but absolutely. We've done it a couple of times this year under different circumstances. At the end of the day, your team has to feel welcome and hungry and motivated to work on your client. I want to have a culture where people enjoy their work. Sometimes we've had individuals that weren't respectful or the best from the culture perspective. Look, at the end of the day, it's not worth it. I know our employees really appreciate that we have their back when those situations occur. When you take care of your employees, they're going to take care of your clients. Sharon: Another question, one that's important to me. I'm not sure I understand it, but how can you work with a client in more than one market? Can you only work with one law firm that wants auto cases in Philadelphia? If client B comes and says, “I want auto cases in Philadelphia,” can you do that? What do you do? Chris: That's a great question that has been debated on and on in the SEO community. What I'll tell you is that radio and TV own the distribution rights. They already own the distribution. For SEO, it's determined based upon proximity. I'll give you an example, and then I'm going to circle this around. If you go on vacation to St. Louis and you type “best restaurants near me” in your phone, you're going to see restaurants nearest your proximity. You're not going to see them 10 miles away or 20 miles away. In some situations, if you have a big market, let's say Houston, you could, in theory, have multiple clients in Houston. You could have one downtown, one in the northeast, and there will not be a true conflict because of the proximity factor. Having said that, I personally have given up on trying to educate our clients on this because, at the end of the day, it's what they feel. So, we only take one per market now. In the past, I was very resistant to it because of the proximity. We've done our own data studies, but the SEO industry itself, it's perceived as a snake oil salesman. Any time I would try to educate about proximity, it's like they have earmuffs, and they're like, “Oh, another snake oil salesman.” So, I've basically given up. It's what they want; it's their perception, so we just take one per market. Sharon: Let me make sure I understand. Are you saying you think it could be done, but your client doesn't want that? Chris: Yes, that was what I was circling around to. 
Because the Map Pack, which is the best virtual real estate we talk about, after about one mile, your rankings start to deplete based upon your physical location. One of the biggest things I see attorneys do wrong is they'll have an office in Orlando or Houston, and they'll think about going to an entirely different city. They don't understand there's a big portion of their market that's not covered just because of the location where their office is. It may be better to actually open a second office in the same city than to go to an entirely new city based upon proximity. Sharon: Physical offices may not be the same today as they were a few years ago, but the law firms that advertise will advertise 20 different locations. What location do you use? The main location? Chris: First, I'll say all attorney listings are supposed to follow Google's guidelines. Google's guidelines state that the office has to have staff during your regularly stated hours. That's the big one that most don't do. It has to have signage. It has to be an actual brick-and-mortar with an office space. It can't be a shared office. You'll see a lot of fake satellite offices. Technically, they're violating Google's guidelines. So, when we say they should expand, we tell them to follow the book. Get a lease. Make sure it's staffed. Have proof of that. Have signage. Have business cards so if there's any question, here's the proof. That's the way to do it by the book. There are many firms that do not do it by the book, but again, we can educate them as to the best ways to do things. Then it's their choice on how they proceed. Sharon: I can see them saying, “That's nice, Chris. O.K., thanks.” There are people listening today who are going to get off the phone and go look at their website and say, “What am I doing right? What am I doing wrong?” What are the things they should look at right away, the top three things to evaluate whether this is going to work for SEO? Chris: I would say read your content first. Does the content answer consumer intent? Do you think it would answer your customer's pains? Is it well-written? Is it formatted well? Can they find the information they're looking for? That's where I would start. Looking at things like links, you need to use diagnostic tools. You need third-party assistance or someone that really understands that. So, I would pay close attention to your website, to your content. Read it and make sure everything's covered thoroughly. That's where I would start. Sharon: Can you set SEO and then leave it for a few months? If you get things up and running, can you just— Chris: In major metros, typically, you cannot. In most of the major metros, there's always an SEO agency or an in-house team that is constantly foot on the pedal, doing more content, more links, more Google reviews, or eventually you'll lose market share. In smaller markets, you may be able to create a big enough gap where you don't have to touch it as frequently. Maybe there are only a few firms. You can get a big runway ahead of them. But in most markets, it's a constant game. It's not set it and forget it. Sharon: Do people ask you, “Should I add YouTube?” or “Should I link my YouTube? Should I link my podcast or blog?” I know you have a blog. Should those all be linked, and does that help? Chris: Yes and no. I'm trying not to get too confusing for the audience. In general, I would tell the audience to create a link if it can serve the consumer, if you're trying to transition or build brand awareness. 
I know you're aware of this, Sharon, because of what you do for PR. A lot of times, the links are not followed, and they won't contribute or pass equity. A lot of press release sites, a lot of media news sites, don't pass authority back to your site. Is it still a good reason to include a link? Yes, because you could transition a consumer to your website. It could still convert. Is it going to help SEO? Maybe. The traffic might help, but will the link pass authority? Maybe not. Should you link your social assets and directories and things like this? Absolutely. Are they going to help improve your rankings? Maybe. Maybe they will; maybe they won't. Sharon: Is your team constantly Googling your clients? Is it constantly evaluating them? When you say diagnostics, what are you looking for? Are they doing Google Analytics? I don't know exactly what it is. Chris: Yeah, we do. We have several tools that track rankings. Rankings are one of those leading indicators. Just because you have great rankings doesn't mean it's going to generate cases. It's more predictive. So, we look at leading indicators. There's one we look at as an agency. I'm not aware of another agency that does. It's referred to as Ahrefs traffic value, and basically this number shows the amount of money you'd have to spend on pay-per-click to get the same amount of traffic organically. We measure that on a weekly basis. If we see it increase, great. Our rankings and visibility are improving. If we see a decrease, then something happened. It allows us to take action more quickly on a weekly basis than by looking at your Google Analytics traffic or goals and conversions on a monthly basis, which is more of a lagging indicator. We look at a lot of KPIs. We look at leading and lagging. Sharon: You mentioned pay-per-click and social before. You don't do social. Do you do pay-per-click? Do you incorporate that, or is that totally separate? Chris: That would be a situation where we have a few strategic partners we can highly recommend. We work very well with them from a communications standpoint. We feel we're the best in the world of SEO. We try to find the best in the world of pay-per-click and these other services and let our clients work with those individuals. Sharon: That's interesting to me, because I always think of pay-per-click as part of SEO in a sense. There are so many perspectives on SEO. Should you focus on this? Should you focus on linking everything? Should you focus on YouTube? That's why it's always changing. What are your thoughts about something like that? Chris: Again, I'm a big omnichannel person, so I think there are a lot of different places where individuals congregate and hang out. They could hang out on Facebook; now that audience is depleted, so let's go to Instagram. Now that audience is depleted and it's going to TikTok or YouTube. I think you need to do it all. The difference between pay-per-click and SEO in my eyes is with pay-per-click, you're leasing visibility. The moment you quit bidding, you're gone. It's great. You can get that visibility immediately. With SEO, you're creating a library so people can pull these books from the shelves when they have a certain query. The more content and queries and keywords you target, the bigger your library is, the more opportunities there are for consumers to find you. I look at it more as an asset as opposed to a leasing situation or a liability perspective. That's the way I look at it for SEO. It just gets better with time. 
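The "traffic value" leading indicator Chris describes boils down to simple arithmetic: for every keyword the site ranks for, multiply the estimated organic clicks by what that click would cost in paid search, then sum it up. A minimal sketch of that calculation in Ruby, with made-up per-keyword numbers standing in for a real Ahrefs or rank-tracker export (Ahrefs' own calculation is more involved; this is only the shape of the idea):

```ruby
# Approximate "organic traffic value": for each keyword a site ranks for,
# multiply the estimated monthly organic clicks by the cost per click you
# would pay to buy that same click. The keyword data below is invented for
# illustration; in practice it would come from a keyword/rank-tracking tool.
keywords = [
  { term: "car accident lawyer houston",      est_monthly_clicks: 320, cpc_usd: 210.0 },
  { term: "houston personal injury attorney", est_monthly_clicks: 150, cpc_usd: 180.0 },
  { term: "what to do after a car accident",  est_monthly_clicks: 900, cpc_usd: 12.5 },
]

traffic_value = keywords.sum { |kw| kw[:est_monthly_clicks] * kw[:cpc_usd] }

puts format("Estimated monthly organic traffic value: $%.2f", traffic_value)
# Tracked weekly, a drop in this number flags lost visibility earlier than
# monthly Google Analytics goals or lead counts would.
```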
Still today, even though there are all these different mediums, it's still one of the best in cost per conversion and cost per acquisition. With pay-per-click, the amount per click has exponentially increased. Now, we're looking at $300, $600 per click. Facebook ads have gotten more expensive, and you're not seeing yourself on the organic feed as much as you used to. It's more pay to play, but we still see a lot of value in SEO. Sharon: I would think it would be foundational in the long term. No matter what else is coming, you are still going to need that. Do you work with your clients on the intake process? What if you're generating these leads and they're blowing it when somebody calls? Chris: We secret shop them. We secret shop our clients. We listen to calls. There's nothing worse than when we generate leads and the phone's not answered or calls aren't returned. It's our job to overwhelm the sales department. The moment we get any insights into where sales could be improved, we make those recommendations because it impacts us. We can generate a thousand leads, but if they're not getting assigned, we're going to get fired because they're not making money. Sharon: How are you tracking that? Do you work with people inside for that to work? Chris: Yes. There are certain CRMs we recommend. There are a few consultants we recommend. There are even outsourced intake services we recommend for all those scenarios. It depends based upon the type of firm. There are some firms that are settlement firms, so they don't do a lot of litigation. They're really high-volume. Then there are litigating firms, where maybe their case criteria are super high and they don't do volume. The way you staff those sales teams is different, so it depends based upon our recommendations. Sharon: Going back to what you were saying before about working only with personal injury firms, I would think they're not scared off by big marketing budgets or the big numbers you might be throwing around. When you read the Wall Street Journal, they're spending millions of dollars on stuff like this. I don't know if you find that. Chris: They're not afraid to spend money; I'll say that. It is definitely increasing in most major markets. You're not going to do TV in most markets for less than $50,000 a month. Pay-per-click, you're not doing that for less than $10,000 typically. There's big money in personal injury because there's a lot of opportunity. There are a lot of different insurances and big insurance companies. It's a behemoth that takes advantage of a lot of consumers, so they definitely invest a lot. Sharon: Chris, I really appreciate your being here today because this is, to me, foundational. It's not going away no matter what comes. Thank you so much for sharing all your expertise with us. If things ever change with SEO, we'll have you back. Thank you so much. Chris: Awesome. Sharon, thanks so much for having me.
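For the "read your content first" self-check Chris recommends, a short script can surface the basics of a single landing page: title, meta description, headings, word count, and a rough fetch time. A sketch in Ruby follows; the page URL is hypothetical, it assumes the nokogiri gem is installed, and it is a starting point rather than a full SEO diagnostic (no rendering, no link analysis, no rank data):

```ruby
# Quick on-page snapshot of one landing page. Fetches the HTML, times the
# request, and prints the elements a human should read first.
require "net/http"
require "uri"
require "benchmark"
require "nokogiri" # gem install nokogiri

url = URI("https://example-law-firm.com/car-accident-lawyer") # hypothetical page

response = nil
seconds  = Benchmark.realtime { response = Net::HTTP.get_response(url) } # does not follow redirects
doc      = Nokogiri::HTML(response.body)

puts "Status:      #{response.code}"
puts "Fetch time:  #{(seconds * 1000).round} ms (network only, not a full page render)"
puts "Title:       #{doc.at_css('title')&.text&.strip}"
puts "Meta desc:   #{doc.at_css('meta[name="description"]')&.[]('content')}"
puts "H1 headings: #{doc.css('h1').map { |h| h.text.strip }}"
puts "Word count:  #{doc.text.split.size}"
```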
This episode of Tech Sales Insights features Chris Bowen, Senior Vice President for Sales at Hammerspace. Sales and marketing are essential pillars in any business, and Chris discusses how these two departments align to ensure that their messaging is synced. He also discusses the central role of enablement in ensuring that the sales team is always focusing on the problems they solve rather than the technology and features, which do not contribute to value-selling.

HIGHLIGHTS
Aligning sales and marketing to the problems they solve
Value-selling: it's not about the tech, it's about the use cases they solve
Sales leaders to look up to and advice from mentors

QUOTES
Customer success today and moving forward - Chris: "Most of it's more on implementation to make sure that we're getting customers that have gone through the POC stage, getting them implemented into production, and then make sure they're happy. So that's something that we'll also start to grow as we move into early 2023."
Build your network and keep learning - Chris: "It's really about building relationships. I mean, at the end of the day, whether it's relationships internally or externally with customers, those last a lifetime. Whether it's this company or 10 years from now."
Tips for new sellers to be a great salesperson - Chris: "You should have, at any point, when you're 6 to 8 months into your new sales job, you should have 16 to 20 opportunities and 4 to 6 of those should be at various points of the POC stage. And you should be touching 5 partners a week and even 5 customers a week."

Find out more about Chris in the links below:
LinkedIn: https://www.linkedin.com/in/bowenca/
Website: https://hammerspace.com/

Send in a voice message to us: https://anchor.fm/salescommunity/message
It's Steph and Chris' last show. Steph found a game, and if you've been following the journey, all of the Test::Unit test files are now live in RSpec. JWTs really grind Chris' gears. They wrap up with things they've learned, takeaways they've had, and their proudest podcasting moments. They also thank all the folks who've helped make The Bike Shed happen. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack. Microservices (https://www.youtube.com/watch?v=y8OnoxKotPQ) Transcript: CHRIS: One more round of golden roads, our golden. So here we go. STEPH: Oh, one more round of golden roads. Okay, maybe that's going to get to me today. [laughs] CHRIS: [singing] Golden roads take me home to the place. STEPH: [singing] I belong. CHRIS: Yeah, there you go. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together we're here to share a bit of what we've learned along the way, at least one more time. So with that [chuckles] as an intro, Steph, what would you say is new in your world? STEPH: Hey, Chris. Well, today is the big day. It is the day that you and I are recording our final Bike Shed episode, which we have all the feels about, and we will definitely dive into. But to ignore some of that for now, I have another small fun update I can provide about a new game that I found. So one of the things that's new in my world is I started playing a new board game with Tim; it's called Ticket to Ride. Have you heard of that? CHRIS: I have. I don't know if I've played it. I feel like it's a particularly popular one now. But I don't know if I've ever had the pleasure. STEPH: It's a very cute game, so we have the smaller version of it. For anyone that's not familiar, it's essentially a map. And then there's a bunch of spots where you can build trains and connect them, and then you get tickets. So your goal is that you're going to connect one location to another location. And then you get points and yada yada, but it's so much fun and especially the two-player version. It's like this perfect 20, maybe 30-minute game. I'll be honest; I'm not really a board game person. I always enjoy it. Once I get into it, then I'm like, this is great. I don't know why I was resistant to this. But every time someone's like, "Do you want to play a board game?" I'm like, "Not really." [laughs] I first have to get into it. But I have really enjoyed Ticket to Ride. That's been a really fun game to play. And it's been a nice way to, like, even during the day, we'll break for lunch and squeeze in a game. CHRIS: Well, I love good two-player games. They're hard to find. But when you find a good one, and it's got that easy pickup and play...I believe I'm going to now purchase this. And thank you for the tip. STEPH: Yeah, this is definitely one of those where it's easy to pick up, and then you can get the expanded board. So there's a two-player version, but then yeah, you can get one that's a map of the U.S. or a map of Europe. And I think it accommodates up to five players as the maximum, so not a huge group but definitely more than two. On a slightly more technical note, I have something that I'm very excited to share. 
It is a journey that you have been on with me, that everybody listening has been on this journey with me. And I'm very excited. I see you nodding your head, so I'm guessing that you're going to know where I'm headed with this. But I'm very excited to announce that all of the Test::Unit test files now live in RSpec. So that is a big win. I'm very, very excited for that to be a previous state of life and not an ongoing state of life. Because I have certainly developed too much niche knowledge around migrating these tests, and that became apparent to me when I was pairing with another developer that works with the client because they had offered...they had some time. They're like, "Hey, do you want help migrating a test file?" And I was like, "Sure." I was like, "But this is wonky enough, like, we should pair and work on this together because I just know some ins and outs. And I don't want you to have to learn a lot of the hard lessons that I've learned." And the test that we happened to pick up was very gnarly. It had a lot of mystery guests. And we spent, I think it was a good two hours. And we only migrated one of the tests, so not even a full file but one of the tests. And at the end of it, I was like, I know way too much about some of the oddities and quirkiness of this. And we got through it, but we decided that wasn't a good use of their time for them to go at this alone. So that's why I'm extra excited and relieved because I didn't want this task to carry on to someone else. So, hooray, we did it. CHRIS: Hooray. Just in time. You're Indiana Jones grabbing your hat right as you roll out and off to [laughs] be away from the project for a bit. So you stuck the landing. Well done, Steph. STEPH: Thank you. Thank you. So that's some great news. And then also, everything else in life is pretty much focused around getting ready for maternity leave. That's about to happen soon, and I am so ready. I have thoroughly enjoyed a lot of the things that I'm doing, [laughs] but goodness, being pregnant is hard. And I am very much ready for that leave. So also, a lot of the things that I'm doing right now are very focused on making sure everything's transitioned and communicated and that I just feel really good about that day of departure. That covers all the newness in my world other than the big thing that we're just not talking about yet. How about you? What's new in your world? CHRIS: Well, continuing to skirt the bigger topic that we will certainly get to in the episode, what is new in my world? I'm actually quite excited workwise right now. We have a much larger body of work that finally we got the clarity. All the pieces fell into place, and now we're sort of everybody rowing in the same direction. There's interesting, I think, really impactful code that we're writing for Sagewell right now. So that's really fantastic. We've got the whole team back together on the engineering side. And so we're, I think, in the strongest and most interesting point that I have experienced thus far. So that's all really fantastic. On a slight technical deep dive, you know what really grinds my gears? It's JWTs. JSON Web Tokens and I have never gotten along. It's never been a match made in heaven. And we have a webhook that comes from Plaid. Plaid is a vendor for connecting bank accounts and whatnot. And they have webhooks like many people do. So they can inform us when things change, lovely feature of how we build web apps these days. 
But often, there's a signature that says, "This is definitively from us, and you can trust us." And usually, it's some calculated signature, HMAC, or something like that. For some reason, Plaid's uses JWTs, and more than that, they use JWKs. So there's JWT which is the signature. That JWT itself is signed with a JWK. You have to fetch the JWK from their server based on the key ID in the header of the JWT. But how do you know if you can trust the JWT before you've gotten the JWK? All of this broke in a recent upgrade. We went from Heroku-20 to Heroku-22 to the new platform with Heroku, which bumped us to OpenSSL 3.0, and it turns out JWT doesn't work with it. And so that's sad. It's a no. It's going to be a no. It turns out the way that OpenSSL 3.0 works is incompatible with some of the code paths in JWT. And so I was like, wait, we just can't do this? And it's low-level cryptographic primitive stuff that I'm not comfortable messing around with. I'm not going to hop in there and roll up my sleeves. And even just getting to the point that I understood what was broken about this took like an hour and a half just to sort of like, wait, which is okay...so the JWT signs and encodes. And this will be a theme that we come back to later, but I think web development should be simpler. I think we should strive for simplicity. And this is a perfect example where I'm guessing Plaid uses JWTs and that approach to communicating security things often, but I've not seen it used much for signing webhooks. And, oof, it led to a complicated day. And it's unfixable now as far as I can tell. There is a commit on the JWT Ruby repo as of five days ago, but it doesn't build in our system. And it's not released. And it's just a mess. So yeah, engineering is complicated. I'm both wildly excited about what we're doing at Sagewell, and then today was this local minimum of like, oh, JWTs again. Again, we find ourselves battling. And you won today, but hopefully not for too long. STEPH: Oof, how did this manifest that you first noticed? So is it because a webhook suddenly stopped working, and that was like the error that rose up, and that's what helped you dive into it? CHRIS: Yeah, we have a little bit of code in the controller for where Plaid events come in. We calculate and verify the signature of the webhook to make sure that it's valid, and we reject it otherwise. And we alert ourselves via Sentry, and then we also have a Datadog scan that can show what's the status code of the response. Because these are incoming HTTP payloads or requests, and so we can see there were 200 up until this magical day when suddenly everything changed. And that was when we switched Heroku stacks. And then we can see it also in Sentry. So we're able to look at it, and we're like, why are none of the Plaid webhooks able to verify the signature anymore? That seems weird. And so then Datadog confirmed that it consistently was broken from this point in time. And then we were able to track that back. It was also pretty easy to guess because the error was "pkeys are immutable in OpenSSL 3.0," and that was the data. And I was like, oh, cool, that sounds fun. Let me go figure out what that means. STEPH: [laughs] Well, it's a nice use of Datadog. I remember in the past you were talking about adding it. And I was excited because I've never been at that point where a team has just introduced it; either a team doesn't have it, and they wish they had more insights, or they have it and don't use it. And nobody ever checks the board. 
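For anyone who has not hit this pattern before, the verification flow Chris describes looks roughly like this: read the key ID out of the JWT header without trusting it yet, fetch the matching JWK from the vendor, then verify the token and the body hash. Here is a sketch in Ruby using the ruby-jwt gem; fetch_signing_jwk is a hypothetical helper standing in for the vendor's verification-key endpoint (Plaid exposes one) plus caching, and the request_body_sha256 claim name follows Plaid's documented scheme, so treat those details as assumptions:

```ruby
# Sketch of verifying a JWT-signed webhook, assuming the ruby-jwt gem.
# fetch_signing_jwk is hypothetical: it should call the vendor's
# verification-key endpoint with the key id and cache the returned JWK hash.
require "jwt"
require "digest"

def verified_webhook?(raw_body, signed_jwt)
  # Decode the header only (no signature check yet) to learn which key signed it.
  _payload, header = JWT.decode(signed_jwt, nil, false)
  jwk_hash = fetch_signing_jwk(header["kid"]) # hypothetical helper

  # Build a verification key from the JWK. (#keypair is the jwt 2.x API;
  # newer releases expose #verify_key instead.)
  key = JWT::JWK.import(jwk_hash).keypair

  # Now verify for real. Plaid signs its webhook JWTs with ES256.
  claims, _verified_header = JWT.decode(signed_jwt, key, true, { algorithm: "ES256" })

  # Confirm the payload itself matches the hash carried in the claims.
  claims["request_body_sha256"] == Digest::SHA256.hexdigest(raw_body)
rescue JWT::DecodeError
  false
end
```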
So that's a nice anecdote for Datadog helping you out. Yeah, I'm not envious of your situation, friend. CHRIS: I do love the cup half full take [laughs] that you have on the overall situation, but that's nice how Datadog worked out for you. And you know what? It was. Thank you, Steph, for once again being that voice of positivity. STEPH: I appreciate that you enjoy it because there are times that when someone points it out to me that I do that, I have to be like, "I'm sorry, I'm not trying to be toxic positivity over here. [chuckles] That's just how my brain works." CHRIS: Oh, you are definitively not toxic positivity. That's a different thing. Because you ended with but also, I feel bad for you, and I'm glad that I'm not in your shoes. So you are the right level of positivity. I don't think I could have talked to you for three and a half years as co-host on a podcast if I didn't appreciate the level of positivity or the general approach that you bring to thinking about stuff. STEPH: Okay. Well, to borrow a phrase from Matt Sumner, who has been a guest on the show, cool, cool, cool, cool. I'm glad my positivity has been well calibrated. And I was about to say I'm interested to hear how this turns out for the team. [laughs] But we're in an awkward spot where I mean, you and I, we can still totally chat. But listeners won't get to hear the rest of that particular saga. I mean, you can share. I mean, you do you. I'm setting all sorts of boundaries for you right now. Okay. And now I'm just rambling, and I'm getting weird with it. Because the truth is that, you know, we won't be back. And this is our final episode together. So I think let's just go ahead and rip off the Band-Aid. Let's dive into it. Let's talk about it. Given that it's our last episode that we are recording, we thought of a couple of things that we'd like to talk about. You brought up a great idea that I'm excited to dive into. Do you want to lead us in? CHRIS: Sure. Well, if we go back all the way to Episode 172, that is the first episode that you came on as a guest. I actually continue to really love the title of that episode, which is What I Believe About Software. And it both captured that conversation really well, but also, more generally, it's actually become the tagline of the show when we do our little introduction. What do we believe about building great software? Et cetera. And I think that's been the throughline of the conversations that we've had is what remains true. What are the themes? Not necessarily the specific technologies, although we certainly talk about that. But what do we believe about building great software? And so today, I thought it would be fun for us to talk about what do we still believe about building great software? It's roughly three and a half years or so that we've been doing this. What's still true? STEPH: Oh, well, I have the first unequivocal one, the thing that I still believe about building great software, and that's you should hire thoughtbot. That's definitely the way to go. We'll help you get it done, not that I'm biased in any way. CHRIS: No. I'd say collectively between us; there's zero bias with regard to thoughtbot or any other web development shop out there. But thoughtbot is the best. STEPH: All right, perfect. So we've got the first one, the clutch one of hire thoughtbot. And then I also really like this topic. And I still think back to that first episode that I recorded with you and how much fun that was and how that really got me to start thinking about this. 
Because it was something that, at the time, I didn't really reflect on a lot in terms of what does it take to build great software? I was often just doing the day-to-day actions but then not really going high-level think about it. So I'm excited this is one of the topics that we're revisiting. So for the next one, this one is, I don't know, maybe it's a little cutesy, but I was trying to think of an alliteration that I enjoyed. And so this one is be an assumption assassin. So what assumptions are you making? And then how can you validate or disprove them? And that is something that I find myself doing constantly. And it always yields better work, better questions, better software, better code, better code reviews. And that's my first one is be an assumption assassin and identify what assumptions you have. And I had a really good example come up today while I was having a conversation with Joël about something that I was looking to merge. But I was a little hesitant about it because there are some oddities that I won't dig in too deeply. But essentially, there's a test that I migrated that highlights an existing concern in the code. And I was like, should I go ahead and merge this test that documents it, or should I wait to fix that concern and address it? And he brought up a good point. And he's like, "Well, we're assuming it's a bug and an issue, but it may not actually be depending on how the software is being used." And so then he was encouraging me to reevaluate that assumption that I had where I'm like, oh, this is definitely a problem to, like, I don't know, is it a problem? Let's ask somebody. CHRIS: First off, I love that as a theme, as one of the things that you still believe about software. Second, I believe you correctly said that you were looking for an alliteration, but my brain heard acronym. STEPH: [laughs] CHRIS: And so then I was like, B-A-A-A. Is it BAAA? What are you going for there? Oh, you just wanted a bunch of As. Okay, I got it now. Secondly or thirdly, I think I'm on my third now. Apparently, within Sagewell team culture, one of the things that I'm most known for is... there are two phrases: one is just to name it, and the other is to be clear. And these are the two things that I do apparently constantly so much that it's become a meme within the team. It's just like, okay, everybody's been talking. But I just want to make sure we're on the same page here. So just to be clear, or just to name it, here's what I'm seeing. But I agree; I think taking those things...what are the implicit bits? What are the assumptions? And making them more explicit. Our job as developers is just to yell at computers all the time and make them try and do human stuff. And there's so much room for lossy conversions at every point in that conversation chain. And so yeah, being very clear, getting rid of assumptions, love it. It's all great stuff. Actually, in a very related note, the first on my list is that code is for humans to read. This is one of the things that I believe most deeply and most impacts the way that I write software. Any given piece of functionality that we want to author in our code feels like 10, 20, 50, frankly, almost infinite different versions of the code that would produce nearly identical functionality. So at the end of the day, the actual symbols and strings of text that we bring together to write the code is all about other humans, other people on your team, you five months from now, you a week from now, frankly, or me. I'm going to say me, me a week from now. 
I want to do future me and everyone else on the team a solid and spend that extra 10% of okay, I have something that works now, but let me try and push it around and try and massage it into a shape that is a little more representative of how we're actually thinking about the code, how we talk about it as an organization. Is that the word that we use to describe that domain concept? Maybe we could change that just a little bit. Can I push more of this into the private API? What actually needs to be known here? And I think that's where I'm happiest is in those moments because that's where all of the parts of the job come together, the bit where I trick a computer into doing what I want and simultaneously making it so that that code is revisitable, clear, expressive, all of those things. So yeah, code is for humans. And that's true across every language, and framework, and domain that I have worked in. And I've only believed it more and more so over time. So yeah, that's mine. STEPH: Yeah, I love that one. That's one of the things that comes to mind when people talk about disliking code reviews. And I can imagine there are a number of reasons that people may have had a poor experience with a code review process. But at the end of the day, if you're not getting that feedback or validation from fellow humans, then how do you know that you've been successful, that you've written something that other people can follow up on? Which goes back to the assumptions in terms of like, you're assuming that you have written something that your future self or that other people are going to be able to read and maintain down the road. So yeah, I love that one. One of the other things that I still hold really true to building great software is prioritize early and often. So always be checking in to understand with your users, with your tech concerns, with data that you may have, new insights, and then just confirm that yes, you and the team are constantly working on the thing that has been prioritized and that is the most important. And also, be ready to let go. That can be really hard. I have definitely had those moments in my career where I've spent two weeks working really hard on something. And then we've realized that the thing that we were pursuing isn't that valuable, or it's something that users don't need or actually want. And so it was better to let go of it than to pursue it and ship it anyways. So that's one of my other mantras that I have adopted now is prioritize, prioritize, prioritize. CHRIS: Unsurprisingly, I agree wholeheartedly with all of that. We're still searching for that thing, that core thing that we disagree on other than Pop-Tarts and IPAs. But I don't know that today is the episode that we're actually going to find that. But yeah, prioritizing is such a critical activity. And it is this interesting collaboration point. It gets different groups together. It's this trade-off. It's this balance. And it's a way to focus on and make explicit the choices that we're making. And we're always making choices. We're always making trade-offs. And so being more explicit, being more connected and collaborative around those I believe in so, so, so much. So love that that was something on your list. Let's see, next up on my list is reduce complexity, just sort of as an adage, just always be reducing complexity. 
It is amazing to me in my time, particularly as a consultant, but even now, this is something that I hold very true is just it's so easy to grow a system in anticipation of future complexity or imagine that the performance concerns that we're going to run into will be so large that we must switch from Postgres and a nice, simple atomic database into a sharded, clustered Kafka queue adventure. And there are absolutely cases that make sense for that sort of thing. But at a minimum, I beg of you, anyone starting a new system, don't start with microservices. Don't start with an event queue-based system. These are wildly complex versions of what often can be done with so much simpler of an application. And this scales through to everything. What's the complexity of an API? Do we need caching in that API layer? Or can we just be a little bit inefficient for a little while and avoid the complexity and the overhead of caching? Turns out caching is a tricky thing to get right, just as an aside. And so the idea of like, oh, let's just sprinkle in a little bit of caching. It'll be easy, and then we'll get better performance, like, yeah, but did you get it right? Or did you introduce a subtle bug into your program that's going to be really hard to debug later? Because do you cache in development? Well, maybe, I'm not sure, could be. So over time, this is something that I've sort of always felt, but I've only ratcheted it up. It's only something that I've come to believe in more and to hold more firmly to. I think earlier in my career, it was something that I felt, but I would more easily be swayed by aspirational ideas of the staggering amounts of traffic that we would be getting soon or the nine different ways that the data model will expand. And so, we should code the current version in anticipation of that. And I have become somewhat the old man on his lawn yelling at the clouds like, "Nah, we don't need it yet. We can grow to that." And there's a certain category of things that are useful to try and get out in front of and don't introduce additional complexity, but they're a tiny, tiny list. And so, for most things, my stance is what's the simplest thing that we can get away with right now, that still provides a meaningful experience to our users, that doesn't compromise on security or robustness or correctness but just solves the problem we have right now? And over and over and over again, that has served me incredibly well. So yeah, keep that complexity at bay. STEPH: That is one that I've definitely struggled with. And frankly, it works in my favor, that idea of keeping things simple. Because I'm terrible when it comes to predicting the future or trying to build things in a way that I just don't have enough information to really drive the architecture or the application that I'm building. So anytime I'm trying to then stretch and reach for the future in those ways unless I really have a concrete understanding of I am building for these particular scenarios, it's really hard to do. So I very much like keeping it simple and not optimizing before you need to. And it reminds me of I think it's Mark Twain, who has a quote, "Worrying is like paying a debt that you don't owe." And that's something that comes to mind for me when also writing code and building features and software is that I tend to be someone who will worry about stuff. And I'm like, oh, is this going to be easy to extend? 
Is it going to be what it needs to be six months from now if we need to add more features to this and build on top of it? And I have to remind myself it's like, well, let's just wait. Let's wait till we get there and we know more. One of my other ideas that couples nicely with the one that you just shared in regards to keeping things simple and then waiting for those needs to arise is that mistakes are going to happen. They are a part of the process. As we are learning and growing and we're stretching our skills and trying things out, things are going to go wrong. We're going to introduce bugs. And to take those opportunities, that's when we start to use that feedback to then improve things like observability, like capturing logs, and how we handle error reporting or having a plan for emergencies. So maybe that's the part of worrying that can pay off is thinking through, all right, if something does break, or if something gets shipped that shouldn't, then what is our plan in how we handle that? How do we roll back? Or how do we get things back to a stable build? CHRIS: It's funny. I was actually visiting with a friend this past weekend, and we were chatting more generally about life things but the idea of worrying and anticipation and trying to prepare for every bad outcome. And there's the adage of an ounce of prevention is worth a pound of cure. But increasingly, both in life, depending on the context, and in code, I've found that I've shifted to the opposite of it's impossible to stop everything. There are going to be bugs that are going to get out there. There are going to be places where we code things incorrectly. And I would rather...I still want to try as hard as I can to get things right, to be clear. I'm not giving up on trying. But I'm all the more focused on how do we know and how do we recover when those things happen? So it's interesting that you just described exactly that, which, again, is a very human life conversation, and yet it applies to the code. STEPH: I love that rephrasing of it. Instead of the mistakes are going to happen, it's, like, how do we know, and how do we recover? I think that's perfect. I've also found that by answering the how do we know and how do we recover, that really helps you build trust with clients as well. Because again, things are going to happen, things are going to break. And the more prepared you are for that and then the better plan that you have, and then they can watch how you execute that plan, and it's going to establish a lot of deep trust with other engineers and also the team that you're working with, that you have been thoughtful and that you have ideas on how are we going to address this? Instead of waiting for that moment to happen. That's going to happen too. You're going to make decisions in the heat of the moment. But I have found that to be a really useful way to establish yourself with a team in terms of I care about this team and these processes and this application. So how do we handle the bad times, not just the good times? I do want to circle back because you alluded to the fact that you and I, we've tried to find things that we disagree on. And so far, Pop-Tarts and beer have been the two things that we disagree on. But I do have a question for you that maybe I will disagree with you on. But I need to know some more about it first. You have alluded to there's the Brussels snack, (Oh, I'm going to get this wrong.) Brussels sprout snack hour or working lunch, something combination of those words. 
[laughs] And it's the working lunch that has stuck out to me, and I've wanted to ask you about it. So here I am. I'm asking you about it. What's a working lunch? What's the Brussels snack happy hour, snackariffic working lunch look like? CHRIS: This is fantastic. I love that you waited until the last episode that this was rolling around in the back of your head. And you're like, are you making the team work through lunch? And now, on this final episode, we get to address the controversy that has been brewing in the back of your head. Spoiler alert, no, this is just ridiculous nomenclature. These are two meetings that we have that are more like, let's get the dev team together and talk about stuff that's in our platform sort of developer experience. Or stuff in observability often is talked about in this context because it doesn't quite impact users, but it's how we think about the work. And so there are two different meetings that alternate every other week. So every Friday afternoon, we do this, but it's one of two meetings depending on the day. So there's a crispy Brussels snack hour that was the first one that was named, which was named purely for nonsense reasons because we don't have anything else that's named nonsensically in our organization. And so I was like, oh when we name this meeting, we should make it nonsense because we don't have any other...We don't have, you know, an SOA microservices fleet with Barbie doll and Galactus and all of the other wonderful names. Those are references to the greatest video ever about microservices; if you've not seen it, that will be in the show notes. It's required reading. But anyway, we don't have that. And so we thought, let's be funny with the name of this. So the crispy Brussels snack hour is one, and the crispy Brussels we wanted something that was...the first one is a planning meeting. The second is like, let's actually sort of ensemble program. Let's get the four of us together, and we'll work on some of the stuff that we're talking about here but as a group. And so I wanted the idea of we're working, and so I was like, oh, this will be the crispy Brussels work lunch. But it's purely a name. It's the same time slot. It's 3:00 o'clock on a Friday afternoon. [laughs] So it is not at all us working through lunch. I don't think we should work through lunch. I'm concerned that you thought that for a while, and you were just like, I'm a little worried, but I'm not going to bring it up. But I'm glad we got to cover this before we wrapped up this whole Bike Shed co-hosting adventure together. STEPH: I feel relieved and also a little robbed of an opportunity for us to have something that we disagree on because I thought this might be a thing. [laughs] CHRIS: We can continue searching for that thing. But maybe it's okay that we agreed on most stuff for the run [laughs] of this fun, little show that we did together. STEPH: Yeah, that's gone on quite a time. We've got like three years together that we have managed to really only find two, I mean, very important of course, two things. But yeah, it's been pretty limited to those two areas. And each time that you'd mentioned the work lunch, I was like, huh, I need to ask about that because I have feelings about it. But then, you always would dive into very interesting stories of things that came out of it, and I quickly forgot about it. So this feels good. This feels like very good important closure. I'm glad that this finally surfaced. 
But circling back, since I took us on a detour for a little bit, what are some other things that you still hold deeply about building great software? CHRIS: I've really got one last thing on the list. It's interesting, there's not a ton technically in this list, which I think represents broadly how I feel about software, and I think how you feel about software. It's like, it's actually mostly about how the people interact at the end of the day. And you can program in any language or framework, and you can get the job done. We certainly have our preferences and things that we enjoy. But the last one really rounds us out, which is think about the users. I always want to be anchoring the conversations that we're having, the approach that we're taking to building the software in what do the users think? Who are our users? What do we know about them? What do they care about? How are they using this technology? How is it impacting their lives? We've talked a number of times about potentially actually watching the sales demo as an engineering team, trying to understand what's the messaging that we're putting out into the world for this piece of software that we're building? Or ride along with customer support and understand what are the pain points that people are hitting? And really, like, real humans, what are they experiencing? Potentially with a name attached. And that just changes the way that you think about the software. There's also even the lower-level version of it. As we're building classes or modules, what are the public facets of that, and what are the private API? What's the stuff that we're hiding away? And what's the shape that we are exposing to the outside world for varying definitions of outside? And how can we just bring in a little bit of empathy to try and think about, again, in the case of like the API for a class, it's probably you on the other side of it, but it's future you in a slightly different mindset with a little bit less information and context on the current problem that you're working on. And so, how can we make things easier for ourselves in the code, for our users at the end of the day? How can we deliver real value that is not mired in the minutiae of technical complexity and whatnot but really is trying to help people live better lives? That's a little too fancy as I say it out loud. But it is kind of the core of what I believe, so I'm not going to take it back. STEPH: I love how you've expanded users where more traditionally, it's people that are then using the software. But then you've expanded it to include developers because that is something that is often on my mind and something that I just agree with wholeheartedly in terms of when you're writing software; as you mentioned before, software is for people. And so we want to include others. And it does improve people's lives. People show up to work every day, and if you've been thoughtful, if your past you has been thoughtful, it's either going to give you, your future self, a better day, or it's going to give other people a better day. So I think that's a very fair statement, improving lives by being thoughtful in regards to focusing on the users, people consuming software, and working in the codebase. CHRIS: I know we've talked about this before, but I was having a conversation with one of the developers on the team at Sagewell just last week, and they were mentioning how they really loved working on admin features. And I was like, oh, that's interesting. Let's talk more about that. 
And it was really it's that same thing that I think you and I have discussed of like there's that immediacy. There's that connection. These are actually colleagues, but you can build software to make their day better. You can understand in detail what the pain points are. What's the workflow that as you watch it, you're like, oh, I could put a button up in the corner of the screen that would automate almost all of this and your day would be that much faster? Oh, let me do that. That's exciting. And so I love that as another variation of it, like, yeah, there's for other developers. There's also for the admin team or other users in the organization of the software. There are so many different versions of users, but I think I think we build a better thing if we think about them more. STEPH: I have definitely worked with teams where I can tell that certain people are demoralized, and it comes down to they feel frustrated and often disconnected from the people that they are building for. And so then you really feel isolated. I'm pushing code around, but I don't really see the benefit or the purpose of it. And I think that's very hard for developers who typically want to build something that's going to be useful and not feel like it's just going to be thrown away. So connecting your team to those users, I certainly understand. Getting to build something for your colleagues and then they get to say how much they like it is an incredible, rewarding experience. You also touched on something that I really appreciate, where you highlighted that a lot of the technical decisions that we make are important, but they're not at the center of the things that we believe when it comes to building great software. And that's something that I will often reflect on. Like, as we were thinking through these particular ideas that we still hold true today, how mine are more people and process-focused and rarely deep in the technical weeds. And there are times that I think, well, shouldn't there be something that's more technical, something that's very concrete? Yes, you should build your code this way or build your application or use a specific technology. But after all the projects and teams that I've been a part of, that's just usually not the most important part. And so I appreciate that you highlighted that because sometimes I have to remind myself that, yes, those things can be challenging, but it's often with people and process. That's where the heart of great software lies. CHRIS: That's a fantastic phrase, I think, that really encapsulates all of the conversations that we're having here. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. 
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: Actually shifting gears a little bit, so we've just talked about what we still believe about building great software. I'm intrigued. We've been chatting for a number of years here on this microphone, these microphones. We have separate ones because we're in different states. But I'm interested; what have we changed our minds about? What have you changed your mind about, Steph? I got a couple of ideas, but I'm intrigued to hear yours. STEPH: Nothing. I've never been wrong. I've stuck to everything that I've ever thought. CHRIS: That must be boring. STEPH: [laughs] Yeah, that's totally not true there. There are definitely things that I've changed my mind about. One of the things that I've changed my mind about is that people who know the most will ask the fewest questions. That's something that I used to consider the trademark of someone who is a more experienced senior developer in terms of you really know what you're doing. And so you typically don't ask for help or need help very often. And so, I'm going way back in terms of things that I have changed my mind about. But I have definitely changed my mind where people who know the most are actually the ones that do a really great job of constantly asking questions and asking for feedback. And I think that is still a misconception that people still carry forward. The idea that if you're asking a lot of questions or asking for help that you are not as skilled in your work, and I view it as quite the opposite, that you are very good at what you do and that you know precisely the value of your time. And then also reaching out to others for help, and then also just getting validation on things that you may have concerns around. So that's one I've changed my mind on is that I think the more experienced you are, the more questions you tend to ask. CHRIS: Oh, I love that one. It's a behavior that I know...I think we've talked about this before. But as consultants, we try and model it just the like; it's totally fine to ask questions. And because we often come in with less context, it makes sense for us to be asking questions, but I will definitely intentionally lean into it in those contexts to be like, everybody keeps throwing around this acronym. I don't actually know what that is. Let me raise my hand. And my favorite moment is when people disagree on what the acronym or what the particular word or what the particular project is. Like, I ask the question, and people are like, "Oh, it's this," and someone across the room is like, "Wait, that's what it means? I thought it was this totally other thing." I'm like, cool, glad that we sorted that out. Glad that we got that one up in the air. But I actually remember many, many, many years ago, at this point, there was a video series of...PeepCode was the company, and there was the Play by Play series. 
And so there were particular prominent developers, particularly in the Ruby community. And they would come and sort of be interviewed and pair program. And it was amazing getting to watch these big names that you had heard of, like Yehuda Katz is the one that stands out in my mind. He was one of the authors of merb, which was a framework that was merged with Rails, I want to say around the 3.0 time. And just an absolute, very big name in this world and someone that I looked up to and respected. And watching this video, they had to Google for particular API signatures and Rails methods. They were like, "Oh, how does that work? Is it link to and then you pass the name?" I forget what it was specifically. But it was just this very human normalizing moment of this person who has demonstrably done incredible work in our community and produced very complex software still needs to Google for the order of arguments to a particular method within Rails. I was like, oh, okay, that's good to know. And with complete humility in the moment, I was just like, yeah, this is normal. Like, it's impossible to hold all of that in your head. And seeing that early on shook me off the idea that that's the thing to do is just memorize everything. It's like no, no, get good at asking the questions. Get good at debugging. Get good, yeah, asking questions. It's a core skill rather than a thing that you grow out of. But I definitely shared early on I was like, not allowed to ask questions, that'll be scary. STEPH: I love that example. Because counterintuitively, to me, it demonstrates confidence when someone can say, "Oh, I don't remember how this works," or "Let me go look it up." And so I just very much appreciate when I see someone demonstrating that level of confidence of let's keep going. Let's keep making progress. I'm going to ask for help because that is totally fine, and we are in a safe space. Or I'm going to create a safe space for us to do that. One of my favorite versions of this where you shared like if you ask about an acronym and then people disagree, one of my favorite versions is to ask about a particular area of the codebase and be like, what would you say this code is doing here? What do you think users do here? Like, what is the purpose? What's the point of this? [chuckles] And then having people be able to say, "Oh, yeah, this definitively does this thing." Or people are like, "You know, I'm not sure. I don't even know if that code is getting run." That's one of my favorite outcomes of asking questions. How about you? What's something you've changed your mind about? CHRIS: I made a list of a couple of things like remote is on there. I didn't know if I'd like remote. I wanted to try it for a while. Tried it, turns out I like it a lot. It's complex. You got to manage it, whatever. But that I think everybody's talked about that a bunch. I think probably the most interesting one is deadlines. Initially, in my career, I didn't really feel anything about them. And then I experienced the badness of deadlines. Deadlines are bad. Deadlines are things that come down from on high and then you fail to hit them, and then you're sad. And maybe along the way, you're very stressed and work long hours to try and get there. But they're perhaps arbitrary. And what do they even mean? And also, we have this fixed scope, and they're just bad. And then there was a period of my time where, like, deadlines are bad. The only thing that we do is we show up, and we make the software as quickly as we can. 
But in reality, there are times that we need that constraint. And in fact, I have found a ton of value in deadlines when used intentionally. So we can draw a line in the sand, and we can say, at this point in time, we will have a version of the software. We have a marketing campaign that we need to align with this. So we got to have something at that point. And critically, if you're going to have a deadline, you've now fixed a point in time. You need to flex other things. And critically, I think the thing to flex is the scope. So we need to have team management. We have user accounts right now, but now we need to organize them into teams. That is like a category of functionality. It's not a singular feature. And so yeah, we can ship teams in the next quarter. What exactly that means is up in the air. And as long as we're able to have conversations essentially on a day-to-day at least weekly cadence as to what will make it in by that deadline and what won't, and we're able to have sometimes the hardest conversations but the very necessary conversations of the trade-offs that we have to make as we're building that software, then I find deadlines are absolutely fantastic tools for focusing and for actually reducing scope but in a really useful way. And getting something out there in the hands of users so that you start to get real feedback so that you start to learn, is this useful? What are the ways that people are using this? What should we lean into and do more of? What maybe should we roll back, actually? So yeah, deadlines. First, I didn't know them, then I feared them. Now I love them but only under the right circumstances. It's a double-edged sword, definitely. STEPH: I, too, have felt the terribleness of deadlines and railed against them pretty hard because I had gone through a negative experience with them but have also shifted my feelings about them where they can be incredibly useful. So I really liked that's one of the things that you've changed your mind about. It also reminds me of one of the other things...I'm going to circle back for a moment to one of the things that I believe about creating great software is to not wait for perfection, and deadlines are a really good tool that helps you not wait for perfection. Because I have also seen teams really struggle or sometimes fail because they waited until there was something perfect to present, and then you realize that you've built the wrong thing. So I do want to transition and talk a bit about the show because it's our last episode, and we should talk about it, and the fun adventures that we've had and some of the things that we've learned or things that we're feeling in the moment. So given that it's been a wonderful three years for me, it's been four years for you since you've been a host on the show. How are you feeling? CHRIS: I'm feeling a bunch of different things sort of all at once. I am definitely going to miss this immensely. Particularly, I loved when I started, and I got to interview a bunch of thoughtboters and other people from the community. But frankly, three-plus years of getting to chat with you has been just such a delight. There's been an ease to it. We kind of just show up and talk about what we're doing. And yet there are these themes that have run through it. And it has definitely helped me hone and shape my thinking and my ability to communicate about what I'm thinking. I've learned that you have a literal superpower to remember the last thing that you were talking about. 
Listeners, you may not know this, but we are not quite the put-together folks that we may sound like in these recordings. We have a wonderful editor, Mandy Moore, who makes us sound so much better than we are. But we'll often pause and stop and then discuss what we want to talk about next. And Steph always knows the exact phrase that she or I left off on. And it has been so valuable to the team. But really, it's been just such a pleasure getting to have these conversations. It's also been something that has just gently been in the back of my mind at all times. And so, I'm observing the work in any given week as I'm doing it. It's almost like meditation in a certain way, whereas I'm working on something, like, oh, this is actually really cool. I want to take a note about this and talk about it on The Bike Shed with Steph. And having this outlet, having this platform to be able to have those conversations and knowing that there are people out there is fantastic, although it's very weird because really, every one of these recordings is just you and I on a video call. And so there is an audience, I'm pretty sure. I think people listen to the show; I don't know, occasionally they write in, so it seems like they do. But at the end of the day, this really just feels like a conversation with a friend, and that has been so valuable to have. And yeah, I'm definitely going to miss that. It's been a wonderful run, you know, four years is a long time. It's about as long as I've done most things in my career. And so I'm very happy with what we have done here. And there's a trite saying that isn't...yeah, whatever; I'll just say it, which is, "Don't be sad that it's over. Be glad that it happened." And I guess I'm still going to be sad that it's over. But I am so glad that I got the opportunity to do this, that you joined in this adventure and that we got to chat each week. It's been really delightful. STEPH: I really liked how you refer to this as being a meditative state. And that is something that I have certainly picked up from you and thoroughly enjoyed that I have this space that I get to show up and bring these ideas and topics and then get to talk them out with you. And that has been such a nice way to either end the week or start a week. I mean, it doesn't matter. Anytime that we record, it's this very nice moment of the week where we get to come together and talk through some of the difficulties and share our stories. And that's been one of my favorite moments is because you and I get to show up and share everything that's going on. But then when someone writes into the show or if they send a tweet or something and they share their story or their version of something that happened, or if they said that we made them laugh, that was one of my favorite accomplishments is the idea that something that we have done was silly enough or fun enough that it has brought them joy and made them laugh. So I, too, I'm very, very much going to miss this. It has been a wonderful adventure. And I thank you for encouraging me to come on this adventure because I was quite nervous in the beginning. And this has definitely been an aspect of my life that started out as something that was very challenging and stretching my limits, and now it has become this very fun aspect and something that I get to show up and do and then get to share with everyone. And I do feel very proud of it, very much in part to Thom Obarski, who was our initial producer and helped us have that safe space to chat about things. 
And now Mandy, who keeps the show running smoothly and helps us sound our best week to week. So it's been a wonderful adventure. This is going to be hard to let go. And I think it's going to hit me most. Like, this was one of those things as we're talking about it, it's, like, I'll see you next week. This will be fine. But I think it's going to hit me when there's something that I want to talk about where I'm like, oh, this would be great to talk about, and I'll add it to The Bike Shed Trello board. And I'll be like, oh yeah, that's not a thing anymore, at least not quite in the same way that it was. CHRIS: So what I'm taking away from this is that you're immediately going to delete my phone number the minute we hang up this call and stop recording. [laughs] STEPH: Oh yeah. I preemptively deleted. So that's already done. Friendship is over at this point. CHRIS: That's smart. Yeah, because you might forget otherwise in the heat of the moment as we're wrapping this whole thing up. STEPH: [laughs] CHRIS: But actually, on that note, in a slightly more serious vein, again, there's this weird aspect where the audience is out there. But we're very sort of disconnected, particularly at the moment in time where we're recording. But it has been so wonderful getting various notes from listeners, listener questions, but also just commentary and the occasional thanks and focusing; oh, you pointed me in the direction, or you helped me think through a complicated piece of work or process a problem that we were having. And so that has been just so, so rewarding. And one of the facets of this that has been so interesting to me is being able to connect to people and basically put out there what we believe about software, and for the folks that resonate with it and be able to have that connection instantly. And meeting people, and they're like, "Oh, I've listened to The Bike Shed. I like all these things." I'm like, oh, cool, we get to skip way further into the conversation because I've already said a bunch, and you say you like that thing. So, cool, we're halfway through our introductory chat. And I know that we agree about a bunch of things, and that's really wonderful. And frankly, I'm going to miss that immensely. So for anyone out there who's found something valuable in this, who's enjoyed listening week to week, or perhaps even back to Upcase for things, I would love to hear from you. I'd love to connect to folks. Send me an email, Twitter. I'm on all the places. I'm Chris Toomey in various spots or ctoomey.com on the internet. Chris Toomey on GitHub. I'm findable, I think. Chris Toomey developer will probably get you there. But I would really love to hear from folks, to connect to folks, you know, someday down the road; I think I'll be hiring again. And that'll be fun. I would love to work with some of the folks that have listened to this show or meet you at a conference, or if I happen to be traveling to a city or you're traveling to Boston. Really for me, so much of what this show is about is connecting with people around how we think about building great software. And so, I would love to continue that forward into the future. So yeah, say hi, if you're interested. STEPH: I agree. That's been one of the most fun aspects of being co-host of the show. And I'll be honest, you are welcome to contact me, but I am going to be off-grid for probably six months. [laughs] So just know that there will be a bit of a delay before you hear back from me. But I would definitely love to hear from you. 
I also want to say a very heartfelt thanks to a couple of people, just folks that have made this journey incredible and have made it so much fun. One, in particular, is everyone at thoughtbot for their continuous stream of knowledge. I mean, frankly, my software opinions wouldn't be half as interesting if it wasn't for everyone at thoughtbot constantly sharing their knowledge and being a source of inspiration. So I deeply appreciate everyone that has contributed to topics and ideas and just constantly churning out blog posts because those are phenomenal. And I also want to give a shout-out to my husband, Tim, because he has listened to The Bike Shed for many years and even helped out with a number of show notes when that was something that you and I used to do before Mandy made our life so much easier and took that over for us. And has intervened a number of times when Utah mid-recording would decide it's time to play. So I want to give a very special thank you to him because he has been a very big supporter of the show and frankly helped me manage through a lot of the recordings for when I had an 80-pound dog that was demanding my attention. CHRIS: I think continuing on the note of thanks; similarly, I'm so grateful to thoughtbot as an organization for everything that is represented in my career. It's a decade-plus that I have been following and then listening to the podcasts and then joining the organization, and then getting so many wonderful opportunities to learn about this thing called web development. And then, even after I left the organization, I was able to stay on here on The Bike Shed and hang out and still chat with you, Steph, which has been really wonderful. So thank you, thoughtbot, so much. Thank you to Joël Quenneville, who will be the continuing host of the show. This show is not going anywhere. And, Steph, you and I aren't really going anywhere, but we won't be around anymore. But we are leaving it in the very, very capable hands of Joël, and I'm super excited to hear the direction that he takes it and Joel's incredibly thoughtful and nuanced approach to thinking about programming and communicating. So I think that will be really wonderful. And lastly, I definitely want to thank Derek Prior and Sage Griffin, the two original hosts of this show, who really produced something wonderful, and for many years, I think it was about four years that they hosted together. I was an avid listener despite actually working at the company the whole time and really loved the thing that they produced and was so grateful that they entrusted me with continuing it forward. And hopefully myself and then with the help of you along the way, we've...I think we've done an okay job, but now it is time to pass the torch or the green lantern. That's the adage I've been going with. Gotta pass the lantern, pass the mantle on to the next one. So, Joël, it's going to be in your hands now. STEPH: Yeah, I'm so looking forward to future episodes with Joël Quenneville. They are going to be fabulous. So I've been thinking in terms of this being our finale episode and then a fun ending for it, so there's a thing called the 21-gun salute, which is the military honor that's performed by firing cannons or artillery. Not to be confused with the three-volley salute, which I definitely confused earlier that is reserved and used at funerals, which this is not. So using the 21-gun salute, I was like, hmm, it is The Bike Shed, and we have this cute ring ring that goes. 
So I think for our finale, we should have a 21-bell salute as we exit the shed and ride off into the sunset. CHRIS: I love it. I couldn't imagine a more perfect send-off. So with that, what do you think? Should we wrap up? STEPH: Yes, but I have one more silly thing to add. I've thought of a new software idiom that I'm excited about. And so, this may be my final send-off into glory that I'd like to share with you. And I think that we should make like a shard and split. CHRIS: [laughs] I so appreciate that in this moment, this final moment that we have together, you choose to go with a punny joke. It is so on brand for the show. It is absolutely perfect. And I think with that note, shall we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
The stars align on this episode of the Gratitude Through Hard Times podcast when two Italian empaths get together for a conversation about mentorship, leadership and personal growth that happens to coincide with the 7:47 Club's seventh anniversary. Host Chris Schembra welcomes Matt Tedesco, VP & General Manager for Americas East at the iconic furniture design company MillerKnoll, for a lively give-and-take about his passion for coaching and unique approach to group engagement and retention. It all starts with embracing the journey! They reflect on the pitfalls of results-oriented management, strategies for getting out from under social constructs that do not serve, and the compounding power of positivity. Matt highlights a pivotal spiritual moment that has shaped his value system and explains how vulnerability has empowered him — as a coach, an artist, a musician, a father, friend and colleague. Self-awareness is foundational to all the other elements that connect us with each other, says Matt, helping us tune into our own emotions and — as importantly — the challenges experienced by others. You'll also learn about what Matt calls his "cheat sheet to the world," a simple but profound strategy that guarantees we're bringing our best selves, personally and professionally. Even when the world disappoints, we have it within us to learn and move on with open hearts. "Focus on gratitude, focus on the things that are positive," says Matt. "Bringing positive energy is a matter of deciding." This dynamic exchange is your first step! If you'd like to learn more about Chris and his 7:47 Virtual Gratitude Experience, please visit this link. And click here to listen to previous episodes of Gratitude Through Hard Times.
KEY TOPICS:
Deflection and Dreams: Chris asks Matt to reflect on his humility about accomplishments and the ways in which he feels pride in any endeavor that offers value-add in some way.
Results-Oriented Living: The societal push is often towards outcomes, but Matt is committed to embracing the process and lessons learned along the way. Goals are finite. Then what? It's got to be the going — not the getting there — that's good.
Putting It Out There: By making his music available on Spotify, which is a vulnerable thing to do, Matt is reclaiming an artistic part of himself. Self-consciousness be damned! We can't be good at everything. And that's the point! Art and other creative, spiritual or physical pursuits (even those that humble us) can be tools for continuous learning and movement towards self-actualization. Matt envisions members of his teams as ensemble musicians playing in concert, not basketball players taking unilateral command. The focus is on collaboration, not scrambling to hit every basket.
Recalling a Pivotal Spiritual Moment: An angry kid lacking in confidence, Matt felt a spontaneous transformational release — and sense of equanimity in the world — following his eighth-grade confirmation in the Catholic Church.
What does it mean to be of service and actually help? Matt believes the best coaching/mentoring starts with intentional, active listening.
No pre-conceived notions or agenda. Above all, most people simply want to be heard — witnessed in a way that touches something deep in them and connects them to others and a sense of understanding.
Nervous, sad, confused: Matt's wife recently provided him safe harbor, a place to be fully seen, when he chose to get vulnerable and share personal and professional doubts.
Defining the True Nature of Love: Borrowing inspiration from "Man's Search for Meaning," Holocaust survivor Viktor E. Frankl's compelling take on what matters most in life, Matt concludes that our ultimate meaning isn't money, sex or power. It's simply love.
Defining Leadership: Loving team members and those we serve requires whole-hearted commitment to listening and caring (that ideally resides in mutuality and reciprocity).
Matt's Strategy for Moving Beyond Social Constructs That Do Not Serve: Cultivate awareness. Welcome uncomfortable, vulnerable feelings. Question and examine emotions as they arise. Own those moments of so-called "weakness" without judgment. How to erode performance? Deny, suppress or withdraw from vulnerable emotions.
Gratitude as a Tool for Retention: No. 1: It must be authentic. Tokens like a pizza party or group "atta boy" aren't enough. Keep an eye out for the smaller acts, which often matter just as much as the big ones. Make the recognition specific and personal. Communicate in a way that makes people feel seen and heard. You will be most generous and present when leading from a place of love.
Accentuate the Positive! Negative thoughts beget more. Don't miss the opportunity to energize everyone across the enterprise with the force for good that is gratitude!
Finding the universal in the specific: By articulating exactly why we have gratitude and celebrating those who inspire it, we are spreading a powerfully positive contagion.
All in the Perspective: Whether kids on his son's lacrosse team or his top managers and staff, Matt focuses on Attitude and Work Ethic — the only two things we can fully control. The majority of us are capable, despite (or because of!) personal challenges or setbacks. The world will disappoint, but it's critical that we move on.
Recipe for Success: For most of us, hard work and a smile are the two key ingredients.
Looking for Outcome-Based Results? Here's a measure: Have you gotten a little better?
You Are Not Alone: We're all on the journey together and interconnected. There is tremendous power in it!
QUOTABLE:
"He blends doing a good job in corporate America with having an artistic expression and play on the side. And it's a pretty cool balance!" (Chris)
"To be a great leader, you have to do the work." (Chris)
"It is not about the result. It's about the process and really trying to enjoy and find meaning … If you only focus on the result, once you get that result, then what?" (Matt)
LINKS/FURTHER RESOURCES:
Matt's Spotify Albums.
"The Radical Leap: A Personal Lesson in Extreme Leadership," by Steve Farber.
"Man's Search for Meaning," by Viktor E. Frankl.
Books about vulnerability and leadership by Brené Brown.
"Measure What Matters: The Revolutionary Movement Behind the Explosive Growth of Intel, Google, Amazon and Uber," by John Doerr, venture capitalist and OKR proponent.
ABOUT OUR GUEST:
Matt Tedesco leads cross-functional teams, manages P&L responsibilities and related metrics, and coordinates complex activities with myriad distribution partners, internal constituents, and outside stakeholders. His focus is on driving sustainable, profitable growth in a complex and challenging market.
He has spent the last 20+ years developing the skills, experiences and network to deliver consistently at a high level for both colleagues and customers.
FOLLOW MATT: WEBSITE | LINKEDIN | SPOTIFY
ABOUT OUR HOST:
Chris Schembra is a philosopher, question asker and facilitator. He's a columnist at Rolling Stone magazine, USA Today calls him their "Gratitude Guru" and he's spent the last six years traveling around the world helping people connect in meaningful ways. As the offshoot of his #1 Wall Street Journal bestselling book, "Gratitude Through Hard Times," he uses this podcast to blend ancient stoic philosophy and modern-day science to teach how the principles of gratitude can be used to help people get through their hard times.
FOLLOW CHRIS: WEBSITE | INSTAGRAM | LINKEDIN | BOOKS
Chris talks about a small toy app he maintains on the side and working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know whether you're working quickly enough. They answer a listener's question about where to get started testing a legacy app.
This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack.
jnicklas/capybara_table (https://github.com/jnicklas/capybara_table): Capybara selectors and matchers for working with HTML tables.
Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!
Transcript:
CHRIS: Just gotta hold on. Fly this thing straight to the crash site. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. CHRIS: And I'm Steph Viccari. STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs] CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are. STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened. CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari. STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody. CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time. STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world? CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those for me; they served different purposes, productivity things, or whatever. But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain. It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it.
And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into. First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while? Turns out no, because I couldn't even get the app to boot locally, something about some gems or some I think Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass. So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something. The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough. STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes. But then there's this where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take. CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate. And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. 
And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something. But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to, or is, you know, the flip side of, how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen. There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question that you have, asked by one other person on Stack Overflow, with no answers, four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults print some funny characters to your terminal. And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common. But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use within Capybara, particularly within feature specs. So if you have a table, you can now make an assertion that's like, expect the table to have table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it. And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it. And so it's another one of those cases where testing can be a really useful constraint for the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found. STEPH: That's awesome. I've definitely seen other thoughtboters, when working in codebases, add really nice helper methods around that for, like, checking does this data exist in the table? And so I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future. CHRIS: Yeah, really, just such a delightful thing.
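For a concrete picture of the matcher Chris is describing, here is a rough sketch of a feature spec using capybara_table. The have_table_row matcher name and hash-of-headers argument follow the behavior described above, but treat the exact API as an assumption and check the gem's README; the page, path, and data here are hypothetical.

```ruby
# Hypothetical feature spec; assumes the capybara_table gem is installed
# and loaded alongside rspec-rails and Capybara.
require "rails_helper"

RSpec.feature "Orders index" do
  scenario "lists recent orders in a table" do
    visit "/orders" # hypothetical path

    # Reads the way a user reads the table: cells are matched against the
    # <th> headers, and you only pass the columns you care about.
    expect(page).to have_table_row("Number" => "1001", "Status" => "Shipped")

    # On a miss, capybara_table prints the rows it did find as a small
    # ASCII table, which makes the failure easy to diagnose. It only
    # matches semantic markup (th/tr/td), so a div-based "table" will
    # not satisfy the assertion.
    expect(page).not_to have_table_row("Number" => "1001", "Status" => "Cancelled")
  end
end
```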
And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you? [laughter] STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do. So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for. On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, listening to me talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done. CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience. STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered. So migrating a controller test over from Test::Unit to RSpec, there are a number of controller tests that issue requests or call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine. But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user and perform another action, that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec, because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle. And that was something that I just hadn't, I guess, I hadn't done before.
I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit test. And they're not going to emulate, give us the full lifecycle that a request spec does. And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land. CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there's features specs. I feel like I'm conflating...there's like controller requests and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up? STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow. I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case. CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions. There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in. STEPH: I agree. 
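Here is a minimal sketch of the kind of multi-user flow Steph is describing, written as a request spec so the full routing, session middleware, and before_action stack is exercised. The /session and /projects routes, the params, and the FactoryBot factories are hypothetical stand-ins, not the actual endpoints from the app in question.

```ruby
require "rails_helper"

RSpec.describe "Projects", type: :request do
  it "keeps sign-ins separate when two users act in the same test" do
    alice = create(:user, password: "password1") # assumes FactoryBot factories
    bob   = create(:user, password: "password2")

    # Sign in as Alice and hit an authenticated endpoint.
    post "/session", params: { email: alice.email, password: "password1" }
    get "/projects"
    expect(response).to have_http_status(:ok)

    # Sign out, then sign in as Bob. Because each request goes through the
    # whole middleware stack, the session is actually reset here, which is
    # exactly the step a controller spec was quietly skipping.
    delete "/session"
    post "/session", params: { email: bob.email, password: "password2" }
    get "/projects"
    expect(response).to have_http_status(:ok)
  end
end
```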
And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough? So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace? And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity? CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting. 
There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that. There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets. And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual. I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction. STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well. For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase? Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in. I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But that's something that I keep in mind that just if some other people are working a lot of evenings or just working extra hours to keep that in mind that if they do have a higher velocity to not include that in my calculation as heavily. I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals. And then the other metrics that unfortunately can be gamified, but it's still something to look at is like, how many hours are you spending on a particular feature, the tickets? But I like that phrasing that you used earlier of what's your progress? 
So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off. And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing. But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like. I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster? And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours. Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed. I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to? I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough. CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation. So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. 
And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations. Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose. Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations. But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations. And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that. If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do. So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question. STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself. 
So if there's a ticket that maybe was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identify some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily. And I feel like that also gives us more opportunities to iterate on the goal. Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on the progress that I've made in this direction, that gives people more opportunities to respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing. CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. Just make sure that we don't ship the wrong code, and you will be a critical member of that team. But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph. "I want to start by saying thanks for putting out great content week after week." We are very happy to do so. "So, a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all. My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where you would start with this. STEPH: Legacy code? I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with, and I'm going to try not to steal any of those points because they're wonderful. But to add to that list that is soon to come: often, where I start with applications like these is that I need some testing in place because, as this person mentioned, that's how they work. And also, at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is what's your rollback strategy?
So if you don't have any tests in place and you send something out into the world, then what's your plan to then be able to either roll back to a safe point or perhaps it's using feature flags for anything new that you're adding so that way you can quickly turn something on and off. But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start. And then, as for adding the test, typically, you start with testing as you go. So I would add tests for the code that I'm adding that I'm working on because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything that is going to be feature level, integration level specs because I'm at the point that I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making. And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows or the most common user flows by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager or just someone that you're taking the app over that could also give you some help in letting you know what's the most crucial features that users are relying on day to day and then prioritizing writing tests for those particular flows. So then, at this point, you've got a rollback strategy. And then you've also highlighted what are your most crucial user flows, and then you've added some really high level probably slow tests. Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well. So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went. So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you. CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. 
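One concrete shape Steph's feature-flag suggestion could take, sketched here with the Flipper gem as an assumption (the listener's app might use a different flagging library, and the flag, flow, and controller names are invented):

```ruby
# Assumes: gem "flipper" plus a persistence adapter (e.g. flipper-active_record).
# New behavior ships behind a flag, so it can be switched off in production
# without a deploy or a rollback.
class InvoicesController < ApplicationController
  def create
    if Flipper.enabled?(:new_invoice_flow, current_user)
      NewInvoiceFlow.new(invoice_params).call    # hypothetical new code path
    else
      LegacyInvoiceFlow.new(invoice_params).call # existing behavior
    end
    redirect_to invoices_path
  end
end

# From a console, when something looks wrong in production:
#   Flipper.disable(:new_invoice_flow)
#   Flipper.enable(:new_invoice_flow)
```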
Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks. And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected. Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that. Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things. And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it. So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind. One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true? Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited. And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. 
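A sketch of the kind of high-level, slow-but-reassuring feature spec both hosts are describing, assuming RSpec, Capybara, and FactoryBot; the flow, routes, and helpers here are invented stand-ins for whatever the app's real crucial path is:

```ruby
# spec/features/user_places_order_spec.rb
# Deliberately broad: drive a crucial flow through the full stack so the
# model callbacks and controller hooks all run together, as they would
# in production.
require "rails_helper"

RSpec.describe "placing an order", type: :feature do
  it "lets a signed-in user check out" do
    user = create(:user)                      # assumes FactoryBot factories
    create(:product, name: "Widget", price_cents: 1_000)

    sign_in_as user                           # assumes a sign-in test helper
    visit products_path
    click_on "Widget"
    click_on "Add to cart"
    click_on "Check out"

    expect(page).to have_content("Order confirmed")
    expect(user.orders.count).to eq(1)
  end
end
```

A handful of specs like this, slow as they are, cover a lot of the indirect callback behavior the listener is worried about while finer-grained tests get written later.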
Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation. The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day. So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done. And so it gives sort of more than a nice to have, more than this makes me feel comfortable. It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think that's an easy business case to sell is, I guess, the way that I would frame it. So those were some of my thoughts. STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project. If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything? CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph? STEPH: I think those are fabulous thoughts. I think you covered it all. CHRIS: Sounds good. Well, in that case, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. 
STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon. Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "How do you get acquainted with a new codebase?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Greenland (https://www.imdb.com/title/tt7737786/) Geostorm (https://www.imdb.com/title/tt1981128/) San Andreas (https://www.imdb.com/title/tt2126355/) Armageddon (https://www.imdb.com/title/tt0120591/) This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. So I've been watching more movies lately. Evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster, end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles] I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well.
And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby. CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling. STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride. CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact? STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right? CHRIS: They sure do. STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go? CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say. It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not? STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones. CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great. STEPH: Oh yeah, that's a good combo. CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that. STEPH: I think they addressed that in the movie, though. 
They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts. CHRIS: Right. That is what they say in the movie. STEPH: [laughs] Okay. CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. [laughs] It's like; actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to space button, then you go to space. STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. [laughs] So that might have also played a role. CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out. STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up. CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the armadillo; I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically. And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just cease up. It just doesn't work. I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like; I don't know what's going on. But I think the gears on the drilling machine just fundamentally at a very simple mechanical level cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time. STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that this is the thing that grinds your gears are the gears on the armadillo. CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me. In a similar way, so I had a moment while traveling; actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. 
And I was like, yeah, sometimes people ask me how to pronounce my name. And I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think... I was just so anchored to the one truth that I know in the world that my name is Toomey. And that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong. I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah. STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'. CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two. STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced. I can't remember if we've talked about this on air but early on, I pronounced my last name differently for like one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why it just seems to have a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people. And I think Chad Pytel had just walked in at that moment when I was saying that to you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, Viccari, 'Vicceri' I will accept either. In a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, well, they're like, I feel like I'm competing, or I'm working against RuboCop. RuboCop wants me to shorten my test example lines, but yet, I'm not sure what else to do about it. And someone's like, "Well, you could extract more into before blocks and to lets and to helpers or things like that to then shorten the test. They're like, "But that does also work against readability of the test if you do that." So then there was a nice, short conversation around well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability. And so then it was a discussion of okay, well, how much flexibility do we add to it? And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be. 
And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things because this codebase has far more dry tests than it does duplication concerns. So I'd rather lean into the duplication at this point." And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that removed that particular check on how long the examples are, and it was lovely. It was just a nice, quick win and a wonderful discussion that someone had brought up. CHRIS: Ooh, I like that. That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it's also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and deeply, deeply entrenched opinions are the thing that I find interesting. And I think I'm increasingly in the space of feeling that those sort of thou-shalt-not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase. Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And those are more in the category of what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-character line length to a 120-character line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that. But TypeScript I care deeply about, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14 because TypeScript tends to tell me more truths, I find, even though I have to jump through some hoops to be like, TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it. I also really liked the way you referred to RSpec's feedback to you: that it was fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that tells me no, I'm going to be like, stop fussing, nope, nope. [chuckles] STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic, fussing, and then go get a biscuit, and it'll just be a delightful day. CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially? STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing. CHRIS: Sold. STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs] CHRIS: And occasional biscuits, I hope. STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world? CHRIS: Let's see. In my world, it's a short week so far. We're recording on Wednesday; Monday was a holiday. And I was out all last week, and I very much enjoyed my vacation. It was lovely.
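Circling back to the RuboCop discussion above: for teams who hit the same fussing but don't control the shared config, there's a smaller hammer than deleting the metric, which is to waive the cop where a long, readable setup is the right call. A sketch, assuming rubocop-rspec's RSpec/ExampleLength cop and invented factories and helpers:

```ruby
# One long-but-readable example, acknowledged inline instead of being
# reshaped into lets and before blocks just to satisfy the metric.
# rubocop:disable RSpec/ExampleLength
it "renews an annual subscription and notifies the billing contact" do
  org     = create(:organization, plan: :annual)
  owner   = create(:user, organization: org, role: :owner)
  contact = create(:user, organization: org, role: :billing_contact)

  RenewSubscriptions.call(as_of: org.renewal_date)   # hypothetical service

  expect(org.reload.renewed_at).to eq(org.renewal_date)
  expect(emails_sent_to(contact)).not_to be_empty    # hypothetical helper
  expect(emails_sent_to(owner)).to be_empty
end
# rubocop:enable RSpec/ExampleLength
```

The team in the episode went further and dropped the check entirely, which is the cleaner move once everyone agrees that readability wins.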
Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now trying to just get back into the swing of things. Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm but in a good way is how I would describe it. We have a major facet of the Sagewell platform that we are in the planning modes for right now. But we need to get a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work. We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time is we have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we're an integration of a lot of services under the hood. And we have a pattern for how we interact with and process, so we actually persist the webhook data when they come in. And then we have a background job that processes and watch our pattern to make sure we're not losing anything and the ability to verify against our local version, and the remote version, a bunch of different things. Because turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that. I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times; I want to say. And in squinting at it from a distance, we're like; it is indeed identically the same pattern in all eight cases or with the tiniest little variation in one of them. And so we've now accepted like, okay, that's true. So the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but use the generic webhook processing pattern that we have in the app. So that's nice. I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see. Check and make sure they are, in fact, the same and not just incidental duplication. STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in where you still got to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing. Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion. 
But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract. That's probably going to cause more pain than it's worth at this point. CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that when we looked at it, we were like, oh, that's interesting, because of how we implemented the webhook pattern, this is happening. And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything. We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us. As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them. And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain. STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up." CHRIS: Yep, indeed. It's definitely like a correction early on in my career and overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish it were a more commoditized offering of platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. 
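A rough sketch of the persist-then-process shape Chris describes, folding in the signature verification he turns to next; the header name, model, job, and secret location are assumptions, and every vendor signs slightly differently:

```ruby
# Hypothetical endpoint for one vendor's webhooks: verify the signature,
# persist the raw payload, then hand off to a background job so the HTTP
# response stays fast and events can be retried or replayed locally.
class VendorWebhooksController < ApplicationController
  skip_before_action :verify_authenticity_token

  def create
    payload   = request.raw_post
    signature = request.headers["X-Vendor-Signature"].to_s # header name varies by vendor
    expected  = OpenSSL::HMAC.hexdigest("SHA256", signing_secret, payload)

    return head :unauthorized unless ActiveSupport::SecurityUtils.secure_compare(expected, signature)

    event = WebhookEvent.create!(source: "vendor", payload: payload)
    ProcessWebhookEventJob.perform_later(event.id)
    head :ok
  end

  private

  def signing_secret
    Rails.application.credentials.dig(:vendor, :webhook_secret)
  end
end

class ProcessWebhookEventJob < ApplicationJob
  def perform(event_id)
    event = WebhookEvent.find(event_id)
    # ...dispatch on the event type, reconcile against local state, and
    # mark the event as processed so nothing gets silently dropped.
  end
end
```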
And there's polling, and there are lots of different variations. But the one thing that I'm surprised by is that webhook signing I don't feel like people take it serious enough. It is a case where it's not a huge security vulnerability in your app. But I was reading someone who is a security analyst at one point. And they were describing sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal like OWASP Top 10 Cross-Site Request Forgery, and SQL injection, and all that kind of stuff. But one of the other ones he highlighted is so often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. It's an extra thing where you do the checksum math and whatnot and take the signature header. I've seen somewhere they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please. So it's either have an API key so that we have some way to verify that you are who you say you are or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it different, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that. So this is just a notice out there to anyone that's listening. If you got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application. STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that. CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system. In order to take advantage of it, you need to know what endpoint to hit to, what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought. MIDROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half. So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. 
In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today! CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph? STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. They asked a really thoughtful question that was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, that is one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase. So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application. I really want to know what are users experience as they're going through this app? That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? Like to look through the app and see how are things organized? Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? If not, is there something that then I can run locally that will then show me that test coverage? I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. 
What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app. CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then login to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? But understanding why is this thing, not in code, but in actual practical, observable, intractable software? Beyond that, your point about the routes, absolutely, that's one of my go-to's, although the routes there often is so much in the routes, and it's like some of those may actually be unused. So a corollary to the routes where available if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't. And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. And so I like to think about it from that idea of entry points. If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic, Whenever the Cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which as an aside, I really love that that's committed and expressive in the code. Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application? I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button. But like, give me something weird and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works. The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app. 
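On the "scheduled entry points committed in the code" point, this is roughly what a Whenever schedule tends to look like; the tasks and class here are invented, but skimming a file like this is a fast way to find the non-HTTP entry points Chris is describing:

```ruby
# config/schedule.rb -- read by the whenever gem, which generates the crontab.
every 1.day, at: "4:30 am" do
  rake "reports:nightly_summary"      # hypothetical rake task
end

every :hour do
  runner "SubscriptionRenewer.call"   # hypothetical class
end

every :monday, at: "9 am" do
  rake "billing:send_invoices"        # hypothetical rake task
end
```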
And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency. Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box. But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this. STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one. I also liked how you highlighted that you wish you'd do something fancy around doing a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's all those things. I think it would be fun to make it easy. So I do that with new applications. But I agree; I typically more just dive in like, hey, give me a ticket. Let me go from there. I might do some simple command-line checking. So, for example, if I want to look through app models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity. CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way. STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. 
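And the "which model is the largest" check Steph mentions doesn't need special tooling; a throwaway script along these lines (run from the app root) gets a rough answer:

```ruby
# find_big_models.rb -- list the longest files under app/models as a crude
# proxy for God objects. Run with: ruby find_big_models.rb
sizes = Dir.glob("app/models/**/*.rb").map do |path|
  [File.foreach(path).count, path]
end

sizes.sort.reverse.first(10).each do |line_count, path|
  puts format("%5d  %s", line_count, path)
end
```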
Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature or before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application? That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me. One other place that I do browse is I go to the gem file. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me. And also, you called it earlier pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase. CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration. I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is. Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo. 
Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff. STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on. But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, of like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? How many people need to review my code? There are all those process-y questions that I think would ideally be included on the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs] CHRIS: We've been burned before. STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question. CHRIS: What would Michael Bay do? STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
About Chris: Chris Short has been a proponent of open source solutions throughout his over two decades in various IT disciplines, including systems, security, networks, DevOps management, and cloud native advocacy across the public and private sectors. He currently works on the Kubernetes team at Amazon Web Services and is an active Kubernetes contributor and Co-chair of OpenGitOps. Chris is a disabled US Air Force veteran living with his wife and son in Greater Metro Detroit. Chris writes about Cloud Native, DevOps, and other topics at ChrisShort.net. He also runs the Cloud Native, DevOps, GitOps, Open Source, industry news, and culture focused newsletter DevOps'ish. Links Referenced: DevOps'ish: https://devopsish.com/ EKS News: https://eks.news/ Containers from the Couch: https://containersfromthecouch.com opengitops.dev: https://opengitops.dev ChrisShort.net: https://chrisshort.net Twitter: https://twitter.com/ChrisShort Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Coming back to us since episode two—it's always nice to go back and see the where are they now type of approach—I am joined by Senior Developer Advocate at AWS Chris Short. Chris, been a few years. How has it been? Chris: Ha. Corey, we have talked outside of the podcast. But it's been good. For those that have been listening, I think when we recorded I wasn't even—like, when was season two, what year was that? [laugh]. Corey: Episode two was first pre-pandemic and the rest. I believe— Chris: Oh. So, yeah. I was at Red Hat, maybe, when I—yeah. Corey: Yeah. You were doing Red Hat stuff, back when you got to work on open-source stuff, as opposed to now, where you're not within 1000 miles of that stuff, right? Chris: Actually well, no. So, to be clear, I'm on the EKS team, the Kubernetes team here at AWS. So, when I joined AWS in October, they were like, “Hey, you do open-source stuff. We like that. Do more.” And I was like, “Oh, wait, do more?” And they were like, “Yes, do more.” “Okay.” So, since joining AWS, I've probably done more open-source work than the three years at Red Hat that I did. So, that's kind of—you know, like, it's an interesting point when I talk to people about it because the first couple months are, like—you know, my friends are like, “So, are you liking it? Are you enjoying it? What's going on?” And— Corey: Do they beat you with reeds? Like, all the questions people have about companies? Because— Chris: Right. Like, I get a lot of random questions about Amazon and AWS that I don't know the answer to. Corey: Oh, when I started telling people I fixed Amazon bills, I had to quickly pivot that to AWS bills because people started asking me, “Well, can you save me money on underpants?” It's I— Chris: Yeah. Corey: How do you—fine. Get the prime credit card. It docks 5% off the bill, so there you go. But other than that, no, I can't. Chris: No. Corey: It's— Chris: Like, I had to call my bank this morning about a transaction that I didn't recognize, and it was from Amazon. And I was like, that's weird. Why would that— Corey: Money just flows one direction, and that's the wrong direction from my employer. Chris: Yeah.
Like, what is going on here? It shouldn't have been on that card kind of thing. And I had to explain to the person on the phone that I do work at Amazon but under the Web Services team. And he was like, “Oh, so you're in IT?”And I'm like, “No.” [laugh]. “It's actually this big company. That—it's a cloud company.” And they're like, “Oh, okay, okay. Yeah. The cloud. Got it.” [laugh]. So, it's interesting talking to people about, “I work at Amazon.” “Oh, my son works at Amazon distribution center,” blah, blah, blah. It's like, cool. “I know about that, but very little. I do this.”Corey: Your son works in Amazon distribution center. Is he a robot? Is normally my next question on that? Yeah. That's neither here nor there.So, you and I started talking a while back. We both write newsletters that go to a somewhat similar audience. You write DevOps'ish. I write Last Week in AWS. And recently, you also have started EKS News because, yeah, the one thing I look at when I'm doing these newsletters every week is, you know what I want to do? That's right. Write more newsletters.Chris: [laugh].Corey: So, you are just a glutton for punishment? And, yeah, welcome to the addiction, I suppose. How's it been going for you?Chris: It's actually been pretty interesting, right? Like, we haven't pushed it very hard. We're now starting to include it in things. Like we did Container Day; we made sure that EKS news was on the landing page for Container Day at KubeCon EU. And you know, it's kind of just grown organically since then.But it was one of those things where it's like, internally—this happened at Red Hat, right—when I started live streaming at Red Hat, the ultimate goal was to do our product management—like, here's what's new in the next version thing—do those live so anybody can see that at any point in time anywhere on Earth, the second it's available. Similar situation to here. This newsletter actually is generated as part of a report my boss puts together to brief our other DAs—or developer advocates—you know, our solutions architects, the whole nine yards about new EKS features. So, I was like, why can't we just flip that into a weekly newsletter, you know? Like, I can pull from the same sources you can.And what's interesting is, he only does the meeting bi-weekly. So, there's some weeks where it's just all me doing it and he ends up just kind of copying and pasting the newsletter into his document, [laugh] and then adds on for the week. But that report meeting for that team is now getting disseminated to essentially anyone that subscribes to eks.news. Just go to the site, there's a subscribe thing right there. And we've gotten 20 issues in and it's gotten rave reviews, right?Corey: I have been a subscriber for a while. I will say that it has less Chris Short personality—Chris: Mm-hm.Corey: —to it than DevOps'ish does, which I have to assume is by design. A lot of The Duckbill Group's marketing these days is no longer in my voice, rather intentionally, because it turns out that being a sarcastic jackass and doing half-billion dollar AWS contracts can not to be the most congruent thing in the world. So okay, we're slowly ameliorating that. It's professional voice versus snarky voice.Chris: Well, and here's the thing, right? Like, I realized this year with DevOps'ish that, like, if I want to take a week off, I have to do, like, what you did when your child was born. You hired folks to like, do the newsletter for you, or I actually don't do the newsletter, right? 
It's binary: hire someone else to do it, or don't do it. So, the way I structured this newsletter was that any developer advocate on my team could jump in and take over the newsletter so that, you know, if I'm off that week, or whatever may be happening, I, Chris Short, am not the voice. It is now the entire developer advocate team.Corey: I will challenge you on that a bit. Because it's not Chris Short voice, that's for sure, but it's also not official AWS brand voice either.Chris: No.Corey: It is clearly written by a human being who is used to communicating with the audience for whom it is written. And that is no small thing. Normally, when oh, there's a corporate newsletter; that's just a lot of words to say it's bad. This one is good. I want to be very clear on that.Chris: Yeah, I mean, we have just, like, DevOps'ish, we have sections, just like your newsletter, there's certain sections, so any new, what's new announcements, those go in automatically. So, like, that can get delivered to your inbox every Friday. Same thing with new blog posts about anything containers related to EKS, those will be in there, then Containers from the Couch, our streaming platform, essentially, for all things Kubernetes. Those videos go in.And then there's some ecosystem news as well that I collect and put in the newsletter to give people a broader sense of what's going on out there in Kubernetes-land because let's face it, there's upstream and then there's downstream, and sometimes those aren't in sync, and that's normal. That's how Kubernetes kind of works sometimes. If you're running upstream Kubernetes, you are awesome. I appreciate you, but I feel like that would cause more problems and it's worse sometimes.Corey: Thank you for being the trailblazers. The rest of us can learn from your misfortune.Chris: [laugh]. Yeah, exactly. Right? Like, please file your bugs accordingly. [laugh].Corey: EKS is interesting to me because I don't see a lot of it, which is, probably, going to get a whole lot of, “Wait, what?” Moments because wait, don't you deal with very large AWS bills? And I do. But what I mean by that is that EKS, until you're using its Fargate expression, charges for the control plane, which rounds to no money, and the rest is running on EC2 instances running in a company's account. From the billing perspective, there is no difference between, “We're running massive fleets of EKS nodes.” And, “We're managing a whole bunch of EC2 instances by hand.”And that feels like an interesting allegory for how Kubernetes winds up expressing itself to cloud providers. Because from a billing perspective, it just looks like one big single-tenant application that has some really strange behaviors internally. It gets very chatty across AZs when there's no reason to, and whatnot. And it becomes a very interesting study in how to expose aspects of what's going on inside of those containers and inside of the Kubernetes environment to the cloud provider in a way that becomes actionable. There are no good answers for this yet, but it's something I've been seeing a lot of. Like, “Oh, I thought you'd be running Kubernetes. Oh, wait, you are and I just keep forgetting what I'm looking at sometimes.”Chris: So, that's an interesting point. The billing is kind of like, yeah, it's just compute, right? So—Corey: And my insight into AWS and the way I start thinking about it is always from a billing perspective. That's great. It's because that means the more expensive the services, the more I know about it. It's like, “IAM. 
What is that?” Like, “Oh, I have no idea. It's free. How important could it be?” Professional advice: do not take that philosophy, ever. Chris: [laugh]. No. Ever. No. Corey: Security: it matters. Oh, my God. It's like, you're all stars. Your IAM policy should not be. I digress. Chris: Right. Yeah. Anyways, so two points I want to make real quick on that is, one, we've recently released an open-source project called Karpenter, which is really cool in my purview because it looks at your Kubernetes file and says, “Oh, you want this to run on an ARM instance.” And you can even go so far as to say, right, here's my limits, and it'll find an instance that fits those limits and add that to your cluster automatically. Run your pod on that compute as long as it needs to run and then if it's done, it'll downsize—eventually, kind of thing—your cluster. So, you can basically just throw a bunch of workloads at it, and it'll auto-detect what kind of compute you will need and then provision it for you, run it, and then be done. So, that is one way folks are probably starting to save money running EKS: to adopt Karpenter as your autoscaler as opposed to the inbuilt Kubernetes autoscaler. Because this is instance-aware, essentially, so it can say, like, “Oh, your massive ARM application can run here,” because you know, thank you, Graviton. We have those processors in-house. And you know, you can run your ARM64 instances, you can run all the Intel workloads you want, and it'll right-size the compute for your workloads. And it'll look at one container or all your containers, however you want to configure it. Secondly, the good folks over at Kubecost have OpenCost, which is the open-source version of Kubecost, basically. So, they have a service that you can run in your clusters that will help you say, “Hey, maybe this one node's too heavy; maybe this one node's too light,” and you know, give you some insights into Kubernetes spend that are a little bit more granular as far as usage and things like that go. So, those two projects right there, I feel like, will give folks an optimal savings experience when it comes to Kubernetes. But to your point, it's just compute, right? And that's really how we treat it, kind of, here internally is that it's a way to run… compute, Kubernetes, or ECS, or any of those tools. Corey: A fairly expensive one because ignoring entirely for a second the actual raw cost of compute, you also have the other side of it, which is in every environment, unless you are doing something very strange or pre-funding as a one-person startup in your spare time, your payroll costs will—it should—exceed your AWS bill by a fairly healthy amount. And engineering time is always more expensive than services time. So, for example, looking at EKS, I would absolutely recommend people use that rather than rolling their own because— Chris: Rolling their own? Yeah. Corey: —get out of that engineering space where your time is free. I assure you from a business context, it is not. So, there's always that question of what you can do to make things easier for people and do more of the heavy lifting. Chris: Yeah, and to your rather cheeky point that there's 17 ways to run a container on AWS, it is answering that question, right?
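To make the Karpenter idea a little more concrete: the heart of it is taking a workload's resource requests and finding an instance type that satisfies them. The Ruby sketch below is only a toy illustration of that matching step, not Karpenter's actual code or API (Karpenter itself is configured through Kubernetes manifests), and the instance names, sizes, and prices here are made-up examples.

```ruby
# Toy "find an instance that fits" sketch, loosely inspired by what a
# node autoscaler like Karpenter does. The catalog data is made up.
InstanceType = Struct.new(:name, :arch, :vcpus, :memory_gib, :hourly_usd)

CATALOG = [
  InstanceType.new("m6g.large",  "arm64", 2,  8, 0.08),
  InstanceType.new("m6g.xlarge", "arm64", 4, 16, 0.15),
  InstanceType.new("m5.xlarge",  "amd64", 4, 16, 0.19),
  InstanceType.new("m5.2xlarge", "amd64", 8, 32, 0.38),
].freeze

# Given a pod's requested resources, pick the cheapest instance that fits.
def cheapest_fit(cpu:, memory_gib:, arch: nil)
  CATALOG
    .select { |i| i.vcpus >= cpu && i.memory_gib >= memory_gib }
    .select { |i| arch.nil? || i.arch == arch }
    .min_by(&:hourly_usd)
end

pod = { cpu: 3, memory_gib: 12, arch: "arm64" }
puts cheapest_fit(**pod)&.name # => "m6g.xlarge"
```

The real thing also handles consolidation, spot interruption, and provisioning limits; the sketch only shows the bin-fitting intuition behind "here's my limits, find an instance that fits."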
Like those 17 ways, like, how much of this do you want to run yourself? You could run EKS Distro on EC2 instances if you want full control over your environment. Corey: And then run IoT Greengrass core on top within that cluster— Chris: Right. Corey: So, I can run my own Lambda function runtime, so I'm not locked in. Also, DynamoDB local so I'm not locked into AWS. At which point I have gone so far around the bend, no one can help me. Chris: Well— Corey: Pro tip, don't do that. Just don't do that. Chris: But to your point, we have all these options for compute, and specifically containers, because there's a lot of people that want to granularly say, “This is where my engineering team gets involved. Everything else you handle.” If I want EKS on Spot Instances only, you can do that. If you want EKS to use Karpenter and say only run ARM workloads, you can do that. If you want to say Fargate and not have anything to manage other than the container file, you can do that. It's how much does your team want to manage? That's the customer obsession part of AWS coming through when it comes to containers: because there's so many different ways to run those workloads, there's so many different ways to make sure that your team is right-sized, based off the services you're using. Corey: I do want to change gears a bit here because you are mostly known for a couple of things: the DevOps'ish newsletter, because that is the oldest and longest thing you've been doing in the time that I've known you; EKS, obviously. But when prepping for this show, I discovered you are now co-chair of the OpenGitOps project. Chris: Yes. Corey: So, I have heard of GitOps in the context of, “Oh, it's just basically your CI/CD stuff is triggered by Git events and whatnot.” And I'm sitting here going, “Okay, so from where you're sitting, the two best user interfaces in the world that you have discovered are YAML and Git.” And I just have to start with the question, “Who hurt you?” Chris: [laugh]. Yeah, I share your sentiment when it comes to Git. Not so much with YAML, but I think it's because I'm so used to it. Maybe it's Stockholm Syndrome, maybe the whole YAML thing. I don't know. Corey: Well, it's no XML. We'll put it that way. Chris: Thankfully, yes, because if it was, I would have way more, like, just template files laying around to build things. But the—
And then pulled automatically—don't focus so much on pull—but basically, software agents are applying the desired state from source. So, what does that mean? When it's—you know, the fourth principle is implemented, continuously reconciled. That means those software agents that are checking your desired state are actually putting it back into the desired state if it's out of whack, right? So— Corey: You're talking about agents running persistently on instances, validating— Chris: Yes. Corey: —a checkpoint on a cron. How is this meaningfully different than a Puppet agent running in years past? I learned to speak publicly by being a traveling trainer for Puppet; same type of model, and in fact, when I was at Pinterest, we wound up having a fair bit—like, that was their entire model, where they would have—the Puppet code would live in an S3 bucket that was then copied down, I believe, via Git, and then applied to the instance on a schedule. Like, that sounds like this was sort of an early-days GitOps. Chris: Yeah, exactly. Right? Like so it's, I like to think of that as a component of GitOps, right? DevOps, when you talk about DevOps in general, there's a lot of stuff out there. There's a lot of things labeled DevOps that maybe are, or maybe aren't, sticking to some of those DevOps core things that make you great. Like the stuff that Nicole Forsgren writes about in books, you know? Accelerate is on my desk for a reason because there's things that good, well-managed DevOps practices do. I see GitOps as an actual implementation of DevOps in an open-source manner because all the tooling for GitOps these days is open-source and it all started as open-source. Now, you can get, like, Flux or Argo—Argo, specifically—there's managed services out there for it, you can have Flux and not maintain it, through an add-on, on EKS for example, and it will reconcile that state for you automatically. And the other thing I like to say about GitOps, specifically, is that it moves at the speed of the Kubernetes Audit Log. If you've ever looked at a Kubernetes audit log, you know it's rather noisy with all these groups and versions and kinds getting thrown out there. So, GitOps will say, “Oh, there's an event for said thing that I'm supposed to be watching. Do I need to change anything? Yes or no? Yes? Okay, go.” And the change gets applied, or, “Hey, there's a new Git thing. Pull it in. A change has happened in Git; I need to update it.” You can set it to reconcile on events or on time. It's like a cron or it's like an event-driven architecture, but it's combined.
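Since the four principles can sound abstract, here is a deliberately tiny Ruby sketch of the reconcile loop they add up to. It is not Flux or Argo code; `desired_state` and `observed_state` are hypothetical stand-ins for "read the declared manifests out of Git" and "ask the cluster what is actually running," and a real agent would react to events as well as polling on an interval.

```ruby
# Minimal reconciliation loop: an agent repeatedly compares declared
# state (from a versioned source such as Git) with observed state and
# applies the difference. All helpers here are hypothetical stand-ins.

def desired_state
  # e.g. parse manifests checked out from the Git repo
  { "web" => { image: "app:v42", replicas: 3 } }
end

def observed_state
  # e.g. query the cluster or cloud API for what is actually running
  { "web" => { image: "app:v41", replicas: 3 } }
end

def apply(name, spec)
  puts "reconciling #{name} -> #{spec.inspect}"
  # a real agent would call the platform API here to converge
end

loop do
  desired  = desired_state
  observed = observed_state

  desired.each do |name, spec|
    apply(name, spec) unless observed[name] == spec
  end

  sleep 30 # or subscribe to events instead of polling on an interval
end
```

The point of the loop is the "continuously reconciled" principle: manual drift gets put back, and the only sanctioned way to change the system is to change the declared state in source.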
Where does GitOps fit into that imagined pattern?Chris: So, configuration management becomes part of your approval process, right? So, you now are generating an audit log, essentially, of all changes to your system through the approval process that you set up as part of your, how you get things into source and then promote that out to production. That's kind of the beauty of it, right? Like, that's why we suggest using Git because it has functions, like, requests and issues and things like that you can say, “Hey, yes, I approve this,” or, “Hey, no, I don't approve that. We need changes.” So, that's kind of natively happening with Git and, you know, GitLab, GitHub, whatever implementation of Git. There's always, kind of—Corey: Uh, JIF-ub is, I believe, the pronunciation.Chris: JIF-ub? Oh.Corey: Yeah. That's what I'm—Chris: Today, I learned. Okay.Corey: Exactly. And that's one of the things that I do for my lasttweetinaws.com Twitter client that I build—because I needed it, and if other people want to use it, that's great—that is now deployed to 20 different AWS commercial regions, simultaneously. And that is done via—because it turns out that that's a very long to execute for loop if you start down that path—Chris: Well, yeah.Corey: I wound up building out a GitHub Actions matrix—sorry a JIF-ub—actions matrix job that winds up instantiating 20 parallel builds of the CDK deploy that goes out to each region as expected. And because that gets really expensive with native GitHub Actions runners for, like, 36 cents per deploy, and I don't know how to test my own code, so every time I have a typo, that's another quarter in the jar. Cool, but that was annoying for me so I built my own custom runner system that uses Lambda functions as runners running containers pulled from ECR that, oh, it just runs in parallel, less than three minutes. Every time I commit something between I press the push button and it is out and running in the wild across all regions. Which is awesome and also terrifying because, as previously mentioned, I don't know how to test my code.Chris: Yeah. So, you don't know what you're deploying to 20 regions sometime, right?Corey: But it also means I have a pristine, re-composable build environment because I can—Chris: Right.Corey: Just automatically have that go out and the fact that I am making a—either merging a pull request or doing a direct push because I consider main to be my feature branch as whenever something hits that, all the automation kicks off. That was something that I found to be transformative as far as a way of thinking about this because I was very tired of having to tweak my local laptop environment to, “Oh, you didn't assume the proper role and everything failed again and you broke it. Good job.” It wound up being something where I could start developing on more and more disparate platforms. And it finally is what got me away from my old development model of everything I build is on an EC2 instance, and that means that my editor of choice was Vim. I use the VS Code now for these things, and I'm pretty happy with it.Chris: Yeah. So, you know, I'm glad you brought up CDK. CDK gives you a lot of the capabilities to implement GitOps in a way that you could say, like, “Hey, use CDK to declare I need four Amazon EKS clusters with this size, shape, and configuration. 
Go.” Or even further, connect to these EKS clusters to RDS instances and load balancers and everything else.But you put that state into Git and then you have something that deploys that automatically upon changes. That is infrastructure as code. Now, when you say, “Okay, main is your feature branch,” you know, things happen on main, if this were running in Kubernetes across a fleet of clusters or the globe-wide in 20 regions, something like Flux or Argo would kick in and say, “There's been a change to source, main, and we need to roll this out.” And it'll start applying those changes. Now, what do you get with GitOps that you don't get with your configuration?I mean, can you rollback if you ever have, like, a bad commit that's just awful? I mean, that's really part of the process with GitOps is to make sure that you can, A, roll back to the previous good state, B, roll forward to a known good state, or C, promote that state up through various environments. And then having that all done declaratively, automatically, and immutably, and versioned with an audit log, that I think is the real power of GitOps in the sense that, like, oh, so-and-so approve this change to security policy XYZ on this date at this time. And that to an auditor, you just hand them a log file on, like, “Here's everything we've ever done to our system. Done.” Right?Like, you could get to that state, if you want to, which I think is kind of the idea of DevOps, which says, “Take all these disparate tools and processes and procedures and culture changes”—culture being the hardest part to adopt in DevOps; GitOps kind of forces a culture change where, like, you can't do a CAB with GitOps. Like, those two things don't fly. You don't have a configuration management database unless you absolutely—Corey: Oh, you CAB now but they're all the comments of the pull request.Chris: Right. Exactly. Like, don't push this change out until Thursday after this other thing has happened, kind of thing. Yeah, like, that all happens in GitHub. But it's very democratizing in the sense that people don't have to waste time in an hour-long meeting to get their five minutes in, right?Corey: DoorDash had a problem. As their cloud-native environment scaled and developers delivered new features, their monitoring system kept breaking down. In an organization where data is used to make better decisions about technology and about the business, losing observability means the entire company loses their competitive edge. With Chronosphere, DoorDash is no longer losing visibility into their applications suite. The key? Chronosphere is an open-source compatible, scalable, and reliable observability solution that gives the observability lead at DoorDash business, confidence, and peace of mind. Read the full success story at snark.cloud/chronosphere. That's snark.cloud slash C-H-R-O-N-O-S-P-H-E-R-E.Corey: So, would it be overwhelmingly cynical to suggest that GitOps is the means to implement what we've all been pretending to have implemented for the last decade when giving talks at conferences?Chris: Ehh, I wouldn't go that far. I would say that GitOps is an excellent way to implement the things you've been talking about at all these conferences for all these years. But keep in mind, the technology has changed a lot in the, what 11, 12 years of the existence of DevOps, now. 
I mean, we've gone from, let's try to manage whole servers immutably to, “Oh, now we just need to maintain an orchestration platform and run containers.” That whole compute interface, you go from SSH to a Docker file, that's a big leap, right?Like, you don't have bespoke sysadmins; you have, like, a platform team. You don't have DevOps engineers; they're part of that platform team, or DevOps teams, right? Like, which was kind of antithetical to the whole idea of DevOps to have a DevOps team. You know, everybody's kind of in the same boat now, where we see skill sets kind of changing. And GitOps and Kubernetes-land is, like, a platform team that manages the cluster, and its state, and health and, you know, production essentially.And then you have your developers deploying what they want to deploy in when whatever namespace they've been given access to and whatever rights they have. So, now you have the potential for one set of people—the platform team—to use one set of GitOps tooling, and your applications teams might not like that, and that's fine. They can have their own namespaces with their own tooling in it. Like, Argo, for example, is preferred by a lot of developers because it has a nice UI with green and red dots and they can show people and it looks nice, Flux, it's command line based. And there are some projects out there that kind of take the UI of Argo and try to run Flux underneath that, and those are cool kind of projects, I think, in my mind, but in general, right, I think GitOps gives you the choice that we missed somewhat in DevOps implementations of the past because it was, “Oh, we need to go get cloud.” “Well, you can only use this cloud.” “Oh, we need to go get this thing.” “Well, you can only use this thing in-house.”And you know, there's a lot of restrictions sometimes placed on what you can use in your environment. Well, if your environment is Kubernetes, how do you restrict what you can run, right? Like you can't have an easily configured say, no open-source policy if you're running Kubernetes. [laugh] so it becomes, you know—Corey: Well, that doesn't stop some companies from trying.Chris: Yeah, that's true. But the idea of, like, enabling your developers to deploy at will and then promote their changes as they see fit is really the dream of DevOps, right? Like, same with production and platform teams, right? I want to push my changes out to a larger system that is across the globe. How do I do that? How do I manage that? How do I make sure everything's consistent?GitOps gives you those ways, with Kubernetes native things like customizations, to make consistent environments that are robust and actually going to be reconciled automatically if someone breaks the glass and says, “Oh, I need to run this container immediately.” Well, that's going to create problems because it's deviated from state and it's just that one region, so we'll put it back into state.Corey: It'll be dueling banjos, at some point. You'll try and doing something manually, it gets reverted automatically. I love that pattern. You'll get bored before the computer does, always.Chris: Yeah. And GitOps is very new, right? When you think about the lifetime of GitOps, I think it was coined in, like, 2018. So, it's only four years old, right? When—Corey: I prefer it to ChatOps, at least, as far as—Chris: Well, I mean—Corey: —implementation and expression of the thing.Chris: —ChatOps was a way to do DevOps. 
I think GitOps—Corey: Well, ChatOps is also a way to wind up giving whoever gets access to your Slack workspace root in production.Chris: Mmm.Corey: But that's neither here nor there.Chris: Mm-hm.Corey: It's yeah, we all like to pretend that's not a giant security issue in our industry, but that's a topic for another time.Chris: Yeah. And that's why, like, GitOps also depends upon you having good security, you know, and good authorization and approval processes. It enforces that upon—Corey: Yeah, who doesn't have one of those?Chris: Yeah. If it's a sole operation kind of deal, like in your setup, your case, I think you kind of got it doing right, right? Like, as far as GitOps goes—Corey: Oh, to be clear, we are 11 people and we do have dueling pull requests and all the rest.Chris: Right, right, right.Corey: But most of the stuff I talk about publicly is not our production stuff, so it really is just me. Just as a point of clarity there. I've n—the 11 people here do not all—the rest of you don't just sit there and clap as I do all the work.Chris: Right.Corey: Most days.Chris: No, I'm sure they don't. I'm almost certain they don't clap… for you. I mean, they would—Corey: No. No, they try and talk me out of it in almost every case.Chris: Yeah, exactly. So, the setup that you, Corey Quinn, have implemented to deploy these 20 regions is kind of very GitOps-y, in the sense that when main changes, it gets updated. Where it's not GitOps-y is what if the endpoint changes? Does it get reconciled? That's the piece you're probably missing is that continuous reconciliation component, where it's constantly checking and saying, “This thing out there is deployed in the way I want it. You know, the way I declared it to be in my source of truth.”Corey: Yeah, when you start having other people getting involved, there can—yeah, that's where regressions enter. And it's like, “Well, I know where things are so why would I change the endpoint?” Yeah, it turns out, not everyone has the state of the entire application in their head. Ideally it should live in—Chris: Yeah. Right. And, you know—Corey: —you know, Git or S3.Chris: —when I—yeah, exactly. When I think about interactions of the past coming out as a new DevOps engineer to work with developers, it's always been, will developers have access to prod or they don't? And if you're in that environment with—you're trying to run a multi-billion dollar operation, and your devs have direct—or one Dev has direct access to prod because prod is in his brain, that's where it's like, well, now wait a minute. Prod doesn't have to be only in your brain. You can put that in the codebase and now we know what is in your brain, right?Like, you can almost do—if you document your code, well, you can have your full lifecycle right there in one place, including documentation, which I think is the best part, too. So, you know, it encourages approval processes and automation over this one person has an entire state of the system in their head; they have to go in and fix it. And what if they're not on call, or in Jamaica, or on a cruise ship somewhere kind of thing? Things get difficult. Like, for example, I just got back from vacation. We were so far off the grid, we had satellite internet. 
And let me tell you, it was hard to write an email newsletter where I usually open 50 to 100 tabs.Corey: There's a little bit of internet out Californ-ie way.Chris: [laugh].Corey: Yeah it's… it's always weird going from, like, especially after pandemic; I have gigabit symmetric here and going even to re:Invent where I'm trying to upload a bunch of video and whatnot.Chris: Yeah. Oh wow.Corey: And the conference WiFi was doing its thing, and well, Verizon 5G was there but spotty. And well, yeah. Usual stuff.Chris: Yeah. It's amazing to me how connectivity has become so ubiquitous.Corey: To the point where when it's not there anymore, it's what do I do with myself? Same story about people pushing back against remote development of, “Oh, I'm just going to do it all on my laptop because what happens if I'm on a plane?” It's, yeah, the year before the pandemic, I flew 140,000 miles domestically and I was almost never hamstrung by my ability to do work. And my only local computer is an iPad for those things. So, it turns out that is less of a real world concern for most folks.Chris: Yeah I actually ordered the components to upgrade an old Nook that I have here and turn it into my, like, this is my remote code server, that's going to be all attached to GitHub and everything else. That's where I want to be: have Tailscale and just VPN into this box.Corey: Tailscale is transformative.Chris: Yes. Tailscale will change your life. That's just my personal opinion.Corey: Yep.Chris: That's not an AWS opinion or anything. But yeah, when you start thinking about your network as it could be anywhere, that's where Tailscale, like, really shines. So—Corey: Tailscale makes the internet work like we all wanted to believe that it worked.Chris: Yeah. And Wireguard is an excellent open-source project. And Tailscale consumes that and puts an amazingly easy-to-use UI, and troubleshooting tools, and routing, and all kinds of forwarding capabilities, and makes it kind of easy, which is really, really, really kind of awesome. And Tailscale and Kubernetes—Corey: Yeah, ‘network' and ‘easy' don't belong in the same sentence, but in this case, they do.Chris: Yeah. And trust me, the Kubernetes story in Tailscale, there is a lot of there. I understand you might want to not open ports in your VPC, maybe, but if you use Tailscale, that node is just another thing on your network. You can connect to that and see what's going on. Your management cluster is just another thing on the network where you can watch the state.But it's all—you're connected to it continuously through Tailscale. Or, you know, it's a much lighter weight, kind of meshy VPN, I would say, if I had to sum it up in one sentence. That was not on our agenda to talk about at all. Anyways. [laugh]Corey: No, no. I love how many different topics we talk about on these things. We'll have to have you back soon to talk again. I really want to thank you for being so generous with your time. If people want to learn more about what you're up to and how you view these things, where can they find you?Chris: Go to ChrisShort.net. So, Chris Short—I'm six-four so remember, it's Short—dot net, and you will find all the places that I write, you can go to devopsish.com to subscribe to my newsletter, which goes out every week. This year. Next year, there'll be breaks. And then finally, if you want to follow me on Twitter, Chris Short: at @ChrisShort on Twitter. All one word so you see two s's. 
Like, it's okay, there's two s's there.Corey: Links to all of that will of course be in the show notes. It's easier for people to do the clicky-clicky thing as a general rule.Chris: Clicky things are easier than the wordy things, yes.Corey: Says the Kubernetes guy.Chris: Yeah. Says the Kubernetes guy. Yeah, you like that, huh? Like I said, Argo gives you a UI. [laugh].Corey: Thank you [laugh] so much for your time. I really do appreciate it.Chris: Thank you. This has been fun. If folks have questions, feel free to reach out. Like, I am not one of those people that hides behind a screen all day and doesn't respond. I will respond to you eventually.Corey: I'm right here, Chris. Come on, come on. You're calling me out in front of myself. My God.Chris: Egh. It might take a day or two, but I will respond. I promise.Corey: Thanks again for your time. This has been Chris Short, senior developer advocate at AWS. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice and if it's YouTube, click the thumbs-up button. Whereas if you've hated this podcast, same thing, smash the buttons five-star review and leave an insulting comment that is written in syntactically correct YAML because it's just so easy to do.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Overview Creating a graphic novel involved working with others, so we discuss working with collaborators. Many authors are looking into collaborating and this is great information. YouTube https://youtu.be/eJRRLweNnI4 Transcript [00:00:47] Stephen: Well, let's talk some other. okay. When, when you first started writing at 19, what are some things that you've [00:00:54] Chris: learned? I really wrote poetry at 19. I didn't really start writing until I was probably like maybe 30. Okay. [00:01:00] Stephen: So when, what are some things you've learned over the years that you're doing different now than you di used to do? Don't [00:01:07] Chris: give up and keep writing. [00:01:09] Stephen: Oh, did you give up at some point? Quit? Yes. Plenty of times. OK. All right. Um, [00:01:15] Chris: and one of the two documentaries I'm working on, right? I haven't, I haven't done anything with it for a little while because I'm kind of waiting on interviews to be recorded. Cause I'm transcribing interviews to be written, you know, for the book, excuse me. So that is just a matter of me getting a hold of some of the people to be like, all right, let's sit down. Like we are with you and me and tell me your life story and I'll write it out. [00:01:38] Stephen: Got what's the, what's the documentary about, [00:01:43] Chris: again, they're not getting published yet, so I don't wanna get into it too well, too much. Okay. So no. Yeah, just the, uh, the publisher is like, Hey, we're gonna be sending over some press material and an NDA. You need to sign coming up. So I was like, all right, maybe I'll stop talking about this for [00:01:56] Stephen: a little bit. Nice. Okay. So when you're writing, what software and services do you use? [00:02:03] Chris: Oh, just word, nothing special. Okay. [00:02:06] Stephen: And since it's a graphic novel, do you have any software between you and the artists that you use? [00:02:12] Chris: No, that's all on them. I just tell 'em what it is. I want it to look like and then make the changes before they go permanent to, to it. I mean, some people using computers, so it's easy to erase and some they need to know before they go to in. Okay. So, [00:02:24] Stephen: all right. And besides, and I've [00:02:26] Chris: gotten into arguments, I've gotten into arguments with Ken hunt about like how I wanted to look and he had to explain to me why it can't look the way it is, but it's all fine. Okay. [00:02:35] Stephen: So besides getting on podcasts and getting your book in some local stores, what else are you doing to market the book? [00:02:42] Chris: That's kind of it other than calling stores and like sending 'em in emails or going to, oh, ed conventions festivals. I have a convention I'm coming up that Saturday. I don't remember the name of it, but it's being put on by a zombie hideouts, a conflict store in Springfield. Nice. OK. But I don't, I just don't remember the name of it off the top of my head, but I'm gonna be there as a guest all day. Nice. [00:03:04] Stephen: Good. Okay. Well, let's talk a little bit about collaborators because doing a graphic novel, a comic book definitely is a multi person endeavor, and you have to work with artists, which is different. Whereas a lot of authors working with collaborators, it's another author, so right. What's that like to work with the artists? What, what role do you do and how do you work together? 
[00:03:27] Chris: It could be very difficult, but in the end, it's just a matter of making sure that it gets drawn the way you want it and the way they think they can draw it. [00:03:35] Stephen: Okay. What issues happen to come up with doing something that's also graphic? [00:03:43] Chris: Trying to explain how it is in my head versus how they're gonna draw it, and them explaining to me that it's just not, it's just not gonna be able to be drawn that way,
Overview: Chris Denmead hosts a midnight horror-themed radio show, The Dr. Chris Radio of Horror show, in Worcester, MA. The show has been broadcasting on 91.3 FM from the WCUW building since Oct 2007. It airs Sunday night at 10pm. He recently published Vlada, a Dracula Tale, a graphic novella and audio book with a tie-in prequel comic book, with artist Ken Hunt. He gender swaps the cast of Dracula. He lives in Framingham, MA. He has one son and his black cat. When he is not writing books on the most unusual filmmakers in New England, he is watching Godzilla movies with his son or playing video games. He is an avid comic book collector. His Book: https://www.amazon.com/Vlada-Dracula-Christopher-David-Denmead/dp/B09CKCQ23C?crid=8TIYC20ITH14&keywords=vlada&qid=1657569520&sprefix=vlada%2Caps%2C190&sr=8-2&linkCode=li2&tag=discoveredwordsmiths-20&linkId=14e4e16305f397793a5292569322dece&language=en_US&ref_=as_li_ss_il Favorites: https://www.amazon.com/Girl-Next-Door-Jack-Ketchum-ebook/dp/B003IHC34I?crid=GVLKOC0OT44&keywords=girl+next+door&qid=1657654488&sprefix=girl+next+door%2Caps%2C121&sr=8-4&linkCode=li1&tag=discoveredwordsmiths-20&linkId=0dc0f8f4c111239d0da0d997547a1ee1&language=en_US&ref_=as_li_ss_il That's Entertainment - https://www.thatse.com/ YouTube: https://youtu.be/u3Cmb58ZyOk Transcript: [00:00:47] Stephen: Hello, welcome to episode 115 of Discovered Wordsmiths. Today I have Chris Denmead on, and Chris has done something fun and unique. He took the Bram Stoker classic of Dracula and rewrote it, gender-swapping all the roles, and he turned it into a graphic novel. So I was really interested in this, cuz I love vampire stories. I've been involved in a vampire anthology, which should be coming out this October. And um, they're, they're just one of my favorite reads. Uh, I've enjoyed reading Dracula and uh, the sequel and prequel that Dacre Stoker has done with JD Barker and uh, others. So. It was a, a fun talk. Uh, it was nice to hear about this and if you're interested in graphic novels, if you like vampires, this would be a good one to give a try. So here's Chris. Well, Chris, welcome to Discovered Wordsmiths. You said you're feeling a little under the weather today. Sorry about that. Hope. Hope you get better here. It's June. So before we talk about your book, tell us a little bit about yourself, where you live, what you like to do outside of writing, hobbies, that type of. [00:01:57] Chris: I live in Massachusetts in the Middlesex county area. I've been a writer for probably, you know, over a decade, including poetry as well. And at least going back to when I was like 19 now. And I've also been a, uh, the host of a radio show in Worcester, Mass, for the last 15 [00:02:15] years. Wow. Nice. What's the radio show about [00:02:20] Chris: Called Radio of Horror. [00:02:22] Stephen: What, what, what, what do you do on it? What's it. [00:02:25] Chris: It's called Radio of Horror. It's a horror-themed radio show; directors, actors, writers, musicians come on the show. We interview them. Okay, cool. Plus playing music as well. Cause it is a, a broadcasting show on the dial. All right. [00:02:39] Stephen: And do you have any special hobbies or anything you do outside of your writing and podcasting? [00:02:45] Chris: That's taken up so much of my time. I mean, I ride my motorcycle. Not sure that's really a hobby; I guess some people consider it to be a hobby. And I'm an avid comic book collector. Who's your [00:02:55] Stephen: favorite comic book series?
That's [00:02:59] Chris: I don't know if I have a favorite series. Things have like been up and down and crazy in the comic book industry, but Spidermans, I'll just say Spiderman is my favorite comic book character. So [00:03:06] Stephen: I agree. I love Spiderman. He's been my favorite forever. My son works at a comic store, so I hear [00:03:11] Chris: all the new, I don't have one specific book.
Chris is getting ready to travel, and of course, Sagewell started the day with an incident, a situation, if you will... Steph talks books perfect for vacations and feels sufficiently scarred regarding still working with moving fixtures over to FactoryBot. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Back to Basics: Boolean Expressions (https://thoughtbot.com/blog/back-to-basics-booleans) Sarah Drasner tweet (https://twitter.com/sarah_edo/status/1538998936933122048) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: All right, I am now officially recording as well. Let me make sure my microphone is in front of my face. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What's new in my world? Today is an interesting day. We are recording on a Friday, which is not normal for us, was normal for a long time and then stopped, but now it's back to being normal. But it's the morning, which is confusing. Also, I am traveling this evening. I leave on a flight going to Europe. So I'm going to do a red-eye, that whole thing. So I got a lot to pack into today, literally packing being one of those things. And then this morning, because obviously, this is the way the world should play out, we started the day with an incident at Sagewell, a situation. Some code had gotten out there that was doing some stuff that we didn't want it to do. And so we had to sort of call in the dev team. And we all huddled together and tried to figure it out. Thankfully, it was a series of edge cases. It was sort of one of those perfect storms. So when this edge case happens in this context, then a bad thing could happen. Luckily, we were able to review the logs; nothing bad happened. While I'm unhappy that we had this situation play out... basically, it was a caching thing, just to throw that out there. Caching turns out to be very hard. And the particular way it played out could have manifested in behavior that would have been not good in our system, or an admin would have inadvertently done something that would have been incorrect. But on the positive side, we have an incident review process that we've been slowly incubating within the team. One of our team members introduced it to us, and then we've been using it on a few different cases. And it's really great to just have a structured process. I think it's one of those things that will grow over time. It's a very simple; what's the timeline of what happened? What's the story as to why it happened and why it wasn't caught earlier? What are the actions that we're going to take? And then what's the appendix? What's the data that we have around it? And so it's really great to just have that structure to work within. And then similarly, as far as I can tell, the first even observable instance of this behavior in our system was yesterday morning. We saw it, started to respond to it, saw one more. We were able to chase it down in the logs. 
Overall, the combination of the alerting that we have in Sentry and the way in which we respond to the alerting in Sentry, which I think is probably the most critical part. Datadog is our log metrics tool right now. So we're able to go through Datadog, and we have Lograge configured to add more detail to our log lines. And so we're able to see a very robust story of exactly what happened and ask the question, did anything actually bad happen? Or was it just possible that something bad could happen? And it turns out just possible. Nothing actually happened. We were able to determine that. We were even able to get a more detailed picture of who were all the users who potentially could have been impacted. Again, I don't think there was any impact. But all total, it was both a very stressful process, especially as I'm about to go on vacation. It's like, oh cool, start to the day where I'm trying to wrap up things, and instead, we're going to spend a couple of hours chasing down an incident. But that said, these things will happen. The way in which we were able to respond, the alerting and observability that we had in place make me feel good. STEPH: I like the incident structure that you just laid out. That sounds really nice in clarifying what happened when it happened in the logs. And the fact that you're able to go through and confirm if anything really bad happened or not is really nice. And I was also just debating this is one of those things, right? Right when you're about to go on vacation, that's when something's going to break. And that's like, is that good or bad? Is it good that I was here to take care of it right before, or is it bad? Because I'd really like to not be here to take care of it. [laughs] You may have mixed feelings. I have mixed feelings. CHRIS: I think I'm happy. Unsurprisingly, this exists in one of the most complex parts of our codebase. And it involves caching. And I remember when we introduced the caching, I looked at it, and I was like, hmm, we have a performance hotspot that involves us making a lot of requests to an external system. And so we thought about it a little bit, and we were like, well, if we do a little bit of caching here, we can actually reduce that down from seven calls down to one over external HTTP. And so okay, that seems to make sense. We had a pull request. We did a formal review. And even I looked at the pull request where this was introduced initially, and my comments on it were like, yep, this all looks good. Makes sense to me. But it's caching-related. So let's be very careful and look very closely at it and determine if there's anything, but it's so hard to know. And in fact, the code that actually was at play here was introduced a month ago. And interestingly, the observable side effect only occurred in the past two days, which we find very surprising. But again, it's this weird like, if A happens and then within a short period after that B happens...and so it's not quite a race condition. But it was something where a lot of stuff had to happen in a short span of time for this to actually manifest. And so, again, we were able to look through the logs and see all of the instances where it could have happened and then what did happen. Everything was fine, but yeah, it was interesting. I feel actually good to have seen it. And I think we've cleared everything up related to it and been very proactive in our response to it so that all feels good. 
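For listeners who want to picture the kind of caching being discussed, the usual Rails move for collapsing repeated external calls is `Rails.cache.fetch` with an explicit key and expiry. This is a generic sketch with hypothetical names (`ExternalAccountsClient`, the key parts, the five-minute TTL), assuming a Rails app; it is not Sagewell's code. As the incident above suggests, the hard part is choosing the key and the expiry, not writing the wrapper.

```ruby
# Generic sketch: memoize an expensive external lookup behind the Rails
# cache so repeated callers within a short window share one HTTP call.
# ExternalAccountsClient and the key/expiry choices are hypothetical.
class AccountSummary
  CACHE_TTL = 5.minutes

  def initialize(user)
    @user = user
  end

  def balances
    Rails.cache.fetch(cache_key, expires_in: CACHE_TTL) do
      ExternalAccountsClient.new.balances_for(@user.id)
    end
  end

  private

  # Include anything that should invalidate the cache in the key itself;
  # stale or overly broad keys are exactly where incidents hide.
  def cache_key
    ["account-summary", @user.id, @user.accounts_updated_at&.to_i]
  end
end
```

The wrapper is three lines; the review effort belongs on the question of what can change between cache writes and whether the key captures it.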
And also, this is the sort of thing we've done this a few times now where we've had what I would call lesser incidents. There was no customer-facing impact to this. Similarly, previous incidents, we've had no or very minimal customer-facing impact. So at one point, we had a situation where we weren't processing our background jobs for a little while. So we eventually caught up and did everything we needed to. It just meant that something may not have happened in as timely a fashion as necessary. But there were no deep ramifications to that. But in each of those cases, we've pushed ourselves to go through the incident process to make sure that we're building the muscle as a team to like, actually, when the bad one comes, we want to be ready. We want to have done a couple of fire drills first. And so partly, I viewed this as that because again, there was smoke, but no fire is how we would describe it. STEPH: Nice. And that also makes sense to me how you were saying y'all introduced this about a month ago, but you were just now seeing that observable side effect. I feel like that's also how it goes. Like, you implement, especially with caching, some performance improvement, and then you immediately see that. And it's like, yay, this is wonderful. And then it's not til sometime passes that then you get that perfect storm of user interactions that then trigger some flow that you didn't consider or realize that could create an issue with that caching behavior. So yeah, that resonates. That seems right. All caching problems usually take about a month or two when you've just forgotten about what you've done. And then you have to go back in. CHRIS: Yep. Yep, yep, yep. So now we've done the obvious thing, which is we've removed every cache from the system whatsoever. There are no caches anymore because it turns out we just can't be trusted with caches in any form whatsoever. ActiveRecord, we turned off caching, Redis we threw it out. No, I'm kidding. We still have lots of caching in the app. But, man, caching is so hard. STEPH: I would love if that's in the project README where it says, "We can't be trusted with caches. No caches allowed." [laughs] CHRIS: Yeah, we have not gone all the way to forbid caching within the application. It's a trade-off. But this does have that you get those scars over time. You have that incident that happens, and then forever you're like, no, no, no, we can't do X. And I feel like I'm just a collection of those. Again, I think we've talked about this in previous episodes. But consulting for as long as I did, I saw a lot of stuff. And a lot of it was not great. And so I basically just look at everything, and I'm like, urgh, no, this will be hard to maintain. This is going to go wrong. That's going to blow up someday. And so, I'm having to work on trying to be a little more positive in my development work. But I do like that I have that inclination to be very cautious, be very pessimistic, assume the worst. I think it leads to safer code in general. There was actually a tweet by Sarah Drasner that was really wonderful. And it's basically a conversation between her and another developer. It's a pretend conversation. But it's like, "But why don't you like higher-order components?" And then it's Squints. "Well, in the summer of 2018, something bad happened, Takes a long drag of a cigarette. something very bad." It's just written so well and captures the ethos just perfectly. Like, sit down. Let me tell you a tale of the time in 2018. 
[laughs] So I'll include a link to that in the show notes because she actually wrote it so well too. It's got like scene direction within a tweet and really fantastic stuff. But yeah, we'll allow some caching to continue within the app. STEPH: That's amazing. So I was just thinking where you're talking about being more pessimistic versus optimistic. And there's an interesting nuance there for me because there's a difference in like if someone's pessimistic where if someone just brings up an idea and someone's like, "Nope, like, that's just not going to work," and they just always shoot it down, that level of being pessimistic is too much. And it's just going to prevent the team from having a collaborative and experimental environment. But always asking the question of like, well, what's the worst that could happen? And what are the things that we should mitigate for? And what are the things that are probably so unlikely that we should just wait and see if that happens and then address it? That feels like a really nice balance. So it's not just leaning into saying no to everything. But sure, let's consider all the really bad things that could happen, make a plan for those, but still move forward with trying things out. And I realized I do this in my own life, like when someone asks me a question around if there's something that we want to do that's a bit kind of risky. And the first thing I always think of is like, well, what's the worst that could happen? And I think that has confused people that I immediately go there because they think that I'm immediately saying no to the idea. And so I have to explain like, no, no, no. I'm very intrigued, very interested. I just have to think through what's the worst that can happen. And if I'm okay with that, then I feel better about accepting it. But my emotional state, I have to think through what's the worst and then go from there. CHRIS: Wow, it's a very bottom-up approach for your life planning there. [chuckles] STEPH: Yep, I think that's, you know, it's from being a developer for so long. It has impacted now how I make other decisions. Good or bad? Who knows? Yeah, it turns out being a developer has leaked into my personal life. I've got leaky abstractions over here. So, good or bad? Who knows? CHRIS: Leaky abstractions all the way down. Yeah, circling back to, like, I don't think I'm pessimistic per se. The way that I see this playing out often is there will be a discussion of an architectural approach, or there's a PR that goes up. And my reaction isn't no, or this has a known failure point; it is more of uh, this makes me uncomfortable. And it's that like; I can't even say exactly why, and that's what makes it so difficult. And I think this is a place that can be really complicated for communication, particularly between developers who have been around for a little bit longer and have done this sort of thing and have gathered these battle scars and developers who are a bit newer. Having that conversation and being like, um, I can't say exactly why. I can tell you some weird stories. I might not even remember the stories. Some of it just feeds into just like, does this code make me uncomfortable? Or does this code make me happy? And I tend towards wildly explicit code for these reasons. I want to make it as clear as possible and match as close as possible to the words that we're saying because I know that the bugs hide in the weird corners of our code. So I try and have as few corners. 
Make very rounded rooms of code is a weird analogy that doesn't play, but here we go. That's what I do on this show is I make weird analogies. Actually, we were working on some code that was dealing with branching conditional things. So we had a record which has a boolean value on it. So we've got true or false, and then we've got two states, and then we've gotten an enum with three states. So all total, we have six possible states. But as we were going through this conversation, I was pairing with another developer on the team. And I was like, something feels weird here. And I actually invoked the name of Joël Quenneville because much of the data structure thought that I had here I associate with work that Joël has done around Maybe and things like that. And then also, my suggestion was let's build a truth table because that seems like a fun way to manage this and look at it and see what's true. Because I know that there are spots on this two-by-three grid that should never happen. So let's name that and then put that in the code. We couldn't quite get it to map into the data type, like into that Boolean in the enum. Because it's possible to get into those states, but we never should. And therefore, we should alert and handle that and understand, like, how did this even happen? This should never happen. And so we ended up taking what was a larger method body with some of the logic in it and collapsing it down to very explicitly enumerate the branches of the conditional and then feed out to a method. Like, call a method that had a very explicit name to say, okay, if it's true and we're in this enum state, then it's bad, alert bad. And then the other case like, handle the good case. And I was very happy with what we refactored down to because this is another one of those very complex parts of our code. Critical infrastructure-y is how I would describe it. And so, in my mind, it was worth the I'm going to go with pathological refactoring that we got to there. But yes, I was channeling Joël in that moment. I'm very happy to have had many conversations with him that help me think through these things. STEPH: That's awesome. Yeah, those truth tables can be so helpful. There's a particular article that, of course, Joël has written that then describes how a truth table works and ways that you can implement it into your habits. It's called Back to Basics: Boolean Expressions. I will be sure to include a link in the show notes. CHRIS: But yeah, I think that summarizes my day and probably the next couple of days as I prepare for an adventure over to Europe and chat about developer spidey sense. But yeah, what's new in your world? STEPH: Yeah, that's a big day. There's a lot going on. Well, I actually want to circle back because you mentioned that you're packing and you're going on this trip. And I'm curious, do you have any books queued up for vacation? CHRIS: I do, yeah. I'm currently reading Elantris by Brandon Sanderson. Folks might be aware of his work from the highest-funded Kickstarter of all time, which was absurd. Did you see this happen? STEPH: I don't think so, uh-uh. CHRIS: He did this fun, cheeky little Kickstarter. The video was sort of a fake around...oh, it almost sounded like he might be retiring or something like that. And then he's like, JK, I wrote five new books. And so the Kickstarter was for those books with different tiered packages and whatnot. I think he got just the right viral coefficient going on. 
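Circling back to the truth-table refactor described a moment ago: a minimal sketch of that shape might look like the following, where the boolean flag, the three enum values, and the handler names are all hypothetical stand-ins for the real domain code.

```ruby
# Hypothetical sketch: enumerate the 2x3 grid of (boolean flag, three-value enum)
# explicitly, so the "should never happen" cells are named and alerted on.
class RecordStateHandler
  def call(record)
    case [record.confirmed?, record.status]
    in [true, "pending"]   then handle_pending_confirmation(record)
    in [true, "active"]    then handle_active(record)
    in [true, "archived"]  then impossible_state!(record) # should never happen
    in [false, "pending"]  then handle_unconfirmed(record)
    in [false, "active"]   then impossible_state!(record) # should never happen
    in [false, "archived"] then handle_archived(record)
    end
  end

  private

  def impossible_state!(record)
    # Surface the contradiction loudly instead of silently continuing; in a setup
    # like the one described earlier, this is where the error tracker gets notified.
    raise "Impossible state for record #{record.id}: #{record.status.inspect}"
  end

  # Placeholder handlers for the sketch; the real versions would do the actual work.
  def handle_pending_confirmation(record); end
  def handle_active(record); end
  def handle_unconfirmed(record); end
  def handle_archived(record); end
end
```

Which cells count as impossible is invented here; the point is only that every combination in the truth table gets a named branch, so a contradictory pair of values trips an alert instead of sliding through a catch-all else.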
And apologies for the spoiler if anyone's not seen the video, but it's been out there for a while. So he wrote some books, and that's what the Kickstarter is for. You get some books. You sort of join a book club, and you'll get one a quarter. A million dollars seems like that will be a bunch for that. That'd be great. If he raised a million dollars, that'd be amazing. $40 million four-zero million dollars. [laughs] I'm just watching it play out in real-time as well. It just skyrocketed up. The video, I think, was structured just right. He got it onto the...it was on Reddit and Twitter and just bouncing around, and people were sharing it. And just everything about it seemed to go perfectly. And yes, the highest-funded Kickstarter of all time, I believe, certainly within the publishing world. But yeah, Brandon Sanderson, prolific author, and his stuff ends up just being kind of light and fun. And so I was reading Elantris for that. It's been a little bit slower to pick up than I would like. So I'm now in the latter half. I'm hoping it'll go a little bit more quickly and be...I'm just kind of looking for a fun read, some fantasy thing to go on an adventure. But as the next book, I downloaded a second one just to make sure I'm covered. I have a book by John Scalzi, who's a sci-fi, fantasy, more on the sci-fi end of the spectrum. And I've read some of his other stuff and enjoyed it. And this particular book has a very consistent set of reviews. I've read the reviews a few times. And everybody who reviews it is just like, "This isn't the greatest book I've ever read, but man was it a fun ride." Or "Yeah, no, best book? No. Fun book? Yes." And just like, "This book was a fun ride. This was great." And I was like, perfect. That is exactly what I'm looking for on a European vacation. The book is called The Kaiju Preservation Society, which also plays on monsters, Pacific Rim Godzilla. Kaiju, I think, is the word for that category of giant dinosaur-like monster. And so it's the Kaiju Preservation Society, which, I don't know, means some stuff, and I'm going to go on a fun adventure. So yeah, those are my books. STEPH: Nice. I've got one that I'm reading right now. It's called Clementine: The Life of Mrs. Winston Churchill, written by Sonia Purnell. And Sonia Purnell tends to focus on female historical figures. And so it's historical fiction, which is a sweet spot for me. The only thing I'm debating on is because I'm realizing as I'm reading through it, I'm questioning, okay, well, what's real and what's not? Because I don't want to be that person that's like, did you know? And then, I quote this fictional fact about somebody that was made up for the novel. [laughs] So I'm realizing that maybe historical fiction is fun, but then I'm having to fact-check all the things because then I'm just curious. I'm like, oh, did this really happen, or how did it go down? So it's been pretty good so far. But then it makes me wish that historical fiction novels had at the back of them they're like, these are all the events that were real versus some of the stuff that we fictionalized or added a little flair to. I'm in that interesting space. I also like how you highlighted that you chose a fun book. I was having a conversation with a colleague recently about downtime. And like, do you consume more tech during downtime? Like, are you actively looking for technical blog posts or technical books to read or podcasts, things like that? And I was like, I don't. My downtime is for fun. 
Like, I want it to be all the things that are not tech. Maybe some tech sneaks in there here and there, but for the most part, I definitely prioritize stuff that's fun over more technical content in my spare time, which has taken me a little while to not feel guilty about. Earlier in my career, I definitely felt like I should be crunching technical content all the time. And now I'm just like, nope, this is a job. I'm very thankful that I really enjoy my job, but it's still a job. CHRIS: It is an interesting aspect of the world that we work in where that's even a question. In my previous life as a mechanical engineer, the idea that I would go home and read about mechanical engineering...I could attend a conference, but I would do that for very particular reasons and not because, like, oh, it's fun. I'll go meet my friends. For me, this was a big reason that I moved into tech because I am one of those folks who will, like, I will probably watch a video about Remix in particular because that's my new thing that I like to play around with and think about. But it needs to be a particular shape of thing I've found. It needs to be exploratory, puzzle-y. Fun code, reading, learning work that I do needs to be separated from my work-work in a certain way. Otherwise, then it feels like work, then it is sort of a drudgery. But yeah, my brain just seems to really like the puzzle of programming and trying to build things. And being able to come into a world where people share as much as they do blogs and conference talks and all of that is utterly fantastic. But it is a double-edged sword because I 100% agree that the ability to disconnect to, like, work a nine-to-five and then go home at the end of the day. Yeah, go home, you know, because you remember when we went to an office and then we would go home afterwards? I have to commute every once in a while into the city and -- STEPH: You mean go downstairs or go to another room? That's what you mean? [laughs] CHRIS: I used to commute every day, and it took a lot of time. And now when I do it, I feel that so viscerally because I'm like, it's just a lot easier to just walk to my office in my house. But yes, I 100% I'm aligned to that like, yeah, no, you're done with work for the day, walk away. That's that. And learning a new technology or things like that, that's part of the job. There shouldn't be the expectation that that just happens. There's continuing education in every other field. It's like, oh, we'll pay for your master's degree so you can go learn a thing. That's the norm in every other...not in every other industry but many, many, many industries. And yet the nature of our world the accessibility of it is one of the most wonderful things about it. But it can be a double-edged sword in that if there are the expectations that, oh yeah, and then, of course, you're going to go home and have side projects and be learning things. Like, no, that is an unreasonable expectation, and we got to cut that off. But then again, I do do that. So I'm saying two things at the same time, and that's always complicated. STEPH: But I agree with what you're saying because you're basically respecting both sides. If people enjoy this as a hobby, more power to you.; that's great. This is what you enjoy doing. If you don't want to do this as a hobby and respect it as a job, then that's also great too. There can be both sides, and no side should feel guilty or judged for whichever path that they pursue. 
And I absolutely agree, if there are new skills that you need to learn for a job, then there should be time that's carved out during your work hours that then you get to focus on those new skills. It shouldn't be an expectation that then you're going to work all day and then spend your evening hours learning something else. And same for interviews; there shouldn't be a field that says, "Hey, what are your side projects?" Or at least that should not be an important part of the interview. There should be an alternative to be like, "Or what work code do you want to talk about?" Or something else that's more in that nine-to-five window that you want to talk about. That way, there's a balance between like, sure, if you have something that you want to talk about on the side, great, but if not, then let's focus on something that you've done during your actual work hours because that's more realistic. CHRIS: I do think there's an interesting aspect at play because the world of development moves so rapidly and because it's constantly changing. And to frame it differently, I don't think we've got this thing figured out. And so many people lament how quickly it changes and that there's a new framework every other week. And there's a bit of churn that is perhaps unnecessary. But at the same time, I do not feel like as a community, as a working population, that we're like, yeah, got it, crushed it. We know how to make great software, no question about it. It's going to be awesome. We're going to be able to maintain it for forever, don't even worry about it. New feature? We can get that in there. They're actually still pretty rare. So we need to be learning, and evolving, and exploring new techniques. I think the amount of thinking is probably good mostly in the development world. But organizations have to make space for that with their teams. And thoughtbot obviously does that with investment days. That's just such a wonderful structure that embraces that reality and also brings happiness, and it's just a pleasant way to work. And frankly, my team does not have that right now. We do the crispy Brussels snack hour, which also now has a corresponding crispy Brussels work lunch, which is one week we think about it, and the next week we do the thing. We're trying to make space for that. But even that is still more intentional and purposeful and less exploratory and learning. And so it's an interesting trade-off. I deeply believe in this thing, and also, the team that I'm leading isn't doing it right now. Granted, we're an early-stage startup. We got to build a bunch of stuff. I think that's fine for right now. But it is a thing that...again, I'm saying two things at the same time, always fun. STEPH: Well, and there might be a nice incremental approach to this as well. So thoughtbot has the entire day, and maybe it's less than a full day. So perhaps it's just there's an hour or two hours or something like that where you start to introduce some of that self-improvement time and then blossom out from there. Because yeah, I understand that not all teams may feel like they have the space for that. But then I agree with everything else you said that it really does improve team morale and gives people a space to then be able to get to explore some of those questions that they had earlier. So then they don't feel like they have to then dedicate some weekend time or off hours' time to then look into a question. And I admit, I'm totally guilty too. 
I am that person who then works extra hours, but it's because, like you said, there's a puzzle that my brain is stuck on and I just feel the need to get through it. But then I look at that and ask, am I doing this because I want to? Yes. Okay, then as long as I'm happy and I don't feel like this is increasing any concern around burnout, then I don't worry about it. MIDROLL AD: Debugging errors can be a developer's worst nightmare… but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! Circling back to your original question about what's going on in my world: you mentioned scarring earlier. I feel sufficiently scarred [laughs] with regard to still working on moving fixtures over to FactoryBot. This week has really confirmed that fixtures don't trigger a lot of the callbacks, the model callbacks that exist. And so this really means that you can just create bad data that your application wouldn't actually allow you to create. So there are tests that are exercising behavior that should never exist. And then porting that over to FactoryBot then highlights that because then as soon as I move that record over and then try to create it or do something with it, then the app and the test do the right thing and let me know, saying no, no, no, we've added validations. You can't do that anymore. That has been grinding my gears in terms of trying to then translate it all over. Because then I have to really dive into the code to understand it. And the goal here is to stay as high level as possible and not have to dive in too much. But then that means that I do have to dive in and understand more. So this has frankly just been one of those times in my career where you just kind of have to slog through the work. It's important work to be done. It'll be great once it's done. But it's a painful process. And the best way that I've found to make it more enjoyable is to be in heavy communication with Joël, who's on the project with me, just so if we get stuck on something, then we can chat with each other. And then also there's one file that's particularly gnarly.
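To make the failure mode Steph describes concrete, here is a small sketch, with a hypothetical model and attributes rather than the client codebase, of how a fixture can insert data the application itself would never allow, and how the equivalent factory surfaces that immediately.

```ruby
# Hypothetical model: the app requires an email and normalizes it before saving.
class User < ApplicationRecord
  validates :email, presence: true
  before_save { self.email = email.downcase }
end

# A fixture row like this one:
#
#   # test/fixtures/users.yml
#   no_email_user:
#     name: Steph
#
# is bulk-inserted straight into the database, skipping validations and model
# callbacks entirely, so tests can quietly depend on records the app could never create.

# The FactoryBot equivalent goes through the normal persistence path instead:
FactoryBot.define do
  factory :user do
    name  { "Steph" }
    email { "steph@example.com" }
  end
end

FactoryBot.create(:user, email: nil)
# => raises ActiveRecord::RecordInvalid: Validation failed: Email can't be blank
```

That is the mismatch being ported here: the fixture-backed tests were exercising states the validations forbid, and moving them to factories makes the application push back right away.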
And so we moved over one test. We were successful, and which felt great because then we could at least document like, okay, when we come back to this, at least we have one example that highlights the wonkiness that we ran into. But we've decided, okay, we're done with that file. We're going to take a break. There's a lot there, but we're going to move on and give ourselves a break and do some of the easier ones, and then we'll circle back to the harder one. Which was, I think, just a bit of bad luck in terms of, like, as we're going down the list, that happened to be like the gnarliest one, and it was like the first one that Joël picked up. And so I'm going through a couple of files, and Joël is like, "What? [laughs] How are you making progress?" And we realize it's just because that file, in particular, is very hard to find all the mystery guests and then to move everything over. Finding a positive note through all of the cruft, I will say this is helping with some of my code sleuthing skills. So as I am running into these problems and then looking for mystery guests, I'm noticing ways that I can then, as quickly as possible, try to triage and identify as to why one test doesn't match another test. Some of it is more specific to the application setup, so it won't be as applicable to future projects. But then some other areas have been really helpful. Like, I'm using caller a lot more to understand, like, I know this is getting called, but I don't know who's calling you. So I can put in a line that basically outputs like, show me your stack traces to how you got here. So that's been really nice as well. So it has improved some of my code sleuthing skills and also my spidey sense in terms of it's typically mystery guests. Like when a test isn't passing, it's because fixtures are creating extra data that are getting pulled in when there are queries that are being run. But they're not explicitly referenced in the test setup itself. So that's typically then where I start is looking for what record looks relevant to this test that I haven't pulled over to my test setup. CHRIS: I appreciate you finding the silver lining, the positive bit of this. Because as you're describing, the work that you're doing sounds like I think you use the word slog, which seems like a very accurate term. But sometimes we have to do that sometimes for a variety of reasons. We end up either having to introduce new code or fix old code, but this is sometimes the work. And this is something that I think you and I share about this show is we get to show all sides of the work. And the work can be glamorous and new. And oh, I've got this greenfield app that I'm building, and it's wonderful. Look at the architecture. And I know in the moment that I'm building someone else's legacy code three years from now. [laughs] And so telling the other side of the story and providing that rounded point of view, because like, yeah, this is all part of it. Again, I don't believe that this is a solved problem, building robust software that we can maintain. And so yeah, you're doing the good work in there. And I thank you for sharing it with us. STEPH: Thanks. Just don't use fixtures in your test, I beg of you. Please don't do that to the legacy code that you're writing for future developers. [laughs] That is my one request. CHRIS: And I will maybe add on to that, sparingly use callbacks. Maybe don't use them at all, and certainly don't use the combination because, my goodness, that'll lead you into some fun times. 
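For anyone who has not reached for it, the caller trick mentioned above is just Ruby's built-in stack trace; a throwaway debugging line like this sketch (dropped into whatever hypothetical method is being invoked unexpectedly) shows exactly who called it.

```ruby
def apply_discount(order) # hypothetical method under investigation
  # Kernel#caller returns the current call stack as an array of
  # "file:line:in `method'" strings, so this prints the code path that led here.
  Rails.logger.debug("apply_discount called from:\n#{caller.first(15).join("\n")}")

  # ...original method body continues...
end
```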
But yeah, just two small recommendations there. STEPH: Oh, there's something else I wanted to share. I saw that Slack added a new audio feature that allows you to record the pronunciation of your name, which is the feature that I was so excited about when we added it to our internal tool called Hub at thoughtbot. And now Slack has it on their profile so that way you can upload the pronunciation. And then anyone looking at your profile can then listen to how to pronounce your name. There are a couple of other features that they released, I think just in June, so about a month ago from the recording of today. [laughs] That's weird to say, but here we are. So I'll include a link in the show notes so folks can see that feature in addition to others, but I'm super excited. CHRIS: Oh, that is nice. I also like all right, so Slack now has it. Hub now has it. But I don't have access to Hub anymore. And I don't have access to every Slack in the world yet. But here's my suggestion. All right, everybody, stick with me here. I want you to own a domain. I want you to have a personal site on it. And I want the personal site to include the pronunciation of your name. I get that that's a big ask. And I get that there are other platforms that are calling to you, and you may be writing on those. But you know what? Just stand up a little site, just a little place on the internet that you own. And if it includes the pronunciation of your name, I will be forever grateful. STEPH: I like this idea. I initially was taking your idea and immediately running with it as you were speaking it because then I wondered if everyone had their own YouTube channel. But I don't know how hard it is to create a YouTube channel. I am not a YouTube channeler, so I don't know what that looks like. [laughs] But not everybody will know how to purchase a domain. So that might be another approach. CHRIS: I think it's pretty easy to do a YouTube channel. I'm conflating a couple of things. This is my basket of beliefs about people on the internet, but I kind of think everybody should own their own little slice of the internet. And so totally, YouTube is a place where the people make some stuff, make videos, put them on YouTube, absolutely. But ideally, you own something. I see a lot of people that are on YouTube, and that's it, and so their entire audience lives on YouTube. And if YouTube someday decides to change or remove them or say Medium as an example, Medium actually, I think, does a more interesting version of this where your identity kind of gets subsumed into Medium. And I really think everybody should just have their own little, tiny slice of the internet that's there. It has their name that they own that no platform can decide; hey, we've shifted, and now your stuff is gone. Cool URIs don't change as they say, and that's what I want. And then yeah, if you can have the pronunciation of your name on there, that's extra nice. Although I say that, and I don't know that I would do it because my name feels very obvious. One day someone was like, "Oh, how do you pronounce your last name?" I forget if I actually replied with the pronunciation. Or if I was like, "I need to know what options you're considering. I'm so interested because I've really only got the one." Maybe I'm anchored. Maybe I'm biased. [chuckles] I've been doing this for a while. But I really cannot think of another pronunciation of my name. STEPH: You might hear another one that you really like, and you need to pivot. CHRIS: Oh gosh. 
STEPH: That's the point where you start pronouncing your name differently. CHRIS: Wow, that would be a lot. And then, I could have a change log on my personal site where people can see this is the pronunciation, and this is what the pronunciation used to be. STEPH: [laughs] I like this idea. I also like this idea that everybody has their own slice of internet land. I like this encouragement that you're providing for everyone. On a slightly different note, there's a blog post that I'm really excited to talk about. It's written by Eric Bailey, who's a former thoughtboter. It's called The Optics of Pair Programming. And given how much pair programming I'm doing, especially with Joël on the current project, it was a really wonderful read. And it also helped me think about pairing from a different perspective because we do have a very strong pairing culture at thoughtbot. So there's a lot of nuance, especially social nuance, that can go along with inviting someone to pair with you that I had not considered until I read this wonderful post by Eric. And we'll be sure to include a link in the show notes. But to provide an overview, essentially, Eric shares that, coming from thoughtbot, where we do have a very open approach to pairing, where pairing sessions are voluntary and also last as long as the problem lasts...when you're at a new company, you could experience pushback if you're inviting someone to pair, and it's worth considering why that pushback may exist. And some of the high-level areas that Eric highlighted are power dynamics, assessment, privacy, and learning styles. So to dive into some of those, with power dynamics it's important to consider who's offering to pair. So if I've joined a team as a consultant, there may be a power dynamic there that someone is feeling where their team is paying for my time. So they may feel like they can't say no if I offer to pair. They feel like they need to say yes to the invitation, even if they don't really want to. Or probably a more classic example would be like, what if your boss wants to pair or someone that's just more senior than you? Then it could leave you feeling like, well, I can't say no to this person, can I? Which yes, you totally can say no to that person, but it may leave you in a place where you feel like you can't. And so, it puts you in this sort of uncomfortable and powerless position. The other one is assessment, so offering to pair with someone could feel like you are implying that you want to assess their skills or that you're implying that they're not up to the task and therefore they need your help. So then that could also place someone in an uncomfortable position. There's also privacy. So someone who isn't confident may not want someone to observe their behavior or observe how they're working. It could make them feel really anxious, and I love that Eric points this out: ironically, pairing is really good at addressing that lack of confidence because then you get to see how other people work through their problems or how they think, or they may also have some anxiety. Or it just helps you become more comfortable talking and thinking through things with other people. So that one is a tough one where it's hard to get over that initial hurdle. But actually, the more you pair, then the less anxious you'll feel when you pair. And then there's also learning styles, because pairing involves a lot of deep thinking but also interacting with another person in real time.
And it can be hard to balance both of those, and it's just not as effective for some people. So I know that even as much as I really enjoy pairing, I just need to sit with code on my own sometimes. I need to think about it. I need to run it; I need to look at it. So it's really nice to talk with someone. But then I also need that alone time to then just think through it on my own because I can't have that same deep focus if I'm also worried about how the other person is experiencing that session because then my mental energy is going towards them. So that covers a number of the social nuances that can be included or running through someone's mind when you extend an invitation to them to pair. And it really resonated with me the areas that Eric highlights in this blog post. He also talks about a couple of strategies, which I'd love to dive into as well. But I'm going to pause here and see what thoughts you have. CHRIS: Yeah, I love this post. And it got me thinking about pairing and the broader human backdrop of all of the processes and workflows that we have. Everything he highlighted about pairing feels true. Although similar to you and to Eric, I've worked in a context where pairing was a very natural, very regular part of the work and sort of from the very top-down. And so everyone pairing between developers of any different level or developers and designers or really anyone in the...it was just such a part of how we worked that no one really questioned it or at least not after the first couple of weeks. I imagine joining thoughtbot those first weeks; you're like, oh God. As I shared, I think in the previous episode that we recorded, my pairing interview was with Joe Ferris, the CTO of thoughtbot, [laughs] writing a book about good and bad code. And I was like, I don't know what anything is here but very quickly getting over that hurdle. And having that normalizing experience was actually really great, and then have been comfortable with it since. But the idea that there are so many different social dynamics at play feels true. And then as I think about other things, like stand-up is one that I think of as this very simple this is a way to communicate where we're at. And where necessary or where useful, allow people to interject or step in to say, "Oh, let me help you get unblocked there or whatever it is." But so often, I see stand-up being a ritual about demonstrating that you are, in fact, doing work, which is like, here's what I did yesterday. I don't know if it's useful. Then mention that you're working on this project. But the enumeration of look, obviously, work was done by me. You can see it; here are the receipts. It's very much this social dynamic at play. And retro is another one where like, if retro is very much owned by one voice and not a place that change actually happens where people feel safe airing their opinions or their concerns, then it's going to be a terrible experience. But if you can structure it and enforce that it is a space that we can have a conversation, that everyone's voice is welcome and that real change happens as a result of, then it's a magical tool for making sure we're doing the right things. But always behind these are the people, and feelings, and the psychology at play. And so this was just such an interesting post to read and ruminate on that a little bit more. STEPH: Yeah, I agree, especially with a comment that you made about those daily syncs where I really just want to focus on today and what you have that you're blocked on. 
So it's a really nice update in case there are any cross-collaboration opportunities. That's really what I'm looking for in a daily update. And so I appreciate when people don't go through a laundry list of what they did yesterday because it's like, that's great. But then, like you said, it's just like you're trying to prove here's what I've done, and I trust you; you're working. So just let me know what you're doing today, friend. So Eric does a wonderful job of also including some strategies for ways that then you can address some of these concerns and then how there may be some extra anxiety that's increased when you're inviting somebody to pair. There are some wonderful strategies. I'll let folks read through the blog post itself. There are a couple in particular that came to mind for me because I was then self-assessing how do I tend to approach pairing with someone? And some ways that I want them to feel very comfortable with that experience. And there's a couple. There's one where I recognize that I need to build trust with each person. I can't just go on to a team and expect everyone to know that I have good intentions and that I'm going to do my best to be a fun, helpful pairing partner, and that it's not a zone of judgment. And that has to be cultivated with each person. Because especially as a consultant, if I'm joining a team, the people who hired me are not necessarily the people that I'm working with. It's someone that's probably in leadership or management that has then brought on thoughtbot. And so then the people that I'm working with they don't know me, and they don't know what my pairing style is going to be. So looking for ways to build trust with each person and then also inviting them or asking for help myself. So there's a bit of vulnerability that has to be shown to build trust with someone to say," Hey, I'm stuck on a problem. I would love a second set of eyes. Would you be willing to help me out with this?" So then that way, they're coming in to help me initially versus I'm going in and saying, "Hey, can I help you?" I have found that to be an effective strategy. And there's one that I do really want to talk about, and that's not everyone is going to pair well together. Like, you may find someone who always leaves you feeling just stressed or demoralized. And while it's important to consider your role and why that's true, that does not mean it's your fault and necessarily your problem to fix. So similar to having to manage up, you may need to coach the person that you're pairing with in ways that help you feel comfortable pairing. But if they don't listen to your requests and implement any of that feedback, then just don't pair with that person. That is a very fine option to recognize people that are not receptive to your needs and, therefore, not someone that you need to then force into being a great pairing buddy. And I emphasize that last one because it took me a little while to become comfortable with that and accepting that it wasn't my fault that I wasn't having a great pairing session with people. Similar to when I'm learning from someone that if someone is explaining something to me and they're making me feel inadequate while they're explaining it to me, that's not necessarily my fault. Like, I used to internalize that as like, oh, I just can't get this. But I am now a very staunch believer in if you can't explain it to me in a way that I understand, then that's probably more on you than on me. 
And that has also taken me time to just really accept and embrace. But once you do, it is so freeing to realize that if someone's explaining a concept and you're still not getting it, it's like, hey, how can we try harder together versus you just making me try harder? CHRIS: I like that right there of like, if I don't understand this, it may actually be you, not me, or something to that effect. Let's get that on a bumper sticker and put that in The Bike Shed store so that everybody can buy it and put it on their cars or at least just us. But yeah, that starting from the bottom sometimes it's just not going to work great. There are even...I think what you're describing sounds a little more complicated, individuals who are personally not great at communicating or pairing or things like that. And that's going to happen. We're going to run into folks that...pairing is communication. That's just the core of it, and some folks, that may not be their strongest suit. But I think there's another category of just like different working styles. And whereas I might...judge is such a heavy word, but I'm going to use it. I might judge someone who is not doing a great job at communicating to someone else, or understanding their point of view, or striving to do that, or taking feedback. Like, those are not great things. Whereas there may just be two different development styles or backgrounds, or there are other reasons that actually they may be not an ideal fit. That said, I have definitely found that in almost every variation of pairing, I've seen work at some point. Like, when I was very early on in my career pairing with folks that are very senior, I didn't get most of it, but I got some stuff. And then folks that are very much on the same level or folks that have a deep knowledge in framework, code base language, whatever and folks that are new to it but have a different set of experiences. Basically, every version of that, I found that pairing is actually an incredibly powerful technique for knowledge sharing, for collaboration, for all of that. So although there are rare cases where there might be some misalignment, in general, I think pairing can work. I do think you hit on something earlier of there are certain folks that are more private thinkers, is how I would describe it, where thinking out loud is complicated for them. I'm very much someone who talks. That's how I figure out what I think is I say stuff. And I'm like, oh, I agree with what I just said. That's good. But I find I actually struggle. There's something I think of...maybe I'm just a loudmouth is what I'm hearing as I say it, but that is how I process things. Other folks, that is not true. Other folks, it's quite internal, and actually trying to vocalize that or trying to share the thought process as they're going may be uncomfortable. And I think that's perfectly reasonable and something that we should recognize and make space for. And so pairing should not be forced upon a team or an individual because there are just different mindsets, different ways of thinking that we need to account for. But again, the vast majority of cases...I've seen plenty of cases where it's someone's like, "I don't like to pair. That's not my thing." And it's actually that they've had bad experiences. And then when they find a space that feels safe or they see the pattern demonstrated in a way that is collegial, and useful, and friendly, then they're like, oh, actually, I thought I didn't like pairing. I thought I didn't like retro. 
I thought I didn't like stand-up. But actually, all of these things can be good. STEPH: Yeah, absolutely. It's a skill like anything else. You need to see value in it. And if you haven't seen value in it yet or if it's always made you anxious and uncomfortable, then it's something that you're going to avoid as much as possible until someone can provide a valuable, positive experience around how it can go. I'm going to pull back the curtains just a little bit on our recording and share because you've mentioned that you are very much you think out loud, and that's how you decide that you agree with yourself. And I think already at least twice while we've been recording this episode, I have started to say something, and I'm like, no, wait, I don't agree with that and have backed myself up. CHRIS: [laughs] STEPH: And I'm like, no, I just thought through it; I'm going to cancel it out, [laughs] and then moved in a different direction. So I, too, seem to be someone that I start to say things, and I'm like, oh, wait, I don't actually agree with what I just said [laughs], so let's remove that. CHRIS: Yep. You've described it as Michael Scott-ing on a handful of different episodes or maybe things that were cut from episodes. But where you start a sentence and then you're like, I don't know where I was going to end up there. I hoped I'd figure it out by the end, but then I did not get there. And yeah, I think we've all experienced that at various times. STEPH: That's some of my favorite advice from you is where you've been like, just lean into it, just see where it goes. Finish it out. We can always take it out later. [laughs] Because I stop myself because I immediately start editing what I'm trying to say and you're like, "No, no, just finish it, and then we'll see what happens." That's been fun. CHRIS: This is how you find out what you think. You say it out loud, and then you're like, never mind. That was ridic – STEPH: [laughs] CHRIS: I do. Actually, now I'm thinking back, and I have plenty of those where I'll say a thing, and I'm like, nope, never mind, send that one back. [chuckles] As an aside, so we do this thing where we host a podcast, and we get to talk. But we're both now describing the pattern where we'll start to say something, and we'll be like no, no, no, actually, not that. And I think, dear listeners out there, you probably don't hear any of this, the vast majority of it, because we have wonderful editors behind the scenes, Thom Obarski for many years, and now Mandy Moore, who's been with us for a while. And so once again, thank you so much to the editor team that allows us to, I think, again, feel safe in this conversation that we can say whatever feels true and then know that we'll be able to switch that around. So thank you so much to the editors who help us out and make us sound better than we are. STEPH: Yeah, that has made a big difference in my capabilities to podcast. If we were doing this live, ooh goodness, this might be a whole different, weird show. [laughs] CHRIS: I mean, the same is true for code, right? I deeply value the ability to make an absolute mess in my local editor and have nine different commits that eventually I throw two out. And then I revert that file, and then eventually, the PR that I put up that's my Instagram selfie. That's like, I carefully curated this, but what's behind the scenes it's just a pile of trash. So yeah, the ability to separate the creation and the editing that's a meaningful thing to have in life. STEPH: Oh, I can't unsee that now. 
[laughs] A pull request is now the equivalent of that curated Instagram selfie. That is beautiful. [laughs] CHRIS: To be clear, I don't think I've ever taken an Instagram selfie. But I get the idea, and I felt like it was an analogy that would work. Again, I try out analogies on this show, and many of them do not stick. But I think that one is all right. STEPH: It might even go back to pairing because then you've got help in taking that picture. So hey, you're making a mess with somebody until you get that right perfect thing, and then you push it up for the world to see. So safe spaces for all the activities, I think that's the takeaway. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
About Chris
Chris is the Co-founder and Chief Product Officer at incident.io, where they're building incident management products that people actually want to use. A software engineer by trade, Chris is no stranger to gnarly incidents, having participated in (and caused!) them at everything from early-stage startups through to enormous IT organizations.
Links Referenced:
incident.io: https://incident.io
Practical Guide to Incident Management: https://incident.io/guide/
Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: DoorDash had a problem. As their cloud-native environment scaled and developers delivered new features, their monitoring system kept breaking down. In an organization where data is used to make better decisions about technology and about the business, losing observability means the entire company loses their competitive edge. With Chronosphere, DoorDash is no longer losing visibility into their applications suite. The key? Chronosphere is an open-source compatible, scalable, and reliable observability solution that gives the observability lead at DoorDash business, confidence, and peace of mind. Read the full success story at snark.cloud/chronosphere. That's snark.cloud slash C-H-R-O-N-O-S-P-H-E-R-E. Corey: Let's face it, on-call firefighting at 2am is stressful! So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster and sleep more. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's promoted guest is Chris Evans, who's the CPO and co-founder of incident.io. Chris, first, thank you very much for joining me. And I'm going to start with an easy question—well, easy question, hard answer, I think—what is an incident.io exactly? Chris: Incident.io is a software platform that helps entire organizations to respond to, recover from, and learn from incidents. Corey: When you say incident, that means an awful lot of things. And depending on where you are in the ecosystem in the world, that means different things to different people. For example, oh, incident. Like, "Are you talking about the noodle incident because we had an agreement that we would never speak about that thing again," style, versus folks who are steeped in DevOps or SRE culture, which is, of course, a fancy way to say those who are sad all the time, usually about computers. What is an incident in the context of what you folks do? Chris: That, I think, is the killer question. I think if you look at organizations in the past, I think incidents were those things that happened once a quarter, maybe once a year, and they were the thing that brought the entirety of your site down because your big central database that was in a data center sort of disappeared. The way that modern companies run means that the definition has to be very, very different.
So, most places now rely on distributed systems and there is no, sort of, binary sense of up or down these days. And essentially, in the general case, like, most companies are continually in a sort of state of things being broken all of the time. And so, for us, when we look at what an incident is, it is essentially anything that takes you away from your planned work with a sense of urgency. And that's the sort of the pithy definition that we use there. Generally, that can mean anything—it means different things to different folks, and, like, when we talk to folks, we encourage them to think carefully about what that threshold is, but generally, for us at incident.io, that means basically a single error that is worthwhile investigating, that you would stop doing your backlog work for, is an incident. And also an entire app being down, that is an incident. So, there's quite a wide range there. But essentially, by sort of having more incidents and lowering that threshold, you suddenly have a heap of benefits, which I can go very deep into and talk for hours about. Corey: It's a deceptively complex question. When I talk to folks about backups, one of the biggest problems in the world of backup and building a DR plan, it's not building the DR plan—though that's no picnic either—it's okay. In the time of cloud, all your planning figures out, okay. Suddenly the site is down, how do we fix it? There are different levels of down and that means different things to different people where, especially the way we build apps today, it's not is the service or site up or down, but with distributed systems, it's how down is it? And oh, we're seeing elevated error rates in the us-tire-fire-1 region of AWS. At what point do we begin executing on our disaster plan? Because the worst answer, in some respects, is, every time you think you see a problem, you start failing over to other regions and other providers and the rest, and three minutes in, you've irrevocably made the cutover and it's going to take 15 minutes to come back up. And oh, yeah, then your primary site comes back up because whoever unplugged something plugged it back in, and now you've made the wrong choice. Figuring out all the things around the incident, it's not what it once was. When you were running your own blog on a single web server and it's broken, it's pretty easy to say, "Is it up or is it down?" As you scale out, it seems like that gets more and more diffuse. But it feels to me that it's also less of a question of how the technology has scaled, but also how the culture and the people have scaled. When you're the only engineer somewhere, you pretty much have no choice but to have the entire state of your stack shoved into your head. When that becomes 15 or 20 different teams of people, in some cases, it feels like it's almost less of a technology problem than it is a problem of how you communicate and how you get people involved. And of getting the issues in front of the people who are empowered and insightful in a certain area that needs fixing. Chris: A hundred percent. This is, like, a really, really key point, which is that organizations themselves are very complex. And so, you've got this combination of systems getting more and more complicated, more and more sort of things going wrong and perpetually breaking, but you've got very, very complicated information structures and communication throughout the whole organization to keep things up and running.
The very best orgs are the ones where they can engage the entire organization, sort of, every corner of it, when things do go wrong. And I lived and breathed this firsthand at various different previous companies, but most recently at Monzo—which is a bank here in the UK. When an incident happened there, like, one of our two physical data center locations went down, the bank wasn't offline. Everything was resilient to that, but that required an immediate response. And that meant that engineers were deployed to go and fix things. But it also meant the customer support folks might be required to get involved because we might be slightly slower processing payments. And it means that risk and compliance folks might need to get involved because they need to be reporting things to regulators. And the list goes on. There's, like, this need for a bunch of different people who almost certainly have never worked together or rarely worked together to come together, land in this sort of like empty space of this incident room or virtual incident room, and figure out how they're going to coordinate their response and get things back on track in the sort of most streamlined way and as quickly as possible. Corey: Yeah, when your bank is suddenly offline, that seems like a really inopportune time to be introduced to the database team. It's, "Oh, we have one of those. Wonderful. I feel like you folks are going to come in handy later today." You want to have those pathways of communication open well in advance of these issues. Chris: A hundred percent. And I think the thing that makes incidents unique is that fact. And I think the solution to that is this sort of consistent, level playing field that you can put everybody on. So, if everybody understands that the way that incidents are dealt with is consistent, we declare it like this, and under these conditions, these things happen. And, you know, if I flag this kind of level of impact, we have to pull in someone else to come and help make a decision. At the core of it, there's this weird kind of duality to incidents where they are both kind of semi-formulaic, in that you can basically encode a lot of the processes that happen, but equally, they are incredibly chaotic and require a lot of human input to be resilient and figure these things out because stuff that you have never seen happen before is happening and failing in ways that you never predicted. And so, where incident.io plays into this is that we try to take the first half of that off of your hands, which is, we will help you run your process so that all of the brain capacity you have goes on to the bit that humans are uniquely placed to be able to do, which is responding to these very, very chaotic, sort of, surprise events that have happened. Corey: I feel as well—because I played around in this space a bit before; I used to run ops teams—and, more or less, I really should have had a t-shirt then that said, "I am the root cause," because yeah, I basically did a lot of self-inflicted outages in various environments because it turns out, I'm not always the best with computers. Imagine that. There are a number of different companies that play in the space that look at some part of the incident lifecycle. And from the outside, first, they all look alike because it's, "Oh, so you're incident.io. I assume you're PagerDuty.
You're the thing that calls me at two in the morning to make sure I wake up.”Conversely, for folks who haven't worked deeply in that space, as well, of setting things on fire, what you do sounds like it's highly susceptible to the Hacker News problem. Where, “Wait, so what you do is effectively just getting people to coordinate and talk during an incident? Well, that doesn't sound hard. I could do that in a weekend.” And no, no, you can't.If this were easy, you would not have been in business as long as you have, have the team the size that you do, the customers that you do. But it's one of those things that until you've been in a very specific set of a problem, it doesn't sound like it's a real problem that needs solving.Chris: Yeah, I think that's true. And I think that the Hacker News point is a particularly pertinent one and that someone else, sort of, in an adjacent area launched on Hacker News recently, and the amount of feedback they got around, you know, “You're a Slack bot. How is this a company?” Was kind of staggering. And I think generally where that comes from is—well, first of all that bias that engineers have, which is just everything you look at as an engineer is like, “Yeah, I can build that in a weekend.” I think there's often infinite complexity under the hood that just gets kind of brushed over. But yeah, I think at the core of it, you probably could build a Slack bot in a weekend that creates a channel for you in Slack and allows you to post somewhere that some—Corey: Oh, good. More channels in Slack. Just when everyone wants.Chris: Well, there you go. I mean, that's a particular pertinent one because, like, our tool does do that. And one of the things—so I built at Monzo, a version of incident.io that we used at the company there, and that was something that I built evenings and weekends. And among the many, many things I never got around to building, archiving and cleaning up channels was one of the ones that was always on that list.And so, Monzo did have this problem of littered channels everywhere, I think that sort of like, part of the problem here is, like, it is easy to look at a product like ours and sort of assume it is this sort of friendly Slack bot that helps you orchestrate some very basic commands. And I think when you actually dig into the problems that organizations above a certain size have, they're not solved by Slack bots. They're solved by platforms that help you to encode your processes that otherwise have to live on a Google Doc somewhere which is five pages long and when it's 2 a.m. and everything's on fire, I guarantee you not a single person reads that Google Doc, so your process is as good as not in place at all. That's the beauty of a tool like ours. We have a powerful engine that helps you basically to encode that and take some load off of you.Corey: To be clear, I'm also not coming at this from a position of judging other people. I just look right now at the Slack workspace that we have The Duckbill Group, and we have something like a ten-to-one channel-to-human ratio. And the proliferation of channels is a very real thing. And the problem that I've seen across the board with other things that try to address incident management has always been fanciful at best about what really happens when something breaks. Like, you talk about, oh, here's what happens. 
Step one: you will pull up the Google Doc, or you will pull up the wiki or the rest, or in some aspirational places, ah, something seems weird, I will go open a ticket in Jira.Meanwhile, here in reality, anyone who's ever worked in these environments knows that step one, “Oh shit, oh shit, oh shit, oh shit, oh shit. What are we going to do?” And all the practices and procedures that often exist, especially in orgs that aren't very practiced at these sorts of things, tend to fly out the window and people are going to do what they're going to do. So, any tool or any platform that winds up addressing that has to accept the reality of meeting people where they are not trying to educate people into different patterns of behavior as such. One of the things I like about your approach is, yeah, it's going to be a lot of conversation in Slack that is a given we can pretend otherwise, but here in reality, that is how work gets communicated, particularly in extremis. And I really appreciate the fact that you are not trying to, like, fight what feels almost like a law of nature at this point.Chris: Yeah, I think there's a few things in that. The first point around the document approach or the clearly defined steps of how an incident works. In my experience, those things have always gone wrong because—Corey: The data center is down, so we're going to the wiki to follow our incident management procedure, which is in the data center just lost power.Chris: Yeah.Corey: There's a dependency problem there, too. [laugh].Chris: Yeah, a hundred percent. [laugh]. A hundred percent. And I think part of the problem that I see there is that very, very often, you've got this situation where the people designing the process are not the people following the process. And so, there's this classic, I've heard it through John Allspaw, but it's a bunch of other folks who talk about the difference between people, you know, at the sharp end or the blunt end of the work.And I think the problem that people are facing the past is you have these people who sit in the, sort of, metaphorical upstairs of the office and think that they make a company safe by defining a process on paper. And they ship the piece of paper and go, “That is a good job for me done. I'm going to leave and know that I've made the bank—the other whatever your organization does—much, much safer.” And I think this is where things fall down because—Corey: I want to ambush some of those people in their performance reviews with, “Cool. Just for fun, all the documentation here, we're going to pull up the analytics to see how often that stuff gets viewed. Oh, nobody ever sees it. Hmm.”Chris: It's frustrating. It's frustrating because that never ever happens, clearly. But the point you made around, like, meeting people where you are, I think that is a huge one, which is incidents are founded on great communication. Like, as I said earlier, this is, like, a form of team with someone you've never ever worked with before and the last thing you want to do is be, like, “Hey, Corey, I've never met you before, but let's jump out onto this other platform somewhere that I've never been or haven't been for weeks and we'll try and figure stuff out over there.” It's like, no, you're going to be communicating—Corey: We use Slack internally, but we have a WhatsApp chat that we wind up using for incident stuff, so go ahead and log into WhatsApp, which you haven't done in 18 months, and join the chat. 
Yeah, in the dawn of time, in the mists of antiquity, you vaguely remember hearing something about that your first week and then never again. This stuff has to be practiced and it's important to get it right. How do you approach the inherent and often unfortunate reality that incident response and management becomes very different depending upon the specifics of your company or your culture or something like that? In other words, how cookie-cutter is what you have built versus adaptable to the different environments it finds itself operating in?
Chris: Man, the amount of time we spent as a founding team in the early days deliberating over how opinionated we should be versus how flexible we should be was staggering. The way we like to describe it is that we are quite opinionated about how we think incidents should be run; however, we let you imprint your own process onto that. So, putting some color on that: we expect incidents to have a lead. That is something you cannot get away from. However, you can call the lead whatever makes sense for you at your organization. So, some folks call them an incident commander or a manager or whatever else.
Corey: There's an overwhelming militarization of these things. Like, oh, yes, we're going to wind up taking a bunch of terms from the military here. It's like, you realize that your entire giant screaming fire is that the lights on the screen are in the wrong pattern. You're trying to get them into the right pattern. No one dies here in most cases, so it feels a little grandiose for some of those terms being tossed around, but I get it. You've got to make something that is unpleasant and tedious in many respects a little bit more gripping. I don't envy people. Messaging is hard.
Chris: Yeah, it is. And I think if you're overly prescriptive and inflexible, you're sort of fighting an uphill battle here, right? So, folks are going to want to call things what they want to call things. And you've got people who want to import [ITIL 00:15:04] definitions for severities into the platform because that's what they're familiar with. That's fine. What we are opinionated about is that you have some severity levels because, absent academic criticism of severity levels, they are a useful mechanism to very coarsely and very quickly assess how bad something is and to take some actions off of it. So yeah, we basically have various points in the product where you can customize and put your own sort of flavor on it, but generally, we have a relatively opinionated end-to-end expectation of how you will run that process.
Corey: The thing that I find annoys me—in some cases—the most is how heavyweight the process is, and it's clearly built by people in an ivory tower somewhere where there's effectively a two-day-long postmortem analysis of the incident, and so on and so forth. And okay, great. Your entire site has been blown off the internet, yeah, that probably makes sense. But as soon as you start broadening that to things like, okay, an increase in 500 errors on this service for 30 minutes, "Great. Well, we're going to have a two-day postmortem on that." It's, "Yeah, sure would be nice if we could go two full days without having another incident of that caliber." So, in other words, whose foot—are we going to hire a new team whose full-time job it is to just go ahead and triage and learn from all these incidents?
Seems to me like that's sort of throwing wood behind the wrong arrows.Chris: Yeah, I think it's very reductive to suggest that learning only happens in a postmortem process. So, I wrote a blog, actually, not so long ago that is about running postmortems and when it makes sense to do it. And as part of that, I had a sort of a statement that was [laugh] that we haven't run a single postmortem when I wrote this blog at incident.io. Which is probably shocking to many people because we're an incident company, and we talk about this stuff, but we were also a company of five people and when something went wrong, the learning was happening and these things were sort of—we were carving out the time, whether it was called a postmortem, or not to learn and figure out these things. Extrapolating that to bigger companies, there is little value in following processes for the sake of following processes. And so, you could have—Corey: Someone in compliance just wound up spitting their coffee over their desktop as soon as you said that. But I hear you.Chris: Yeah. And it's those same folks who are the ones who care about the document being written, not the process and the learning happening. And I think that's deeply frustrating to me as—Corey: All the plans, of course, assume that people will prioritize the company over their own family for certain kinds of disasters. I love that, too. It's divorced from reality; that's ridiculous, on some level. Speaking of ridiculous things, as you continue to grow and scale, I imagine you integrate with things beyond just Slack. You grab other data sources and over in the fullness of time.For example, I imagine one of your most popular requests from some of your larger customers is to integrate with their HR system in order to figure out who's the last engineer who left, therefore everything immediately their fault because lord knows the best practice is to pillory whoever was the last left because then they're not there to defend themselves anymore and no one's going to get dinged for that irresponsible jackass's decisions, even if they never touched the system at all. I'm being slightly hyperbolic, but only slightly.Chris: Yeah. I think [laugh] that's an interesting point. I am definitely going to raise that feature request for a prefilled root cause category, which is, you know, the value is just that last person who left the organization. That it's a wonderful scapegoat situation there. I like it.To the point around what we do integrate with, I think the thing is actually with incidents that's quite interesting is there is a lot of tooling that exists in this space that does little pockets of useful, valuable things in the shape of incidents. So, you have PagerDuty is this system that does a great job of making people's phone making noise, but that happens, and then you're dropped into this sort of empty void of nothingness and you've got to go and figure out what to do. And then you've got things like Jira where clearly you want to be able to track actions that are coming out of things going wrong in some cases, and that's a great tool for that. And various other things in the middle there. And yeah, our value proposition, if you want to call it that, is to bring those things together in a way that is massively ergonomic during an incident.So, when you're in the middle of an incident, it is really handy to be able to go, “Oh, I have shipped this horrible fix to this thing. 
It works, but I must remember to undo that.” And we put that at your fingertips in an incident channel from Slack, that you can just log that action, lose that cognitive load that would otherwise be there, move on with fixing the thing. And you have this sort of—I think it's, like, that multiplied by 1000 in incidents that is just what makes it feel delightful. And I cringe a little bit saying that because it's an incident at the end of the day, but genuinely, it feels magical when some things happen that are just like, “Oh, my gosh, you've automatically hooked into my GitHub thing and someone else merged that PR and you've posted that back into the channel for me so I know that that happens. That would otherwise have been a thing where I jump out of the incident to go and figure out what was happening.”Corey: This episode is sponsored in part by our friend EnterpriseDB. EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years. And now EnterpriseDB has you covered wherever you deploy PostgreSQL on-premises, private cloud, and they just announced a fully-managed service on AWS and Azure called BigAnimal, all one word. Don't leave managing your database to your cloud vendor because they're too busy launching another half-dozen managed databases to focus on any one of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB. They can save you time and money, they can even help you migrate legacy applications—including Oracle—to the cloud. To learn more, try BigAnimal for free. Go to biganimal.com/snark, and tell them Corey sent you.Corey: The problem with the cloud, too, is the first thing that, when there starts to be an incident happening is the number one decision—almost the number one decision point is this my shitty code, something we have just pushed in our stuff, or is it the underlying provider itself? Which is why the AWS status page being slow to update is so maddening. Because those are two completely different paths to go down and you are having to pursue both of them equally at the same time until one can be ruled out. And that is why time to identify at least what side of the universe it's on is so important. That has always been a bit of a tricky challenge.I want to talk a bit about circular dependencies. You target a certain persona of customer, but I'm going to go out on a limb and assume that one explicit company that you are not going to want to do business with in your current iteration is Slack itself because a tool to manage—okay, so our service is down, so we're going to go to Slack to fix it doesn't work when the service is Slack itself. So, that becomes a significant challenge. As you look at this across the board, are you seeing customers having problems where you have circular dependency issues with this? Easy example: Slack is built on top of AWS.When there's an underlying degradation of, huh, suddenly us-east-1 is not doing what it's supposed to be doing, now, Slack is degraded as well, as well as the customer site, it seems like at that point, you're sort of in a bit of tricky positioning as a customer. Counterpoint, when neither Slack nor your site are working, figuring out what caused that issue doesn't seem like it's the biggest stretch of the imagination at that point.Chris: I've spent a lot of my career working in infrastructure, platform-type teams, and I think you can end up tying yourself in knots if you try and over-optimize for, like, avoiding these dependencies. 
I think it's one of those, sort of, turtles all the way down situations. So yes, Slack are unlikely to become a customer because they are clearly going to want to use our product when they are down.
Corey: They reach out, "We'd like to be your customer." Your response is, "Please don't be." None of us are going to be happy with this outcome.
Chris: Yeah, I mean, the interesting thing is that we're friends with some folks at Slack, and, believe it or not, they do use Slack to navigate their incidents. They have an internal tool that they have written. And I think this sort of speaks to the point we made earlier, which is that incidents and things failing are not these sort of big binary events. And so—
Corey: All of Slack is down is not the only kind of incident that a company like Slack can experience.
Chris: I'd go as far as to say it's most commonly not that. It's most commonly that you're navigating incidents where it is a degradation, or some edge case, or something else that's happened. And so, like, the pragmatic solution here is not to avoid the circular dependencies, in my view; it's to accept that they exist and make sure you have sensible escape hatches so that when something does go wrong—so a good example, we use incident.io at incident.io to manage incidents that we're having with incident.io. And 99% of the time, that is absolutely fine because we are having some error in some corner of the product or a particular customer is doing something that is a bit curious. And I could count literally on one hand the number of times that we have not been able to use our product to fix our product. And in those cases, we have a fallback, which is jump into—
Corey: I assume you put a little thought into what happened. "Well, what if our product is down?" "Oh well, I guess we'll never be able to fix it or communicate about it." It seems like that's the sort of thing that, given what you do, you might have put more than ten seconds of thought into.
Chris: We've put a fair amount of thought into it. But at the end of the day, [laugh] it's like, if stuff is down, like, what do you need to do? You need to communicate with people. So, jump on a Google Chat, jump on a Slack huddle, whatever else it is—we have various different, like, fallbacks, in order. And at the core of it, I think the thing is, like, you cannot be prepared for every single thing going wrong, and so what you can be prepared for is to be unprepared and just accept that humans are incredibly good at being resilient, and therefore, all manner of things are going to happen that you've never seen before and I guarantee you will figure them out and fix them, basically. But yeah, I say this; if my SOC 2 auditor is listening, we also do have a very well-defined, like, backup plan in our SOC 2 [laugh] policies and processes that is the thing that we will follow. But yeah.
Corey: The fact that you're saying the magic words of SOC 2, yes, exactly. Being a responsible adult and living up to some baseline compliance obligations is really the sign of a company that's put a little thought into these things.
So, as I pull up incident.io—the website, not the company to be clear—and look through what you've written and how you talk about what you're doing, you've avoided what I would almost certainly have not because your tagline front and center on your landing page is, “Manage incidents at scale without leaving Slack.” If someone were to reach out and say, well, we're down all the time, but we're using Microsoft Teams, so I don't know that we can use you, like, the immediate instinctive response that I would have for that to the point where I would put it in the copy is, “Okay, this piece of advice is free. I would posit that you're down all the time because you're the kind of company to use Microsoft Teams.” But that doesn't tend to win a whole lot of friends in various places. In a slightly less sarcastic bent, do you see people reaching out with, “Well, we want to use you because we love what you're doing, but we don't use Slack.”Chris: Yeah. We do. A lot of folks actually. And we will support Teams one day, I think. There is nothing especially unique about the product that means that we are tied to Slack.It is a great way to distribute our product and it sort of aligns with the companies that think in the way that we do in the general case but, like, at the core of what we're building, it's a platform that augments a communication platform to make it much easier to deal with a high-stress, high-pressure situation. And so, in the future, we will support ways for you to connect Microsoft Teams or if Zoom sought out getting rich app experiences, talk on a Zoom and be able to do various things like logging actions and communicating with other systems and things like that. But yeah, for the time being very, very deliberate focus mechanism for us. We're a small company with, like, 30 people now, and so yeah, focusing on that sort of very slim vertical is working well for us.Corey: And it certainly seems to be working to your benefit. Every person I've talked to who is encountered you folks has nothing but good things to say. We have a bunch of folks in common listed on the wall of logos, the social proof eye chart thing of here's people who are using us. And these are serious companies. I mean, your last job before starting incident.io was at Monzo, as you mentioned.You know what you're doing in a regulated, serious sense. I would be, quite honestly, extraordinarily skeptical if your background were significantly different from this because, “Well, yeah, we worked at Twitter for Pets in our three-person SRE team, we can tell you exactly how to go ahead and handle your incidents.” Yeah, there's a certain level of operational maturity that I kind of just based upon the name of the company there; don't think that Twitter for Pets is going to nail. Monzo is a bank. Guess you know what you're talking about, given that you have not, basically, been shut down by an army of regulators. It really does breed an awful lot of confidence.But what's interesting to me is the number of people that we talk to in common are not themselves banks. Some are and they do very serious things, but others are not these highly regulated, command-and-control, top-down companies. You are nimble enough that you can get embedded at those startup-y of startup companies once they hit a certain point of scale and wind up helping them arrive at a better outcome. 
It's interesting in that you don't normally see a whole lot of tools that wind up being able to speak to both sides of that very broad spectrum—and most things in between—very effectively. But you've somehow managed to thread that needle. Good work.Chris: Thank you. Yeah. What else can I say other than thank you? I think, like, it's a deliberate product positioning that we've gone down to try and be able to support those different use cases. So, I think, at the core of it, we have always tried to maintain the incident.io should be installable and usable in your very first incident without you having to have a very steep learning curve, but there is depth behind it that allows you to support a much more sophisticated incident setup.So, like, I mean, you mentioned Monzo. Like, I just feel incredibly fortunate to have worked at that company. I joined back in 2017 when they were, I don't know, like, 150,000 customers and it was just getting its banking license. And I was there for four years and was able to then see it scale up to 6 million customers and all of the challenges and pain that goes along with that both from building infrastructure on the technical side of things, but from an organizational side of things. And was, like, front-row seat to being able to work with some incredibly smart people and sort of see all these various different pain points.And honestly, it feels a little bit like being in sort of a cheat mode where we get to this import a lot of that knowledge and pain that we felt at Monzo into the product. And that happens to resonate with a bunch of folks. So yeah, I feel like things are sort of coming out quite well at the moment for folks.Corey: The one thing I will say before we wind up calling this an episode is just how grateful I am that I don't have to think about things like this anymore. There's a reason that the problem that I chose to work on of expensive AWS bills being very much a business-hours only style of problem. We're a services company. We don't have production infrastructure that is externally facing. “Oh, no, one of our data analysis tools isn't working internally.”That's an interesting curiosity, but it's not an emergency in the same way that, “Oh, we're an ad network and people are looking at ads right now because we're broken,” is. So, I am grateful that I don't have to think about these things anymore. And also a little wistful because there's so much that you do it would have made dealing with expensive and dangerous outages back in my production years a lot nicer.Chris: Yep. I think that's what a lot of folks are telling us essentially. There's this curious thing with, like, this product didn't exist however many years ago and I think it's sort of been quite emergent in a lot of companies that, you know, as sort of things have moved on, that something needs to exist in this little pocket of space, dealing with incidents in modern companies. So, I'm very pleased that what we're able to build here is sort of working and filling that for folks.Corey: Yeah. I really want to thank you for taking so much time to go through the ethos of what you do, why you do it, and how you do it. If people want to learn more, where's the best place for them to go? Ideally, not during an incident.Chris: Not during an incident, obviously. Handily, the website is the company name. So, incident.io is a great place to go and find out more. 
We've literally—literally just today, actually—launched our Practical Guide to Incident Management, which is, like, a really full piece of content which, hopefully, will be useful to a bunch of different folks.
Corey: Excellent. We will, of course, put a link to that in the [show notes 00:29:52]. I really want to thank you for being so generous with your time. Really appreciate it.
Chris: Thanks so much. It's been an absolute pleasure.
Corey: Chris Evans, Chief Product Officer and co-founder of incident.io. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice along with an angry comment telling me why your latest incident is all the intern's fault.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
About Chris
Chris is a robotics engineer turned cloud security practitioner. From building origami robots for NASA, to neuroscience wearables, to enterprise software consulting, he is a passionate builder at heart. Chris is a cofounder of Common Fate, a company with a mission to make cloud access simple and secure.
Links:
Common Fate: https://commonfate.io/
Granted: https://granted.dev
Twitter: https://twitter.com/chr_norm
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: Let's face it, on-call firefighting at 2am is stressful! So there's good news and there's bad news. The bad news is that you probably can't prevent incidents from happening, but the good news is that incident.io makes incidents less stressful and a lot more valuable. incident.io is a Slack-native incident management platform that allows you to automate incident processes, focus on fixing the issues, and learn from incident insights to improve site reliability and fix your vulnerabilities. Try incident.io, recover faster, and sleep more.
Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. It doesn't matter where you are on your journey in cloud—you could never have heard of Amazon the bookstore—and you encounter AWS and you spin up an account. And within 20 minutes, you will come to the realization that everyone in this space does. "Wow, logging in to AWS absolutely blows goats." Today, my guest obviously had that reaction, but unlike most people I talked to, decided to get up and do something about it. Chris Norman is the co-founder of Common Fate and, most notably for how I know him, one of the original authors of the tool Granted. Chris, thank you so much for joining me.
Chris: Hey, Corey, thank you for having me.
Corey: I have done podcasts before; I have done a blog post on it; I evangelize it on Twitter constantly, and even now, it is challenging in a few ways to explain holistically what Granted is. Rather than trying to tell your story for you, when someone says, "Oh, Granted, that seems interesting and impossible to Google for in isolation, so therefore, we know it's going to be good because all the open-source projects with hard-to-find names are," what is Granted and what does it do?
Chris: Granted is a command-line tool which makes it really easy for you to get access and assume roles when you're working with AWS.
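As an aside for readers who want a concrete picture of what a tool in this space is wrapping, the sketch below shows the basic AWS IAM Identity Center (SSO) credential exchange using boto3. This is not Granted's actual implementation (Granted is written in Go and layers device authorization, token caching, and profile selection on top of this), and the account ID, role name, and token value here are placeholders.

```python
import boto3

# Rough sketch only: exchange an already-obtained IAM Identity Center (SSO)
# access token for short-lived credentials in one target account and role.
sso = boto3.client("sso", region_name="us-east-1")  # assumed SSO region

creds = sso.get_role_credentials(
    accountId="111111111111",          # placeholder account ID
    roleName="AdministratorAccess",    # placeholder permission set / role
    accessToken="<cached SSO access token>",
)["roleCredentials"]

# These three values are what ultimately land in your shell or browser session.
print(creds["accessKeyId"], creds["secretAccessKey"], creds["sessionToken"])
```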
For me, when I'm using Granted day-to-day, I wake up, go to my computer—I'm working from home right now—crack open the MacBook and I log in and do some development work. I'm going to go and start working in the cloud.
Corey: Oh, when I start first thing in the morning doing development work and logging into the cloud, I know. All right, I'm going to log in to AWS and now I know that my day is going downhill from here.
Chris: [laugh]. Exactly, exactly. I think maybe the best days are when you don't need to log in at all. But when you do, I go and I open my terminal and I run this command. Using Granted, I run this assume command and it authenticates me with single sign-on into AWS, and then it opens up a console window in a particular account. Now, you might ask, "Well, that's a fairly standard thing." And in fact, that's probably the way that the console and all of the tools work by default with AWS. Why do you need a third-party tool for this?
Corey: Right. I've used a bunch of things that do varying forms of this and, unlike Granted, you don't see me gushing about them. I want to be very clear, we have no business relationship. You're not sponsoring anything that I do. I'm not entirely clear on what your day job entails, but I have absolutely fallen in love with the Granted tool, which is why I'm dragging you on to this show, kicking and screaming, mostly to give me an excuse to rave about it some more.
Chris: [laugh]. Exactly. And thank you for the kind words. And I'd say really what makes it special, or why I've been so excited to be working on it, is that it makes this access, particularly when you're working with multiple accounts, really, really easy. So, when I run assume and I open up that console window, you know, that's all fine and that's very similar to how a lot of the other tools and projects that are out there work, but when I want to open that second account and that second console window, maybe because I'm looking at, like, a development and a staging account at the same time, then Granted allows me to view both of those simultaneously in my browser. And we do that using some platform sort of tricks and building into the way that the browser works.
Corey: Honestly, one of the biggest differences in how you describe what Granted is and how I view it is when you describe it as a CLI application because, yes, it is that, but one of the distinguishing characteristics is you also have a Firefox extension that winds up leveraging the multi-container functionality that Firefox has. So, whenever I wind up running a single command—assume with a '-c' flag, then I give it the name of my AWS profile—it opens the web console so I can ClickOps to my heart's content inside of a tab that is locked to a container, which means I can have one or two or twenty different AWS accounts and/or regions up and running simultaneously side-by-side, which is basically impossible any other way that I've ever looked at it.
Chris: Absolutely, yeah. And that's, like, the big differentiating factor right now between Granted and this sort of default, the native experience, if you're just using the AWS command line by itself. With Granted, you can—with these Firefox containers, all of your cookies, your profile, everything is all localized into that one container. It's actually a privacy feature that's built into Firefox, which keeps everything really separate between your different profiles.
And what we're doing with Granted is that we make it really easy to open specific profiles that correspond with the different AWS profiles that you're using. So, you'd have one which could be your development account, one which could be production or staging. And you can jump between these and navigate between them just as separate tabs in your browser, which is a massive improvement over, you know, what I've previously had to use in the past.
Corey: The thing that really just strikes me about this is, first, of course, the functionality and the rest. So, I saw this—I forget how I even came across it—and immediately I started using it. On my Mac, it was great. I started using it when I was on the road, and it was less great because you built this thing in Go. It can compile and install on almost anything, but there were some assumptions that you had built into this in its early days that did not necessarily encompass all of the use cases that I use. For example, it hadn't really occurred to you that some lunatic would try and only use an iPad when they're on the road, so they have to be able to run this to get federated login links via SSHing into an EC2 instance running somewhere and not have it open locally. You seemed almost taken aback when I brought it up. Like, "What lunatic would do that?" Like, "Hi, I'm such a lunatic. Let's talk about this." And it does that now, and it's awesome. It does seem to me, though—and please correct me if I'm wrong on this assumption-slash-assessment—that this is first and foremost aimed at desktop users, specifically people running Mac on the desktop. Is that the genesis of it?
Chris: It is indeed. And I think part of the cause behind that is that we originally built a tool for ourselves. And as we were building things and as we were working using the cloud, we were running things—you know, we like to think that we're following best practices when we're using AWS, and so we'd set up multiple accounts, we'd have a special account for development, a separate one for staging, a separate one for production; even internal tools that we would build, we would go and spin up an individual account for those. And then, you know, we had lots of accounts, and to go and access those really easily was quite difficult. So, we definitely built it for ourselves first, and I think that's part of why, when we released it, it actually caused a little bit of the initial problems. And some of the feedback that we had was that it's great to build tools for yourself, but when you're working in open-source, there's a lot of different diversity with how people are using things.
Corey: We take different approaches. You want to try to align with existing best practices, whereas I am a loudmouth white guy who works in tech. So, what I do definitionally becomes a best practice in the ecosystem. It's easier to just comport with the ones that are already existing that smart people put together rather than just trying to competence your way through it, so you took a better path than I did. But there's been a lot of evolution to Granted as I've been using it for a while.
I did a whole write-up on it and that got a whole bunch of eyes onto the project, which I can now admit was a nefarious plan on my part because popping into your community Slack and yelling at you for features I want was all well and good, but let's try and get some people with eyes on this who are smarter than me—which is not that high of a bar when it comes to SSO, and IAM, and federated login, and the rest—and they can start finding other enhancements that I'll probably benefit from. And sure enough, that's exactly what happened. My sneaky plan has come to fruition. Thanks for being a sucker, I guess. I mean—[laugh] it worked. I'm super thrilled by the product.
Chris: [laugh]. I guess it's a great thing. I think that the feedback—and particularly something that's always been really exciting—is just seeing new issues come through on GitHub, because it really shows the kinds of interesting use cases and the kinds of interesting teams and companies that are using Granted to make their lives a little bit easier.
Corey: When I go to the website—which, again, is impossible to Google—the website, for those wondering, is granted.dev. It's short, it's concise, I can say it on a podcast and people automatically know how to spell it. But at the top of the website—which is very well done, by the way—it mentions that, oh, you can, "Govern access to breakglass roles with Common Fate Cloud," and it also says in the drop shadow nonsense thing in the upper corner, "Brought to you by Common Fate," which is apparently the name of your company. So, the question I'll get to in a second is what does your company do, but first and foremost, is this going to be one of those rug-pull open-source projects where one day it's, "Oh, you want to log into your AWS accounts? Insert quarter to continue." I'm mostly being a little over the top with that description, but we've all seen things that we love turn into molten garbage. What is the plan around this? Are you about to ruin this for the rest of us once you wind up raising a round or something? What's the deal?
Chris: Yeah, it's a great question, Corey. And I think that, to a degree, releasing anything like this that sits in the access workflow and helps you assume roles and helps you day-to-day, you know, we have a responsibility to uphold stability and reliability here and to not change things. And I think part of, like, not changing things includes not [laugh] rug-pulling, as you've alluded to. And I think that for some companies, it ends up that open-source becomes, like, a kind of a lead-generation tool, or you end up with, you know, "now, finally, let's go and add another login so that you have to log into Common Fate to use Granted." And I think that, to be honest, with a tool like this where it's all about improving the speed of access, the incentives for us—like, it doesn't even make sense to try and add another login to get people to, say, log into Common Fate, because that would make your sign-in process for AWS take even longer than it already does.
Corey: Yeah, you decided that, you know, what's the biggest problem? Oh, you can sleep at night, so let's go ahead and make it even worse: now I want you to be this custodian of all my credentials to log into all of my accounts. And now you're going to be critical path, so if you're down, I'm not able to log into anything. And oh, by the way, I have to trust you with full access to my bank stuff.
I just can't imagine that is a direction that you would be super excited about diving head-first into.
Chris: No, no. Yeah, certainly not. And I think that, you know, building anything in this space—and with what we're doing with Common Fate, we're building a cloud platform to try to make IAM a little bit easier to work with—it's really sensitive around granting any kind of permission, and I think that you really do need that trust. So, trying to build trust, I guess, with our open-source projects is really important for us with Granted and with this project, that it's going to continue to be reliable and continue to work as it currently does.
Corey: The way I see it, one of the dangers of doing anything that is particularly open-source—or that leans in the direction of building in Amazon's ecosystem—it leads to the natural question of, well, isn't this just going to be, some people say, stolen—and I don't think those people understand how open-source works—by AWS themselves? Or aren't they going to build something themselves at AWS that's going to wind up stomping this thing that you've built? And my honest and remarkably cynical answer is that, "You have built a tool that is a joy to use, that makes logging into AWS accounts streamlined and efficient in a variety of different patterns. Does that really sound like something AWS would do?" And followed by, "I wish they would because everyone would benefit from that rising tide." I have to be very direct and very clear. Your product should not exist. This should be something the provider themselves handles. But nope. Instead, it has to exist. And while I'm glad it does, I also can't shake the feeling that I am incredibly annoyed by the fact that it has to.
Chris: Yeah. Certainly, certainly. And it's something that I think about a little bit. I like to wonder whether there's maybe, like, a single feature flag or some single sort of configuration setting in AWS where they're not allowing different tabs to access different accounts, they're not allowing this kind of concurrent access. And maybe if we make enough noise about Granted, maybe one of the engineers will go and flick that switch and they'll just enable it by default. And then Granted itself will be a lot less relevant, but for everybody who's using AWS, that'll be a massive win because the big draw of using Granted is mainly just around being able to access different accounts at the same time. If AWS let you do that out of the box, hey, that would be great and, you know, I'd have a lot less stuff to maintain.
Corey: Originally, I had you here to talk about Granted, but I took a glance at what you're actually building over at Common Fate and I'm about to basically hijack-slash-derail what probably is going to amount to the rest of this conversation, because you have a quick example on your site that's by developers, for developers. You show a quick Python script that tries to access an S3 bucket object and it's denied. You copy the error message, you paste it into what you're building over at Common Fate, and in return, it's like, "Oh. Yeah, this is the policy that fixes it. Do you want us to apply it for you?" And I just about fell out of my chair because I have been asking for this explicit thing for a very long time. And AWS doesn't do it. Their IAM access analyzer claims to. Like, "Oh, just go look at CloudTrail and see what permissions it uses and we'll build a policy to scope it down." "Okay. So, it's S3 access. Fair enough.
To what object or what bucket?" "Guess," is what it tells you there. And it's—this is crap. Who thinks this is a good user experience? You have built the thing that I wish AWS had built in natively. Because, let's be honest here, I do what an awful lot of people do and overscope permissions massively, just because messing around with the bare minimum set of permissions in many cases takes more time than building the damn thing in the first place.
Chris: Oh, absolutely. Absolutely. And in fact—this was a few years ago, when I was consulting—I had a really similar sort of story where one of the clients that we were working with, the CTO of this company, he was needing to grant us access to AWS and we were needing to build a particular service. And he said, "Okay, can you just let me know the permissions that you will need and I'll go and deploy the role for this." And I came back and I said, "Wait. I don't even know the permissions that I'm going to need because the damn thing isn't even built yet." So, we went sort of back and forth around this. And the compromise ended up just being, you know, way too much access. And that was sort of part of the inspiration for, you know, really this whole project and what we're building with Common Fate: just trying to make that feedback loop around getting to the right level of permissions a lot faster.
Corey: Yeah, I am just so overwhelmingly impressed by the fact that you have built—and please don't take this as a criticism—a set of very simple tools. Not simple in terms of, "Oh, that's, like, three lines of bash, and a fool could write that on a weekend." No. Simple in the sense that it solves a problem elegantly and well, and it's straightforward—well, as straightforward as anything in the world of access control goes—to wrap your head around exactly what it does. You don't tend to build these things by sitting around a table brainstorming with someone you met at a co-founder dating pool or something and wind up figuring out, "Oh, we should go and solve that. That sounds like a billion-dollar problem." This feels very much like the outcome of when you're sitting around talking to someone and let's start by drinking six beers so we become extraordinarily honest, followed immediately by let's talk about what sucks. What pisses you off the most? It feels like this is sort of the low-hanging fruit of things that upset people when it comes to AWS. I mean, if things had gone slightly differently, instead of focusing on AWS bills, IAM was next on my list of things to tackle just because I was tired of smacking my head into it. This is very clearly a problem space that you folks have analyzed deeply, worked within, and have put a lot of thought into. I want to be clear, I've thrown a lot of feature suggestions at you for Granted from start to finish. But all of them have been around interface stuff and usability and expanding use cases. None of them have been, "Well, that seems screamingly insecure." Because it hasn't been.
Chris: [laugh].
Corey: It has been effective, start to finish. I think that, from a security posture, you make terrific choices, in many cases better than ones I would have made starting from scratch myself. Everything that I'm looking at in what you have built is from a position of this is absolutely amazing and it is transformative to my own workflows. Now, how can we improve it?
Chris: Mmm. Thank you, Corey.
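The by-developers example Corey describes above (a Python script hits AccessDenied on S3, and the answer is a policy scoped to exactly what it touched) looks roughly like the sketch below. The bucket, key, and policy here are hypothetical illustrations of the least-privilege shape, not actual Common Fate output.

```python
import json
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # The failing call: the role running this has no S3 permissions yet.
    s3.get_object(Bucket="example-reports-bucket", Key="reports/2022/summary.csv")
except ClientError as err:
    print(err.response["Error"]["Code"])  # typically "AccessDenied"

# The least-privilege fix: allow only the action and resource that failed,
# rather than the usual overscoped shortcut of s3:* on *.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-reports-bucket/reports/*",
    }],
}
print(json.dumps(policy, indent=2))
```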
And I'll say as well, maybe around the security angle, that one of the goals with Granted was to try and do things a little bit better than the default way that AWS does them when it comes to security. And it's actually been a bit of a source for challenges with some of the users that we've been working with with Granted because one of the things we wanted to do was encrypt the SSO token. And this is the token that when you sign in to AWS, kind of like, it allows you to then get access to all of the rest of the accounts.So, it's like a pretty—it's a short-lived token, but it's a really sensitive one. And you know, by default, it's just stored in plain text on your disk. So, we dump to a file and, you know, anything that can go and read that, they can go and get it. It's also a little bit hard to revoke and to lock people out. There's not really great workflows around that on AWS's side.So, we thought, “Okay, great. One of the goals for Granted can be that we will go and store this in your keychain in your system and we'll work natively with that.” And that's actually been a cause for a little bit of a hassle for some users, though, because by doing that and by storing all of this information in the keychain, it's actually broken some of the integrations with the rest of the tooling, which kind of expects tokens and things to be in certain places. So, we've actually had to, as part of dealing with that with Granted, we've had to give users the ability to opt out for that.Corey: DoorDash had a problem. As their cloud-native environment scaled and developers delivered new features, their monitoring system kept breaking down. In an organization where data is used to make better decisions about technology and about the business, losing observability means the entire company loses their competitive edge. With Chronosphere, DoorDash is no longer losing visibility into their applications suite. The key? Chronosphere is an open-source compatible, scalable, and reliable observability solution that gives the observability lead at DoorDash business, confidence, and peace of mind. Read the full success story at snark.cloud/chronosphere. That's snark.cloud slash C-H-R-O-N-O-S-P-H-E-R-E.Corey: That's why I find this so, I think, just across the board, fantastic. It's you are very clearly engaged with your community. There's a community Slack that you have set up for this. And I know, I know, too many Slacks; everyone has this problem. This is one of those that is worth hanging in, at least from my perspective, just because one of the problems that you have, I suspect, is on my Mac it's great because I wind up automatically updating it to whatever the most recent one is every time I do a brew upgrade.But on the Linux side of the world, you've discovered what many of us have discovered, and that is that packaging things for Linux is a freaking disaster. The current installation is, “Great. Here's basically a curl bash.” Or, “Here, grab this tarball and install it.” And that's fine, but there's no real way of keeping that updated and synced.So, I was checking the other day, oh wow, I'm something like eight versions behind on this box. But it still just works. I upgraded. Oh, wow. There's new functionality here. This is stuff that's actually really handy. I like this quite a bit. Let's see what else we can do.I'm just so impressed, start to finish, by just how receptive you've been to various community feedbacks. 
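For context on the plain-text token Chris mentions a little earlier: by default, the AWS CLI caches SSO access tokens as JSON files under ~/.aws/sso/cache/, readable by anything running as your user. Below is a minimal sketch of what "anything that can go and read that can go and get it" means in practice; the path and field names reflect the AWS CLI's defaults as best I know them, and Granted's keychain storage exists to avoid exactly this exposure.

```python
import json
from pathlib import Path

# Default AWS CLI SSO token cache: plain-text JSON under your home directory.
cache_dir = Path.home() / ".aws" / "sso" / "cache"

if cache_dir.is_dir():
    for cache_file in cache_dir.glob("*.json"):
        data = json.loads(cache_file.read_text())
        token = data.get("accessToken")
        if token:
            # Any process running as your user can do this, which is the concern.
            print(cache_file.name, "expires:", data.get("expiresAt"), "token:", token[:12] + "...")
```

Moving that value into the system keychain means an attacker (or a careless script) can no longer read it with a one-line file read, at the cost of the integration friction Chris describes.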
And as well—I want to be very clear on this point, too—I've had folks who actually know what they're doing in an InfoSec sense look at what you're up to, and none of them had any issues of note. I'm sure that they have a pile of things like, with that curl bash, they should really be doing a GPG check. Yes, yes, fine. Whatever. If that's your target threat model, okay, great. Here in reality-land for what I do, this is awesome. And they don't seem to have any problems with, "Oh, yeah. By the way, sending analytics back up"—which, okay, fine, whatever. "And it's not disclosing them." Okay, that's bad. "And it's including the contents of your AWS credentials." Ahhhh. I did encounter something that was doing that on the back-end once. [cough]—Serverless Framework—sorry, something caught in my throat for a second.
Chris: [laugh].
Corey: No faster way I can think of to erode trust in that. But everything you're doing just makes sense.
Chris: Oh, I do remember that. And that was a little bit of a fiasco, really, around all of that, right? And it's great to hear, actually, that InfoSec folks and security people are, you know, not unhappy, I guess, with a tool like this. It's been interesting for me personally. We've really come from a practitioner's background. You know, I wouldn't call myself a security engineer at all. I would call myself a sometimes software developer, I guess. I have been hacking my way around Go and definitely learning a lot about how the cloud has worked over the past seven, eight years or so, but I wouldn't call myself a security engineer, so I'm being very cautious around how all of these things work. And we've really tried to defer to things like the system keychain and defer to things that we know are pretty safe and work.
Corey: The thing that I also want to call out as well is that your licensing is under the MIT license. This is not one of those, "Oh, you're required to wind up doing a bunch of branding stuff around it." And, like, some people say, "Oh, you have to own the trademark for all of these things." I mean, I'm not an expert in international trademark law, let's be very clear, but I also feel that trademarking a term that is already used heavily in the space, such as the word 'Granted,' feels like kind of an uphill battle. And let's further be clear that it doesn't matter what you call this thing. In fact, I will call attention to an oddity that I've encountered a fair bit. After installing it, the first thing you do is you run the command 'granted.' That sets it up, it lets you configure your browser, what browser you want to use, and it now supports standard out for that headless EC2 use case. Great. Awesome. Love it. But then the other binary that ships with it is Assume. And that's what I use day-to-day. It actually takes me a minute sometimes, when it's been long enough, to remember that the tool is called Granted and not Assume. What's up with that?
Chris: So, part of the challenge that we ran into when we were building the Granted project is that we needed to export some environment variables. And these are really important when you're logging into AWS because you have your access key, your secret key, your session token. All of those, when you run the assume command, need to go into the terminal session that you called it from. This doesn't matter so much when you're using the console mode, which is what we mentioned earlier, where you can open 100 different accounts if you want to view all of those at the same time in your browser.
But if you want to use it in your terminal, we wanted to make it look as really smooth and seamless as possible here.And we were really inspired by this approach from—and I have to shout them out and kind of give credit to them—a tool called AWSume—they're spelled A-W-S-U-M-E—Python-based tool that they don't do as much with single-sign-on, but we thought they had a really nice, like, general approach to the way that they did the scripting and aliasing. And we were inspired by that and part of that means that we needed to have a shell script that called this executable, which then will export things back out into the shell script. And we're doing all this wizardry under the hood to make the user experience really smooth and seamless. Part of that meant that we separated the commands into granted and assume and the other part of the naming for everything is that I felt Granted had a far better ring to it than calling the whole project Assume.Corey: True. And when you say assume, is it AWS or not? I've used the AWSume project before; I've used AWS Vault out of 99 Designs for a while. I've used—for three minutes—the native AWS SSO config, and that is just trash. Again, they're so good at the plumbing, so bad at the porcelain, I think is the criticism that I would levy toward a lot of this stuff.Chris: Mmm.Corey: And it's odd to think there's an entire company built around just smoothing over these sharp, obnoxious edges, but I'm saying this as someone who runs a consultancy and have five years that just fixes the bill for this one company. So, there's definitely a series of cottage industries that spring up around these things. I would be thrilled, on some level, if you wound up being completely subsumed by their product advancements, but it's been 15 years for a lot of this stuff and we're still waiting. My big failure mode that I'm worried about is that you never are.Chris: Yeah, exactly, exactly. And it's really interesting when you think about all of these user experience gaps in AWS being opportunities for, I guess, for companies like us, I think, trying to simplify a lot of the complexity for things. I'm interested in sort of waiting for a startup to try and, like, rebuild the actual AWS console itself to make it a little bit faster and easier to use.Corey: It's been done and attempted a bunch of different times. The problem is that the console is a lot of different things to a lot of different people, and as you step through that, you can solve for your use case super easily. “Yeah, what do I care? I use RDS, I use some VPC nonsense, and I use EC2. The end.” “Great. What about IAM?”Because I promise you're using that whether you know it or not. And okay, well, I'm talking to someone else who's DynamoDB, and someone else is full-on serverless, and someone else has more money than sense, so they mostly use SageMaker, and so on and so forth. And it turns out that you're effectively trying to rebuild everything. I don't know if that necessarily works.Chris: Yeah, and I think that's a good point around maybe while we haven't seen anything around that sort of space so far. You go to the console, and you click down, you see that list of 200 different services and all of those have had teams go and actually, like, build the UI and work with those individual APIs. Yeah.Corey: Any ideas as far as what's next for features on Granted?Chris: I think that, for us, it's continuing to work with everybody who's using it, and with a focus of stability and performance. 
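The granted/assume split Chris describes comes down to a common CLI pattern: a child process cannot modify its parent shell's environment, so the credential-fetching binary prints export statements and a thin shell function evals them. Below is a hedged sketch of that pattern in Python; Granted's real implementation is Go plus shell wrappers, and every name and value here is made up.

```python
#!/usr/bin/env python3
# fake-assume: prints export statements instead of setting variables itself,
# because a child process can't reach back into the parent shell's environment.
#
# The companion shell function (in .zshrc or .bashrc) would be something like:
#   assume() { eval "$(fake-assume "$@")"; }

import shlex
import sys

def fetch_credentials(profile: str) -> dict:
    """Stand-in for the real SSO/role-assumption dance for the given profile."""
    return {
        "AWS_ACCESS_KEY_ID": "AKIAEXAMPLE",
        "AWS_SECRET_ACCESS_KEY": "example-secret",
        "AWS_SESSION_TOKEN": "example-session-token",
        "AWS_PROFILE": profile,
    }

if __name__ == "__main__":
    profile = sys.argv[1] if len(sys.argv) > 1 else "default"
    for key, value in fetch_credentials(profile).items():
        # The eval in the shell function turns these lines into real env vars.
        print(f"export {key}={shlex.quote(value)}")
```

Running something like eval "$(fake-assume staging)" in the shell then leaves the credential variables set for subsequent commands, which is exactly the handoff the separate assume entry point exists to make seamless.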
We actually had somebody in the community raise an issue because they have an AWS config file that's over 7000 lines long. And I kind of pity that person, potentially, for their day-to-day. They must deal with so much complexity. Granted is currently quite slow when the config files get very big. And for us, I think, you know, we built it for ourselves; we don't have that many accounts just yet, so working to try to, like, make it really performant and really reliable is something that's really important. Corey: If you don't mind a feature request while we're at it—and I understand that this is more challenging than it looks like—I'm willing to fund this as a feature bounty if that makes sense. And this also feels like it might be a good first project for a very particular type of person: I would love to get tab completion working in Zsh. You have it— Chris: Oh. Corey: For Fish, because there's a great library that automatically populates that out, but for the Zsh side of it, it's, “Oh, I should just wind up getting Zsh completion working,” and I fell down a rabbit hole, let me tell you. And I came away from this with the perception of, yeah, I'm not going to do it. I am not smart enough to check those boxes. But a lot of people are, so that is the next thing I would love to see. Because I will change my browser to log into the AWS console for you, but be damned if I'm changing my shell. Chris: [laugh]. I think autocomplete probably should be higher on our roadmap for the tool, to be honest, because it's really, like, a key metric, and what we're focusing on is how easy it is to log in. And you know, if you're not too sure what commands to use, or if we can save you a few keystrokes, I think that would be, kind of like, reaching our goals. Corey: From where I'm sitting, you definitely have. I really want to thank you for taking the time to not only build this in the first place, but also speak with me about it. If people want to learn more, where's the best place to find you? Chris: So, you can find me on Twitter, I'm @chr_norm, or you can go and visit granted.dev and you'll have a link to join the Slack community. And I'm very active on the Slack. Corey: You certainly are, although I will admit that I fall into the challenge of being in just the perfectly opposed timezone from you and your co-founder, who are in different time zones to my understanding; one of you is in Australia and one of you is in London; you're the London guy, as best I'm aware. And as a result, invariably, I wind up putting in feature requests right when no one's around. And, for better or worse, in the middle of the night is not when I'm usually awake trying to log into AWS. That is Azure time. Chris: [laugh]. Yeah, no, we don't have the US time zone properly covered yet for our community support and help. But we do have a fair bit of the world's time zones covered. The rest of the team for Common Fate is all based in Australia, and I'm out here over in London. Corey: Yeah. I just want to thank you again for just being so accessible and, like, honestly receptive to feedback. I want to be clear, there's a way to give feedback, and I do strive to do it constructively. I didn't come crashing into your Slack one day with a, “You know what your problem is?” I prefer to take the, “This is awesome. Here's what I think would be even better. Does that make sense?” approach, as opposed to the imperious demands in GitHub issues and whatnot. It's, “I'd love it if it did this thing. It doesn't do this thing.
Can you please make it do this thing?” Turns out that's the better way to drive change. Who knew? Chris: Yeah. [laugh]. Yeah, definitely. And I think that one of the things that's been the best around our journey with Granted so far has been listening to feedback and hearing from people how they would like to use the tool. And a big thank you to you, Corey, for actually suggesting changes that make it not only better for you, but better for everybody else who's using Granted. Corey: Well, at least as long as we're using my particular byzantine workload patterns in some way, or shape, or form, I'll hear that. But no, it's been an absolute pleasure and I really want to thank you for your time as well. Chris: Yeah, thank you for having me. Corey: Chris Norman, co-founder of Common Fate, as well as one of the two primary developers originally behind the Granted project that logs you into AWS without you having to lose your mind. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, incensed, raging comment that talks about just how terrible all of this is once you spend four hours logging into your AWS account by hand first. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
Steph has an update and a question wrapped into one about the work that is being done to migrate the Test::Unit test over to RSpec. Chris got to do something exciting this week using dry-monads. Success or failure? This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Bartender (https://www.macbartender.com/) dry-rb - dry-monads v1.0 - Pattern matching (https://dry-rb.org/gems/dry-monads/1.0/pattern-matching/) alfred-workflows (https://github.com/tupleapp/alfred-workflows/blob/master/scripts/online_users.rb) Raycast (https://www.raycast.com/) ruby-science (https://github.com/thoughtbot/ruby-science) Inertia.js (https://inertiajs.com/) Remix (https://remix.run/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow, and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. STEPH: What type of bird is the strongest bird? CHRIS: I don't know. STEPH: A crane. [laughter] STEPH: You're welcome. And on that note, shall we wrap up? CHRIS: Let's wrap up. [laughter] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris, I saw a good movie I'd like to tell you about. It was just over the weekend. It's called The Duke, and it's based on a real story. I should ask, have you seen it? Have you heard of this movie called The Duke? CHRIS: I don't think so. STEPH: Okay, cool. It's a true story, and it's based on an individual named Kempton Bunton who then stole a particular portrait, a Goya portrait; if you know your artist, I do not. 
But he stole a Goya portrait and then essentially held it for ransom, because he was a big advocate that the BBC News channel should be free for people that are living on a pension or that are war veterans, because then they're not able to afford that fee. But then, if you take the BBC channel away from them, it disconnects them from society. And it's a very good movie. I highly recommend it. So I really enjoyed watching that over the weekend. CHRIS: All right. Excellent recommendation. We will, of course, add that to the show notes mostly so that I can find it again later. STEPH: On a more technical note, I have a small update, or it's more of a question. It's an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec. This has been quite a journey that Joël and I have been on for a while now. And we're making progress, but we're realizing that we're spending like 95% of our time in the test setup and porting that over, specifically because we're mapping fixture data over to FactoryBot, and we're just realizing that's really painful. It's taking up a lot of time to do that. And initially, when I realized we were just doing that, we hadn't even really talked about it, but we were moving it over to FactoryBot. I was like, oh, cool. We'll get to delete all these fixtures because there are around 208 files of them. And so that felt like a really good additional accomplishment to migrating the tests over. But now that we realize how much time we're spending migrating the data over for that test setup, we've reevaluated, and I shared with Joël in the Slack channel. I was like, crap. I was like, I have a bad idea, and I can't not say it now because it's crossed my mind. And my bad idea was what if we stopped porting over fixtures to FactoryBot and then we just added the fixtures to a directory that RSpec would look at, so then we can rely on those fixtures? And then that way, we're literally then ideally just copying over from Test::Unit over to RSpec. But it does mean a couple of things. Well, one, it means that we're now running those fixtures at the beginning of the RSpec tests. We're introducing another pattern of where these tests are already using FactoryBot, but now they have fixtures at the top, and then we won't get to delete the fixtures. So we had a conversation around how to manage and mitigate some of those concerns. And we're still in that exploratory mode. We're going to test it out and see if this really speeds us up, referencing the fixtures. The question that's wrapped up in this is there's something different between how fixtures generate data and how factories generate data. So I've run into this a couple of times now where I moved data over to just call a factory. But then I was hitting these callbacks or after-save hooks or weird things that were then preventing me from creating the record, even though fixtures were creating them just fine. And then Joël pointed out today that he was running into something similar where there were private methods that were getting called. And there was all sorts of additional code that was getting run with factories versus fixtures. And I don't have an answer. Like, I haven't looked into this. And it's frankly intentional because I was trying hard to not dive into understanding the mechanics. We really want to get through this. But now I'm starting to ponder a little more as to what is different with fixtures and factories? And I like that factories run these callbacks; that feels correct.
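For readers wondering what "adding the fixtures to a directory that RSpec would look at" could look like in practice, here is a minimal sketch using rspec-rails's built-in fixture support. The paths, fixture names, and model are hypothetical, and this is one way to wire it up rather than the exact setup Steph and Joël landed on.

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  # Point rspec-rails at the fixtures copied over from the Test::Unit suite.
  config.fixture_path = "#{Rails.root}/spec/fixtures"

  # Roll fixture data back between examples.
  config.use_transactional_fixtures = true
end

# spec/models/subscription_spec.rb
RSpec.describe Subscription do
  # NOTE: legacy fixtures ported from Test::Unit -- please don't mimic this;
  # new test setup should build data with FactoryBot instead.
  fixtures :users, :subscriptions

  it "keeps the legacy scenario passing" do
    expect(subscriptions(:active)).to be_valid
  end
end
```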
But I'm surprised that fixtures doesn't, or at least that's the experience that I'm having. So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know. CHRIS: Oh, that is interesting. I have not really worked with fixtures much at all. I've lived a factory life myself, and thus that's where almost all of my experience is. I'm not super surprised if this ends up being the case, like, the idea that fixtures are just some data that gets shoveled into the database directly as opposed to FactoryBot going through the model layer. And so it's sort of like that difference. But I don't know that for certain. That sounds like what this is and makes sense conceptually. But I think this is what you were saying like, that also kind of pushes me more in the direction of factories because it's like, oh, they're now representative. They're using our model layer, where we're defining certain truths. And I don't love callbacks as a mechanism. But if your app has them, then getting data that is representative is useful in tests. Like one of the things I add whenever I'm working with FactoryBot is the FactoryBot lint rake task RSpec thing that basically just says, "Are your factories valid?" which I think is a great baseline to have. Because you may add a migration that adds a default constraint or something like that to the database that suddenly all your factories are invalid, and it's breaking tests, but you don't know it. Like subtly, you change it, and it doesn't actually break a test, but then it's harder later. So that idea of just having more correctness baked in is always nice, especially when it can be automated like that, so definitely a fan of that. But yeah, interested if you do figure out the distinction. I do like your take, though, of like, but also, maybe I just won't figure this out. Maybe this isn't worth figuring it out. Although you were in the interesting spot of, you could just port the fixtures over and then be done and call the larger body of work done. But it's done in sort of a half-complete way, so it's an interesting trade-off space. I'm also interested to hear where you end up on that. STEPH: Yeah, it's a tough trade-off. It's one that we don't feel great about. But then it's also recognizing what's the true value of what we're trying to deliver? And it also comes down to the idea of churn versus complexity. And I feel like we are porting over existing complexity and even adding a smidge, not actual complexity but adding a smidge of indirection in terms that when someone sees this file, they're going to see a mixed-use of fixtures and factories, and that doesn't feel good. And so we've already talked about adding a giant comment above fixtures that just is very honest and says, "Hey, these were ported over. Please don't mimic this. But this is some legacy tests that we have brought over. And we haven't migrated the fixtures over to use factories." And then, in regards to the churn versus complexity, this code isn't likely to get touched like these tests. We really just need them to keep running and keep validating scenarios. But it's not likely that someone's going to come in here and really need to manage these anytime soon. At least, this is what I'm telling myself to make me feel better about it. 
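The lint check Chris mentions is FactoryBot.lint. Here is one common way to hook it into a suite, sketched under the assumption of a Rails app using rspec-rails and factory_bot_rails (factory_bot_rails also ships a factory_bot:lint rake task for the same job).

```ruby
# spec/support/factory_bot_lint.rb -- a minimal sketch, not the only wiring.
RSpec.configure do |config|
  config.before(:suite) do
    # Build and validate one record per factory (and trait), then roll back,
    # so a new NOT NULL column or validation fails the suite loudly instead
    # of silently producing invalid test data later.
    ActiveRecord::Base.transaction do
      FactoryBot.lint traits: true
      raise ActiveRecord::Rollback
    end
  end
end
```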
So there's also that idea of yes, we are porting this over. This is also how they already exist. So if someone did need to manage these tests, then going to Test::Unit, they would have the same experience that they're going to have in RSpec. So that's really the crux of it is that we're not improving that experience. We're just moving it over and then trying to communicate that; yes, we have muddied the waters a little bit by introducing this other pattern. So we're going to find a way to communicate why we've introduced this other pattern, but that way, we can stay focused on actually porting things over to RSpec. As for the factories versus fixtures, I feel like you're onto something in terms of it's just skipping that model layer. And that's why a lot of that functionality isn't getting run. And I do appreciate the accuracy of factories. I'd much rather know is my data representative of real data that can get created in the world? And right now, it feels like some of the fixtures aren't. Like, how they're getting created seemed to bypass really important checks and validations, and that is wrong. That's not what we want to have in our test is, where we're creating data that then the rest of the application can't truly create. But that's another problem for another day. So that's an update on a trade-off that we have made in regards to the testing journey that we are on. What's going on in your world? CHRIS: Well, we got to do something exciting this week. I was working on some code. This is using dry-monads, the dry-rb space. So we have these result objects that we use pretty pervasively throughout the app, and often, we're in a controller. We run one of these command objects. So it's create user, and create user actually encompasses a ton of logic in our app, and that object returns a result. So it's either a success or a failure. And if it's a success, it'll be a success with that new user wrapped up inside of it, or if it's a failure, it's a specific error message. Actually, different structured error messages in different ways, some that would be pushed to the form, some that would be a flash message. There are actually fun, different things that we do there. But in the controller, when we interact with those result objects, typically what we'll do is we'll say result equals create user dot run, (result=createuser.run) and then pass it whatever data it needs. And then on the next line, we'll say results dot either, (results.either), which is a method on these result objects. It's on both the success and failure so you can treat them the same. And then you pass what ends up being a lambda or a stabby proc, or I forget what they are. But one of those sort of inline function type things in Ruby that always feel kind of weird. But you pass one of those, and you actually pass two of them, one for the success case and one for the failure case. And so in the success case, we redirect back with a notice of congratulations, your user was created. Or, in the failure case, we potentially do a flash message of an alert, or we send the errors down, or whatever it ends up being. But it allows us to handle both of those cases. But it's always been syntactically terrible, is how I would describe it. It's, yeah, I'm just going to leave it at that. We are now living in a wonderful, new world. This has been something that I've wanted to try for a while. But I finally realized we're actually on Ruby 2.7, and so thus, we have access to pattern matching in Ruby. 
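As a rough sketch of the controller shape Chris is describing (including the "syntactically terrible" part), assuming a hypothetical CreateUser command that returns a dry-monads result: #either takes two callables, the first run on Success with the wrapped value, the second run on Failure.

```ruby
# app/controllers/users_controller.rb -- CreateUser, the routes, and the
# flash copy are hypothetical stand-ins for illustration.
class UsersController < ApplicationController
  def create
    result = CreateUser.run(user_params)

    # Dry::Monads result objects respond to #either on both variants.
    result.either(
      ->(user)  { redirect_to user_path(user), notice: "Welcome aboard!" },
      ->(error) { redirect_to new_user_path, alert: error.to_s }
    )
  end

  private

  def user_params
    params.require(:user).permit(:email, :name)
  end
end
```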
So I get to take it for a spin for the first time, realizing that we were already on the correct version. And in particular, dry-monads has a page in their docs specific to how we can take advantage of pattern matching with the result objects that they provide us. There's nothing specific in the library as far as I understand it. This is just them showing a bunch of examples of how one might want to do it if they're working with these result objects. But it's really great because it gives the ability to interact with, you know, success is typically going to be a singular case. There's one success branch to this whole logic, but there are like seven different ways it can fail. And that's the whole idea as to why we use these command objects and the whole Railway Oriented Programming and that whole thing which I have...what is this word? [laughs] I feel like I should know it. It's a positive rant. I have raved; that is how our users kindly pointed that out to us. I have raved about the Railway Oriented Programming that allows us to do. But it's that idea that they're actually, you know, there's one happy path, and there are seven distinct failure modes, seven unhappy paths. And now, using pattern matching, we actually get a really expressive, readable, useful way to destructure each of those distinct failures to work with the particular bits of data that we need. So it was a very happy day, and I got to explore it. This is, again, a feature of Ruby, not a feature of dry-monads. But dry-monads just happens to embrace it and work really well with it. So that was awesome. STEPH: That is awesome. I've seen one or two; I don't know, I've seen a couple of tweets where people are like, yeah, Ruby pattern matching. I haven't found a way to use it. So I'm excited that you just shared a way that you found to use it. I'm also worried what it says about our developer culture that we know the word rant so well, but rave, we always have to reach back into our memory to be like, what's that positive word or something that we like? [laughs] CHRIS: And especially here on The Bike Shed, where we try to gravitate towards the positive. But yeah, it's an interesting point that you make. STEPH: We're a bunch of ranters. It's what we do, pranting ranters. I don't know why we're pranting. [laughs] CHRIS: Because it's that exciting. That's what it is. Actually, there was an interesting thing as we were playing around with the pattern matching code, just poking around in the console session with it, and it prints out a deprecation warning. It's like, warning: this is an experimental feature. Do not use it, be careful. But in the back of my head, I was like, I actually know how this whole thing plays out, Ruby 2.7, and I assure you, it's going to be fine. I have been to the future, at least I'm pretty sure. I think the version that is in Ruby 2.7 did end up getting adopted basically as it stands. And so, I think there is also a setting to turn off that deprecation warning. I haven't done it yet, but I mostly just enjoyed the conversation that I had with this deprecation message of like, listen, I've been to the future, and it's great. Well, it's complicated, but specific to this pattern matching [laughs] in Ruby 3+ versions, it went awesome. And I'm really excited about that future that we now live in. STEPH: I wish we had that for so many more things in our life [laughs] of like, here's a warning, and it's like, no, no, I've seen the future. It's all right. 
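And here is the same hypothetical action rewritten with Ruby 2.7+ pattern matching, in the style of the dry-monads pattern-matching docs linked in the show notes. The specific failure shapes are invented for illustration, and it assumes the surrounding class includes Dry::Monads[:result] so Success and Failure resolve inside the patterns.

```ruby
def create
  case CreateUser.run(user_params)
  in Success(user)
    redirect_to user_path(user), notice: "Welcome aboard!"
  in Failure[:invalid_params, errors]
    # One of the "unhappy paths": structured errors pushed back to the form.
    render :new, status: :unprocessable_entity, locals: { errors: errors }
  in Failure[:duplicate_email, message]
    redirect_to new_user_path, alert: message
  in Failure(error)
    # Catch-all for any other failure shape.
    redirect_to new_user_path, alert: error.to_s
  end
end
```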
Or you're totally right; I should avoid and back out of this now. CHRIS: If only we could know how the things would play out, you know. But yeah, so pattern matching, very cool. I'll include a link in the show notes to the particular page in the dry-monads docs. But there are also other cool things on the internet. In an unrelated but also cool thing that I found this week, we use Tuple a lot within our organization for pair programming. For anyone who's not familiar with it, it's a really wonderful piece of technology that allows you to pair program pretty seamlessly, better video quality, all of those nice things that we want. But I found there was just the tiniest bit of friction in starting a Tuple call. I know I want to pair with this person. And I have to go up and click on the little menu bar, and then I have to find their name, then I have to click a button. That's just too much. That's not how...I want to live my life at the keyboard. I have a thing called Bartender, which is a little menu bar manager utility app that will collapse down and hide the icons. But it's also got a nice, little hotkey accessible pop-up window that allows me to filter down and open one of the menu bar pop-out menus. But unfortunately, when that happens, the Tuple window isn't interactive at that point. I can't use the arrow keys to go up and down. And so I was like, oh, man, I wonder if there's like an Alfred workflow for this. And it turns out indeed there is actually managed by the kind folks at Tuple themselves. So I was able to find that, install it; it's great. I have it now. I can use that. So that was a nice little upgrade to my workflow. I can just type like TC space and then start typing out the person's name, and then hit enter, and it will start a call immediately. And it doesn't actually make me more productive, but it makes me happier. And some days, that's what matters. STEPH: That's always so impressive to me when that happens where you're like, oh, I need a thing. And then you went through the saga that you just went through. And then the people who manage the application have already gotten there ahead of you, and they're like, don't worry, we've created this for you. That's one of those just beautiful moments of like, wow, y'all have really thought this through on a bunch of different levels and got there before me. CHRIS: It's somewhat unsurprising in this case because it's a very developer-centric organization, and Ben's background being a thoughtbot developer and Alfred user, I'm almost certain. Although I've seen folks talking about Raycast, which is the new hotness on the quick launcher world. I started eons ago in Quicksilver, and then I moved to Alfred, I don't know, ten years ago. I don't know what time it is anymore. But I've been in Alfred land for a while, but Raycast seems very cool. Just as an aside, I have not allowed myself... [laughs] this is another one of those like; I do not have permission to go explore this new tool yet because I don't think it will actually make me more productive, although it could make me happier. So... STEPH: I haven't heard of that one, Raycast. I'm literally adding it to the show notes right now as a way so you can find The Duke later, and I can find Raycast later [chuckles] and take a look at it and check it out. Although I really haven't embraced the whole Alfred workflow. I've seen people really enjoy it and just rave about it and how wonderful it is. But I haven't really leaned into that part of the world; I don't know why. 
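For anyone curious what a workflow like the Tuple one boils down to, the general shape of an Alfred "script filter" is: Alfred hands the typed query to a script (here via ARGV) and renders whatever JSON items come back. This sketch hard-codes a placeholder teammate list; the real workflow linked in the show notes asks Tuple who is currently online.

```ruby
#!/usr/bin/env ruby
# A minimal Alfred script filter sketch; the names are placeholders.
require "json"

query = ARGV.join(" ").downcase

# Placeholder data -- the real workflow fetches currently online users.
teammates = ["Ada Lovelace", "Grace Hopper", "Joe Ferris"]

items = teammates
  .select { |name| name.downcase.include?(query) }
  .map do |name|
    { title: "Call #{name}", subtitle: "Start a Tuple call", arg: name }
  end

# Alfred reads this JSON from stdout and shows the items in its results list.
puts JSON.generate(items: items)
```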
I haven't set any hard and fast rules for myself where I can't play around with these technologies, but I haven't taken the time to do it either. CHRIS: You've also not found yourself writing thousands of lines of Vimscript because you thought that was a good idea. So you don't need as many guardrails it would seem. That's my guess. STEPH: This is true. CHRIS: Whereas I need to be intentional [laughs] with how I structure my interaction with my dev tools. STEPH: Instead, I'm just porting over fixtures from one place to another. [laughs] That's the weird space that I'm living in instead. [laughs] CHRIS: But you're getting paid for that. No one paid me for the Vimscript I wrote. [laughter] STEPH: That's fair. Speaking around process-y things, there's something that's been on my mind that Valeria, another thoughtboter, suggested around how we structure our meetings and the default timing that we have for meetings. So Thursdays are my team-focused day. And it's the day where I have a lot of one on ones. And I realized that I've scheduled them back to back, which is problematic because then I have zero break in between them, which I'm less concerned about that because then I can go for an hour or something and not have a break. And I'm not worried about that part. But it does mean that if one of those discussions happens to go over just even for like two or three minutes, then it means that someone else is waiting for me in those two to three minutes. And that feels unacceptable to me. So Valeria brought up a really good idea where I think it's only with the Google Meet paid version. I could be wrong there. But I think with the paid version of it that then you can set the new default for how long a meeting is going to last. So instead of having it default to 30 minutes, have it default to 25 minutes. So then, that way, you do have that five-minute buffer. So if you do go over just like two or three minutes with someone, you've still got like two minutes to then hop to the next call, and nobody's waiting for you. Or if you want those five minutes to then grab some water or something like that. So we haven't implemented it just yet because then there's discussion around is this a new practice that we want everybody to move to? Because I mean, if just one person does it, it doesn't work. You really need everybody to buy into the concept of we're now defaulting to 25 versus 30-minute meetings. So I'll have to let you know how that goes. But I'm intrigued to try it out because I think that would be very helpful for me. Although there's a part of me that then feels bad because it's like, well, if I have 30 minutes to chat with somebody, but now I'm reducing it to 25 minutes each time, I didn't love that I'm taking time away from our discussion. But that still feels like a better outcome than making somebody wait for three to five minutes if something else goes over. So have you ever run into something like that? How do you manage back-to-back meetings? Do you intentionally schedule a break in between or? CHRIS: I do try to give myself some buffer time. I stack meetings but not so much so that they're just back to back. So I'll stack them like Wednesdays are a meeting-heavy day for me. That's intentional just to be like, all right, I know that my day is going to get chopped up. So let's just really lean into that, chop the heck out of Wednesday afternoons, and then the rest of the week can hopefully have slightly longer deep work-type sessions. 
And, yeah, in general, I try and have like a little gap in between them. But often what I'll do for that is I'll stagger the start of the next meeting to be rather than on the hour or the half-hour, I start it on the 15th minute. And so then it's sort of I now have these little 15-minute gaps in my workflow, which is enough time to do one or two small things or to go get a drink or whatever it is or if things do run over. Like, again, I feel what you're saying of like, I don't necessarily want to constrain a meeting. Or I also don't necessarily want to go into the habit of often over-running. I think it's good to be intentional. Start meetings on time, end meetings on time. If there's a great conversation that's happening, maybe there's another follow-up meeting that should happen or something like that. But for as nonsensical of a human as I believe myself to be, I am rather rigid about meetings. I try very hard to be on time. I try very hard to wrap them up on time to make sure I go to the next one. And so with that, the 15-minute staggering is what I've found works for me. STEPH: Yeah, that makes sense. One-on-ones feels special to me because I wholeheartedly agree with being very diligent about like, hey, this is our meeting time. Let's do a time check. Someone says that at the end, and then that way, everybody can move on. But one on ones are, there's more open discussion space, and I hate cutting people off, especially because it might not be until the last 15 minutes that you really got into the meat of the conversation. Or you really got somewhere that's a little bit more personal or things that you want to talk about. So if someone's like, "Yeah, let me tell you about my life goals," and you're like, "Oh, no, wait, sorry. We're out of time." That feels terrible and tragic to do. So I struggle with that part of it. CHRIS: I will say actually, on that note, I'm now thinking through, but I believe this to be true. Everyone that reports to me I have a 45-minute one-on-one with, and then my CEO I set up the one-on-one. So I also made that one a 45-minute one-on-one. And that has worked out really well. Typically, I try and structure it and reiterate this from time to time of, like, hey, this is your space, not mine. So let's have whatever conversation fits in here. And it's fine if we don't need to use the whole time, but I want to make sure that we have it and that we protect it. Because I often find much like retro, I don't know; I think everything's fine. And then suddenly the conversation starts, and you're like, you know what? Actually, I'm really concerned now that you mentioned it. And you need that sort of empty space that then the reality sort of pop up into. And so with one on one, I try and make sure that there is that space, but I'm fine with being like, we can cut this short. We can move on from one-on-one topics to more of status updates; let's talk about the work. But I want to make sure that we lead with is there anything deeper, any concerns, anything you want to talk through? And sort of having the space and time for that. STEPH: I like that. And I also think it speaks more directly to the problem I'm having because I'm saying that we keep running over a couple of minutes, and so someone else is waiting. So rather than shorten it, which is where I'm already feeling some pain...although I still think that's a good idea to have a default of 25-minute meetings so then that way, there is a break versus the full 30. 
So if people want to have back-to-back meetings, they still have a little bit of time in between. But for one on ones specifically, upping it to 45 minutes feels nice because then you've got that 15-minute buffer likely. I mean, maybe you schedule a meeting, but, I don't know, that's funky. But likely, you've got a 15-minute buffer until your next one. And then that's also an area that I feel comfortable in sharing with folks and saying, "Hey, I've booked this whole 45 minutes. But if we don't need the whole time, that's fine." I'm comfortable saying, "Hey, we can end early, and you can get more of your time back to focus on some other areas." It's more the cutting someone off when they're talking because I have to hop to the next thing. I absolutely hate that feeling. So thanks, I think I'll give that a go. I think I'll try actually bumping it up to 45 minutes, presuming that other people like that strategy too, since they're opting in [laughs] to the 45 minutes structure. But that sounds like a nice solution. CHRIS: Well yeah, happy to share it. Actually, one interesting thing that I'm realizing, having been a manager at thoughtbot and then now being a manager within Sagewell, the nature of the interactions are very different. With thoughtbot, I was often on other projects. I was not working with my team day to day in any real capacity. So it was once every two weeks, I would have this moment to reconnect with them. And there was some amount of just catching up. Ideally, not like status update, low-level sort of thing, but sort of just like hey, what have you been working on? What have you been struggling with? What have you been enjoying? There was more like I needed bigger space, I would say for that, or it's not surprising to me that you're bumping into 30 minutes not being quite long enough. Whereas regularly, in the one on ones that I have now, we end up cutting them short or shifting out of true one-on-one mode into more general conversation and chatting about Raycast or other tools or whatever it is because we are working together daily. And we're pairing very regularly, and we're all on the same project and all sorts of in sync and know what's going on. And we're having retro together. We have plenty of places to have the conversation. So the one-on-one again, still, I keep the same cadence and the same time structure just because I want to make sure we have the space for any day that we really need that. But in general, we don't. Whereas when I was at thoughtbot, it was all the more necessary. And I think for folks listening; I could imagine if you're in a team lead position and if you're working very closely with folks, then you may be on the one side of things versus if you're a little bit more at a distance from the work that they're doing day to day. That's probably an interesting question to ask, and think about how you want to structure it. STEPH: Yeah, I think that's an excellent point. Because you're right; I don't see these individuals. We may not have really gotten to interact, except for our daily syncs outside of that. So then yeah, there's always like a good first 10 minutes of where we're just chatting about life and catching up on how things are going before then we dive into some other things. So I think that's a really good point. Cool, solving management problems on the mic. I dig it. In slightly different news, I've joined a book club, which I'm excited about. This book club is about Ruby. 
It's specifically reading the book Ruby Science, which is a book that was written and published by thoughtbot. And it requires zero homework, which is my favorite type of book club. Because I have found I always want to be part of book clubs. I'm always interested in them, but then I'm not great at budgeting the time to make sure I read everything I'm supposed to read. And so then it comes time for folks to get together. And I'm like, well, I didn't do my homework, so I can't join it. But for this one, it's being led by Joël, and the goal is that you don't have to do the homework. And they're just really short sections. So whoever's in charge of leading that particular session of the book club they're going to provide an overview of what's covered in whatever the reading material that we're supposed to read, whatever topic we're covering that day. They're going to provide an overview of it, an example of it, so then we can all talk about it together. So if you read it, that's wonderful. You're a bit ahead and could even join the meeting like five minutes late. Or, if you haven't read it, then you could join and then get that update. So I'm very excited about it. And this was one of those books that I'd forgotten that thoughtbot had written, and it's one that I've never read. And it's public for anybody that's interested in it. So to cover a little bit of details about it, so it talks about code smells, ways to refactor code, and then also common patterns that you can use to solve some issues. So there's a lot of really just great content that's in it. And I'll be sure to include a link in the show notes for anyone else that's interested. CHRIS: And again, to reiterate, this book is free at this point. Previously, in the past, it was available for purchase. But at one point a number of years ago, thoughtbot set all of the books free. And so now that along with a handful of other books like...what's Edward's DNS book? Domain Name Sanity, I believe, is Edward's book name that Edward Loveall wrote when he was not a thoughtboter, [laughs] and then later joined as a thoughtboter, and then we made the book free. But on the specific topic of Ruby Science, that is a book that I will never forget. And the reason I will never forget it is that book was written by the one and only CTO Joe Ferris, who is an incredibly talented developer. And when I was interviewing with thoughtbot, I got down to the final day, which is a pairing session. You do a morning pairing session with one thoughtbot developer, and you do an afternoon pairing session with another thoughtbot developer. So in the morning, I was working with someone on actually a patch to Rails which was pretty cool. I'd never really done that, so that was exciting. And that went fine with the exception that I kept turning on Caps Lock on their keyboard because I was used to Caps Lock being CTRL, and then Vim was going real weird for me. But otherwise, that went really well. But then, in the afternoon, I was paired with the one and only CTO Joe Ferris, who was writing the book Ruby Science at that time. And the nature of the book is like, here's a code sample, and then here's that code sample improved, just a lot of sort of side-by-side comparisons of code. And I forget the exact way that this went, but I just remember being terrified because Joe would put some code up on the screen and be like, "What do you think?" And I was like, oh, is this the good code or the bad code? I feel like I should know. I do not know. I'm not sure. 
It worked out fine, I guess. I made it through. But I just remember being so terrified at that point. I was just like, oh no, this is how it ends for me. It's been a good run. STEPH: [laughs] CHRIS: I made it this far. I would have loved to work for this nice thoughtbot company, but here we are. But yeah, I made it through. [laughs] STEPH: There are so many layers to that too where it's like, well if I say it's terrible, are you going to be offended? Like, how's this going to go for me if I speak my truths? Or what am I going to miss? Yeah, that seems very interesting (I kind of like it.) but also a terrifying pairing session. CHRIS: I think it went well because I think the code...I'd been following thoughtbot's work, and I knew who Joe was and had heard him on podcasts and things. And I kind of knew roughly where things were, and I was like, that code looks messy. And so I think I mostly got it right, but just the openness of the question of like, what do you think? I was like, oh God. [laughs] So yeah, that book will always be in my memories, is how I would describe it. STEPH: Well, I'm glad it worked out so we could be here today recording a podcast together. [laughs] CHRIS: Recording a podcast together. Now that I say all that, though, it's been a long time since I've read the book. So maybe I'll take a revisit. And definitely interested to hear more about your book club and how that goes. But shifting ever so slightly (I don't have a lot to say on this topic.) but there's a new framework technology thing out there that has caught my attention. And this hasn't happened for a while, so it's kind of novel for me. So I tend to try and keep my eye on where is the sort of trend of web development going? And I found Inertia a while ago, and I've been very, very happy with that as sort of this is the default answer as to how I build websites. To be clear, Inertia is still the answer as to how I build websites. I love Inertia. I love what it represents. But I'm seeing some stuff that's really interesting that is different. Specifically, Remix.run is the thing that I'm seeing. I mentioned it, I think, in the last episode talking about there was some stuff that they were doing with data loading and async versus synchronous, and do you wait on it or? They had built some really nice levers and trade-offs into the framework. And there's a really great talk that Ryan Florence, one of the creators of Remix.run, gave about that and showed what they were building. I've been exploring it a little bit more in-depth now. And there is some really, really interesting stuff in Remix. In particular, it's a meta-framework, I think, is the nonsense phrase that we use to describe it. But it's built on top of React. That won't be true for forever. I think it's actually they would say it's more built on top of React Router. But it is very similar to Next.js for folks that have seen that. But it's got a little bit more thought around data loading. How do we change data? How do we revalidate data after? There's a ton of stuff that, having worked in many React client-side API-heavy apps that there's so much pain, cache invalidation. How do you think about the cache? When do you fetch from the network? How do you avoid showing 19 different loading spinners on the page? And Remix as a framework has some really, I think, robust and well-thought-out answers to a lot of that. So I am super-duper intrigued by what they're doing over there. There's a particular video that I think shows off what Remix represents really well. 
It's Ryan Florence, that same individual, the creator of Remix, building just a newsletter signup page. But he goes through like, let's start from the bare bones, simplest thing. It's just an input, and a form submits to the server. That's it. And so we're starting from web 2.0, long, long ago, sort of ideas, and then he gradually enhances it with animations and transitions and error states. And even at the end, goes through an accessibility audit using the screen reader to say, "Look, Remix helps you get really close because you're just using web fundamentals." But then goes a couple of steps further and actually makes it work really, really well for a screen reader. And, yeah, overall, I'm just super impressed by the project, really, really intrigued by the work that they're doing. And frankly, I see a couple of different projects that are sort of in this space. So yeah, again, very early but excited. STEPH: On their website...I'm checking it out as you're walking me through it, and on their website, they have "Say goodbye to Spinnageddon." And that's very cute. [laughs] CHRIS: There's some fundamental stuff that I think we've just kind of as a web community, we made some trade-offs that I personally really don't like. And that idea of just spinners everywhere just sending down a ball of application logic and a giant JavaScript file turning it on on someone's computer. And then immediately, it has to fetch back to the server. There are just trade-offs there that are not great. I love that Remix is sort of flipping that around. I will say, just to sort of couch the excitement that I'm expressing right now, that Remix exists in a certain place. It helps with building complex UIs. But it doesn't have anything in the data layer. So you have to bring your own data layer and figure out what that means. We have ActiveRecord within Rails, and it's deeply integrated. And so you would need to bring a Prisma or some other database connection or whatever it is. And it also doesn't have more sort of full-featured framework things. Like with Rails, it's very easy to get started with a background job system. Remix has no answer to that because they're like, no, no, this is what we're doing over here. But similarly, security is probably the one that concerns me the most. There's an open conversation in their discussion portal about CSRF protection and a back and forth of whether or not Remix should have that out of the box or not. And there are trade-offs because there are different adapters that you can use for auth. And each would require their own CSRF mitigation. But to me, that is the sort of thing that I would want a framework to have. Or I'd be interested in a framework that continues to build on top of Remix that adds in background jobs and databases and all that kind of stuff as a complete solution, something more akin to a Rails or a Laravel where it's like, here we go. This is everything. But again, having some of these more advanced concepts and patterns to build really, really delightful UIs without having to change out the fundamental way that you're building things. STEPH: Interesting. Yeah, I think you've answered a couple of questions that I had about it. I am curious as to how it fits into your current tech stack. So you've mentioned that you're excited and that it's helpful. But given that you already have Rails, and Inertia, and Svelte, does it plug and play with the other libraries or the other frameworks that you have? 
Are you going to have to replace something to then take advantage of Remix? What does that roadmap look like? CHRIS: Oh yeah, I don't expect to be using Remix anytime soon. I'm just keeping an eye on it. I think it would be a pretty fundamental shift because it ends up being the server layer. So it would replace Rails. It would replace the Inertia within the stack that I'm using. This is why, as I started, I was like, Inertia is still my answer. Because Inertia integrates really well with Rails and allows me to do the sort of...it's not progressive enhancement, but it's like, I want fancy UI, and I don't want to give up on Rails. And so, Inertia is a great answer for that. Remix does not quite fit in the same way. Remix will own all of the request-response lifecycle. And so, if I were to use it, I would need to build out the rest of that myself. So I would need to figure out the data layer. I would need to figure out other things. I wouldn't be using Rails. I'm sure there's a way to shoehorn the technologies together, but I think it sort of architecturally would be misaligned. And so my sense is that folks out there are building...they're sort of piecing together parts of the stack to fill out the rest. And Remix is a really fantastic routing, controller, and view layer, from the router on down. So it's routing, controller, view I would say Remix has a really great answer to, but it doesn't have as much of the other stuff. Whereas in my case, Inertia and Rails come together and give me a great answer to the whole story. STEPH: Got it. Okay, that's super helpful. CHRIS: But yeah, again, I'm in very much the exploratory phase. I'm super intrigued by a lot of what I've seen of it and also just sort of the mindset, the ethos of the project, as it were. That sounds fancy as I say it, but it's what I mean. I think they want to build from web fundamentals and then enhance the experience on top of that, and I think that's a really great way to go. It means that links will work. It means that routing and URLs will work by default. It means that you won't have loading spinner Armageddon, and these are core fundamentals that I believe make for good websites and web applications. So super interested to see where they go with it. But again, for me, I'm still very much in the Rails Inertia camp. Certainly, I mean, I've built Sagewell on top of it, so I'm going to be hanging out with it for a while, but also, it would still be my answer if I were starting something new right now. I'm just really intrigued that there's a new example out there in the world, this Remix thing that's pushing the envelope in a way that I think is really great. But with that, my now…what was that? My second or my third rave? Also called the positive rant, as we call it. But yeah, I think on that note, what do you think? Should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!!!
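To make the Rails-plus-Inertia side of that comparison concrete, here is a rough sketch in the inertia_rails style: routing, auth, and data loading stay in a plain Rails controller, which hands props to a client-side page component. The controller, component name, and props are hypothetical.

```ruby
# app/controllers/accounts_controller.rb -- hypothetical example.
class AccountsController < ApplicationController
  def show
    account = current_user.accounts.find(params[:id])

    # Renders the client-side "Accounts/Show" page component (Svelte, React,
    # or Vue) with these props, while Rails keeps owning the request cycle.
    render inertia: "Accounts/Show", props: {
      account: account.as_json(only: [:id, :name, :balance_cents])
    }
  end
end
```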
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Another toaster strudel debate?! Plus, the results are in for the most listened-to podcast in the RoR community! :: drum roll :: Steph has a "Dear Gerrit" message to share. Chris has a follow-up on mobile app strategy. The Bike Shed: 328: Terrible Simplicity (https://www.bikeshed.fm/328) When To Fetch: Remixing React Router - Ryan Florence (https://www.youtube.com/watch?v=95B8mnhzoCM) Virtual Event - Save Time & Money with Discovery Sprints (https://thoughtbot.com/events/save-time-money-with-discovery) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: thoughtbot's next virtual event "Save Time & Money with Discovery Sprints" is coming up on June 17th, from 2 - 3 PM Eastern. It's a discussion with team members from product management, design and development. From a developer perspective, topics will include how to plan a product's architecture, both the MVP and future version, how to lead a tech spikes into integrations and conduct a build vs buy reviews of third party providers. Head to thoughtbot.com/events to register, the event is June 17th 2 - 3 PM ET. Even if you can't make it, registering will get you on the list for the recording. CHRIS: We're the second-best. We're the second-best. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: I'm very happy to report that I picked up a treat from the store recently. So while I was in Boston and we were hanging out in person, we talked about Pop-Tarts because that always comes up as a debate, as it should. And then also Toaster Strudels came up, so I now have a package of Toaster Strudels, and those are legit. Pop-Tart or Toaster Strudel, I am team Toaster Strudel, which I know you're going to ask me about icing and if I put it on there, so go ahead. I'm going to pause. [laughs] CHRIS: It sounds like I don't even need to say anything. But yes, inquiring minds want to know. STEPH: I think that's also my very defensive response because yes, I put icing on my Toaster Strudel. CHRIS: How interesting. [laughs] STEPH: But it feels like a whole different class of pastry. So I'm very defensive about my stance on Pop-Tarts with no icing put Strudel with icing. CHRIS: A whole different class of pastry. Got it. Noted. Understood. So did you travel? Like, were these in your luggage that you flew back with? STEPH: [laughs] Oh no. They would be all gooey and melty. No, we bought them when we got back to North Carolina. Oh, that'd be a pro move; just pack little individual Strudels as your airplane snack. Ooh, I might start doing that now. That sounds like a great airplane snack. CHRIS: You got to be careful though if the icing, you know, if it's pressurized from ground level and then you get up there, and it explodes. And you gotta be careful. Or is it the reverse? It's lower pressure up in the plane. So it might explode. STEPH: [laughs] Either way, it might explode. CHRIS: Well, yeah. If you somehow buy a packet of icing that is sky icing that is at that pressure, and you bring it down, then...but if you take it up and down, I think it's fine. If you open it at the top, you might be in danger. If you open icing under the ocean, I think nothing's going to happen. So these are the ranges that we're playing with. 
STEPH: I will be very careful with sky icing and probably pack two so that way I have a backup just in case. So if one explodes, we'll be like, all right, now I know what I'm working with and be more prepared for the next one. CHRIS: That's just smart. STEPH: I try to make smart travel decisions, Toaster Strudels on the go. Aside from travel treats and sky icing, I have some news regarding Planet Argon, a Ruby on Rails consultancy, who just published this year's Ruby on Rails community survey results. And so they list a lot of fabulous different topics in there. And one of them includes a learning section that highlights the most listened-to podcasts in the Ruby on Rails community as well as blogs and some other resources. And Bike Shed is listed as the second most listened-to podcast in the Ruby on Rails community, so whoo, golf clap. CHRIS: Fantastic. STEPH: And in addition to that, the thoughtbot blog got a really nice shout-out. So the thoughtbot blog is in the number two spot for the most visited blogs in the community. In the first spot is Ruby Weekly, which is like, you know, okay, that feels fair, that feels good. So it's really exciting for the thoughtbot blog because a lot of people work really hard on curating and creating that content. So that's wonderful that so many people are enjoying it. And then I should also highlight that for the podcasts, in first place is Remote Ruby, so congrats to Chris, Jason, and Andrew for grabbing that number one spot. And Brittany Martin, host of the Ruby on Rails Podcast, along with Brian Mariani, Jemma Issroff, and Nick Schwaderer, are in the number three spot. And some people say that Ruby is losing steam, but look at all that content and all those highly ranked podcasts. I mean, we like Ruby so much we're spending time recording ourselves talking about it. So I say long live Ruby, long live Rails. CHRIS: Yes. Long live Ruby indeed. And yeah, it's definitely an honor to be on the list and to be amongst such other wonderful shows. Certainly big fans of the work of those other podcasts. We even did a joint adventure with them at one point, and that was a really wonderful experience, so yeah, honored to be on the list alongside them. And to have folks out there in the world listening to our tech talk and nonsense is always nice to hear. STEPH: Yeah. You and I show up and say lots of silly things and technical things into the podcast. The true heroes are the ones that went and voted. So thank you to everybody who voted. That's greatly appreciated. It's really nice feedback. Because we get listener responses and questions, and those are wonderful because it lets us know that people are listening. But I have to say that having the survey results is also really nice. It lets us know people like the show. Oh, but I did go back and look at some of the previous stats because then I was like, huh, so I'm paying attention. I looked at this year's, and I was like, I wonder what last year's was or the year before that. And I think this survey comes out every two years because I didn't see one for 2021. But I did find the survey results for 2020, in which we were in the number one spot, and Remote Ruby was in the second spot. So I feel like now we've got a really nice, healthy podcasting war situation going on to see who can grab the first spot. We've got two years, everybody, to see who [laughs] grabs the number one spot. That's a lot of prep time for a competition.
CHRIS: Yeah, I feel like we should be like, I don't know, planning elaborate pranks on them or something like that now. Is that where this is at? It's something like that, I think. STEPH: I think so. I think this is where you put like sky frosting inside someone's suitcase, and that's the type of prank that you play. [laughs] CHRIS: The best of pranks. STEPH: We'll definitely put together a little task force. And we'll start thinking of pranks that we all need to start playing on each other for the podcasting wars that we're entering for the next few years. But anywho, what's going on in your world? CHRIS: Let's see, what's going on in my world? A fun thing happened recently. I had a chance to reflect back on some architectural choices that we've made in the Sagewell platform. And one of those specific choices is how we've approached building our native mobile apps. We made what some listeners may remember as an interesting set of choices. In particular, in Episode 328, which we'll include a link to in the show notes, I shared with you the approach that we're taking, which is basically like, Inertia is great, the web is great. We like the web as a platform. What if we were to wrap it in a native shell and find this interesting and somewhat unique hybrid trade-off point? And so, at that point, we were building it. We had most of it built out, and things were going quite well. I think we maybe had the iOS app in the store and the Android app approaching the store or something like that. At this point, both apps have been released to the store, so they are live. Production users are signing in. It's wonderful. But I had a moment in the past couple of weeks to reassess or look at that set of choices and evaluate it. And thankfully, I'm happy with the choices that we've made. So that's good. But to get into the specifics, there were two things that happened that really, really framed the choice that we made, so one was we introduced a major new feature. We basically overhauled the first-run experience, the onboarding that users experience, and added a new, pretty fundamental facet to the platform. It's a bunch of new screens, and flows, and error states, and all of this complexity. And in the process, we iterated on it a bunch. Like, first, it looked like this, and then we changed the order of the screens and switched out the error messages, and et cetera, et cetera. And I'll be honest, we never even thought about the mobile apps. It just wasn't even a consideration. And interestingly, as a final check before going fully live and releasing this out to the full production audience, we did spot check it in the mobile apps, and it didn't work. But it didn't work for a very specific, boring, technical reason that we were able to resolve. It has to do with iframes and WebViews and embedded something, something. And we had to set a flag. Thankfully, it was solvable without a deploy of the native mobile apps. And otherwise, we never thought about the native apps. Specifically, we were able to add this fundamental set of features to our platform. And they just worked in native mobile. And they were roughly the same as they are if you're on a mobile WebView or if you're on desktop web, you know, slightly different in terms of form factor. But the functionality was all the same. And critically, the error states and the edge cases and the flow, there's so much to think about when you're adding a nontrivial feature to an app.
And the fact that we didn't have to consider it really spoke to the choice that we made here. And again, to name it, the choice that we made is we're basically just reusing the same WebViews, the same Rails controllers, and the same what are Svelte components under the hood but the same essentially view layer as well. And we are wrapping that in a native iOS. It's a Swift application shell, and on Android, it's a Kotlin application shell. But under the hood, it's the same web stuff. And that was really great. We just got these new features. And you know what? If we have to rip that whole set of functionality out, again, we won't need to deploy. We won't need to rethink it. Or, if we want to subtly tweak it, we can do that. If we want to think about feature flags or analytics, or error states or error reporting, all of this just naturally falls out of the approach that we took. And that was really wonderful. STEPH: That's super nice. I also love this saga of like, you made a choice, and then you're coming back to revisit and share how it's going. So as someone who's never done this before, in regards of wrapping an application in the manner that you have and then publishing it and distributing it that way, what does that process look like? Is this one of those like you run a command, and literally, it's going to wrap the application and then make it hostable on the different mobile app stores? Or what's that? Am I oversimplifying the process? What does that look like? CHRIS: I think there are a lot of platforms or frameworks I think would probably be the better word like Capacitor is something that comes to mind or Ionic or Expo. There are a handful of them that are a little more fully featured in what they provide. So you just point us at your React Views and whatnot, and we'll wrap that up, and it'll be great. But those are for, I may be overgeneralizing here, but my understanding is those are for more heavy client-side bundles that are talking to a common API. And so you're basically taking your same rich client-side application and bundling that up for reuse on the native app, the native app platforms. And so I think those do have some release to the store sort of thing. In our case, we went a little bit further with that integration wrapper thing that we built. So that is a thing that we maintain. We have a Sagewell iOS repo and a Sagewell Android repo. There's a bunch of Swift and Kotlin code, respectively, in each of them, and we deploy to the stores manually. We're doing that whole process. But critically, the code that is in each of those repositories is just the bridge glue code that says, oh, when this Inertia navigation event happens, I'm going to push a WebView to the navigation stack. And that's what that is. I'm going to render the tab bar of buttons at the bottom with the navigation elements that I get from the server. But it's very much server-driven UI, is the way that I would describe it. And it's wrapping WebViews versus actually having the whole client bundle wrapped up in the thing. It's unfortunately subtle to try and talk through on the radio, but yeah. [laughs] STEPH: You're doing great; this is helping. So if there's a change that you want to make, you go to the Rails application, and you make that change. And then do you need to update anything on that iOS repo? It sounds like you don't, which then you don't have to push a new update to the store. CHRIS: Correct. For the vast majority of things, we do not need to make any changes. 
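A minimal sketch of the Rails side of that server-driven UI idea, assuming the inertia_rails gem, might look like the following; the controller, component name, nav items, and path helpers are invented for illustration, and the native shells would simply render the WebView and read the navigation prop to build their native tab bars.

# Hypothetical app/controllers/dashboard_controller.rb: the server owns the
# screens and the navigation, and both web and the native wrappers consume it.
class DashboardController < ApplicationController
  def show
    render inertia: "Dashboard/Show", props: {
      user: { name: current_user.name },
      # The native shell reads this prop to draw its native tab bar.
      navigation: [
        { label: "Home", path: dashboard_path },
        { label: "Advocate", path: advocate_path }
      ]
    }
  end
end

Because the native apps only wrap and navigate these server-rendered views, a change to a controller like this ships with an ordinary web deploy, which is the point made next about how rarely the shells themselves need a release.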
It's very rare for us to deploy the iOS or the Android app is a different way to put it or to push new releases to the store. It happens we may want to add a new feature to the sort of bridge layer that we built, but increasingly, those are rare. And now it's basically like, yeah, we're just wrapping those WebViews, and it's going great. And again, to name it, it's a trade-off. It's an intentional trade-off that we've made. We're never going to have the richest, most deep platform integration, smooth experience. We are making a small trade-off on that front. But given where we're at as an organization, given how early we are, how much iteration and change, we chose an architecture that optimizes for that change. And so again, like what you just said, yeah, I can...you know how it's really nice to be able to deploy six times a day on a web app, and that's a very straightforward thing to do? It is not so straightforward in the native mobile world. And so, we now have afforded ourselves the ability to do that. But critically, and this is the fun part in my mind, have the trade-offs in the controls. So if we were just like, it's just a WebView, and that's it, and we put it in the stores, and we're done, that is too far of an extreme in my mind. I think the performance trade-offs, the experience trade-offs, it wouldn't feel like a native app like in a deep way, in a problematic way. And so as an example, we have a navigation bar at the top of our app, particularly on iOS, that is native iOS navigation. And we have a tab bar at the bottom, which is native tab UI element. I forget actually what it's called, but it's those elements. And we hide the web application navigation when we're in the mobile context. So we actually swap those out and say, like, let's actually promote these to formal native functionality. We also, within our UI on the web, have a persistent button in the top right corner of your screen that says, "Need help? Reach out to your retirement advocate." who is the person that you get to work with. You can send questions, et cetera, et cetera. It's this little help sidebar drawer thing that pops out. And we have that as a persistent HTML button in the top corner of the web frame. But when we're on native, we push that up as a distinct element in the native UI section. And then again, the bridge that I'm talking about allows for bi-directional communication between the JavaScript side and the native side or the native side and the JavaScript side. And so it's those sorts of pieces that have now afforded us all of the freedom to tinker, and we don't need to re-release where we're like, oh, we want to add a new weird button that does a thing in the WebView when you click on a button outside the WebView. We now just have that built-in. STEPH: Yeah, I really like the flexibility that you're describing. When you promoted those elements to be more native-friendly so, like the navigation or the footer or the little get help chat, is that something that then your team implemented in like the iOS or the Kotlin repo? Okay, I see you nodding, but other people can't see that, so...[laughs] CHRIS: Yeah. I was going to also say the words, but yes, those are now implemented as native parts. So the thing that we built isn't purely agnostic decoupled. It is Sagewell-specific; a lot of it is low-level. Like, let's say we want to wrap an Inertia app in a native mobile wrapper. Like, 90% of the code in it is that, but then there are little bits that are like, and put a button up there. 
And that button is the Sagewell button. And so it's not entirely decoupled from us. But it mostly is this agnostic bridge to connect things together. STEPH: Yeah, the way you're describing it sounds really nice in terms of you're able to get out the app quickly and have a mobile app quickly that works on both platforms, and then you're still able to deploy changes without having to push that. That was always my biggest mental, or emotional hurdle with the idea of mobile development was the concept of that you really had to batch everything together and then submit it for review and approval and then get it released. And then you got to hope people then upgrade and get the newest version. And it just felt like such a process, not that I ever did much of it. This was all just even watching like the mobile team and all the work that they had to do. And I had sympathy pains for them. But the fact that this approach allows you to avoid a lot of that but still have some nice, customized, more native elements. Yeah, I'm basically just recapping everything you said because I like all of it. CHRIS: Well, thank you, friend. Like I said, I've really enjoyed it, and similar to you, I'm addicted to the feedback loop of the web. It's beautiful. I can deploy ten times or however many I want. Anytime I want, I can push out a new version. And that ability to iterate, to test, to explore, to tweak, to not have to do as much formal testing upfront because I'm terrified that if a bug sneaks out, then, it'll take me two weeks to address it; it just is so, so freeing. And so to give that up moving into a native context. Perhaps I'm fighting too hard to hold on to my dream of the ability to rapidly iterate. But I really do believe in that and especially for where we're at as an organization right now. But, and a critical but here, again, it's a trade-off like anything else. And recently, I happened to be out about in the town, and I decided, oh, you know what? Let me open up the app. Let me see what it's like. And I wasn't on great internet. And so I open the app, and it loads because, you know, it's a native app, so it pops up. But then the thing that actually happened is a loading spinner in the middle of the screen and sort of a gray nothing for a little while until the server request to fetch the necessary UI elements to render the login screen appeared. And that experience was not great. In particular, that experience is core to the experience of using the app every single time. Every time you use it, you're going to have a bad time because we're re-downloading that UI element. And there's caching, and there's things that could happen there to help with that. But fundamentally, that experience is going to be a pretty common one. It's the first thing that you experience when you're opening the app. And so I noticed that and I chatted with the team, and I was like, hey, I feel like this is actually something that fixing this I think would really fundamentally move us along that spectrum of like, we've definitely made some trade-offs here. But overall, it feels snappy and like a native app. And so, we opted to prioritize work on a native login screen for both platforms. This also allows us to more deeply integrate. So particularly, we're going to get biometric logins like fingerprints or face scans, or whatever it is. But critically, it's that experience of like, I open the Sagewell native app on my iOS phone, and then it loads immediately. 
And then I show it my face like we do these days, and then it opens up and shows me everything that I want to see inside of it. And it's that first-run experience that feels worth the extra effort and the constraints. Because now that it's native mobile, that means in order to change it, we have to do a deploy, not a deploy, release; that's what they call it in the native world. [laughs] You can tell I'm well-versed in this ecosystem. But yeah, we're now choosing that trade-off. And what I really liked about this sort of set of things like the feature that we were able to just accidentally get for free on native because that's how this thing is built. And then likewise, the choice to opt into a fully native login screen like having that lever, having that control over I'm going to optimize for iteration generally, but where it's important, we want to optimize for performance and experience. And now we have this little slider that we can go back and forth. And frankly, we could choose to screen by screen just slowly replace everything in the app with true native WebViews backed by APIs. And we could Ship of Theseus style replace every element of the app with true native mobile things until none of the old bridge code exists. And our users, in theory, would never know. Having that flexibility is really nice given the trade-off and the choice that we've made. STEPH: You said a word there that I missed. You said ship something style. CHRIS: Ship of Theseus. STEPH: What is that? CHRIS: It's like an old biblical story, I want to say, but it's basically the idea of, like, you have the ship. And then some boards start to rot out, so replace those boards. And then the mast breaks, you replace the mast. And slowly, you've replaced every element on the ship. Is it still the same ship at that point? And so it's sort of a philosophical question. So if we replace every single view in this app with a native view, is it still the same map? Philosophers will philosophize about it forever, but whatever. As long as we get to keep iterating and shipping software, then I'm happy. STEPH: [laughs] Y'all philosophize. That's that word, right? CHRIS: Yeah. STEPH: And do your philosopher thing. We'll just keep building and shipping. CHRIS: I don't know if I pronounced it right. It's like either Theseus or Theseus, and I'm sure I said the wrong one. And now that I've said the other, I'm sure both of them are wrong somehow. It's like a USB where there's up and down, and yet somehow it takes three tries. So anyway, I may have mispronounced it, and I may be misattributing it, but that's the idea I was going for. STEPH: Well, given I wasn't even familiar with the word until just now, I'm going to give both pronunciations a thumbs up. I also really like how you decided that for the login screen, that's the area that you don't want people to wait because I agree if you're opening an application or opening...maybe it's the first time, maybe it's the 100th time. Who knows? But that feels important. Like, that needs to be snappy. I need to know it's responsive. And it builds trust from the minute that I clicked on that application. And if it takes a long time, I just immediately I'm like, what are y'all doing? Are y'all real? Do you know what you're doing over there? So I like how you focused on that experience. But then once I log in, like if something is slow to log me in, I will make up excuses for the application all day where I'm like, well, you know, maybe it's my connection. It's fine. 
I can wait for the next screen to load. That feels more reasonable. And it doesn't undermine my trust nearly as much as when I first click on the app. So that feels like a really nice trade-off as well, or at least a nice area that you've improved while still having those other trade-offs and benefits that you mentioned. CHRIS: To highlight it, you used a phrase there which I really liked. Like, it's building trust. If something's a little bit off in that first run experience every single time, then it kind of puts a question in the back of your head, maybe not even consciously. But you're just kind of looking at it, and you're like, what are you doing there? What are you up to, friend? Humans say to the apps they use on their phone. That's normal, right? When you talk... But to name it, we've also done a round of performance work throughout the app. And so there are a couple of layers to it. But it was work that we had planned for a while, but we kept deferring. But now that we're seeing more usage of the native apps, the native apps experience the same surface area of performance stuff but all the more so because they may be on degraded network connections, et cetera. And so this is another example where this whole thing kind of pays off. The performance work that we did affects everything. It affects the web. It's the same under the hood. It's let's reduce the network requests that we're making in the payloads that we're sending, particularly the network requests to upstream things, so like the banking partner that we're using and those APIs, like, collating all the data to then render the screen. Because of Inertia, we only have a single sort of back and forth conversation via the API as opposed to I think it's pretty common to have like seven different APIs and four different spinners on the screen. We're not doing that, none of that on my watch. [chuckles] But we minimize the background calls to the other parties that we're integrating with. And then, we reduce the payload of data that we're sending on each request. And each of those were like, we had to think about things and tweak and poke, but again it's uniform. So mobile web has that now, desktop web has that now. Android, iOS, they all just inherited it sort of that just happened one day without a deploy or release, without a release of either of the native mobile apps. We did deploy to the web to make that happen, but that's easy. I can do that a bunch of times a day. One last thing I want to share as we're on this topic of trade-offs and levers, there was a really great conference talk that I watched recently, which was Ryan Florence of remix.run also React Router fame if you're familiar with him from that. But he was talking about the most recent version of Remix, which is their meta framework on top of React. But they've done some really interesting stuff around processing data, fetching data, when and how to sequence that. And again, that thing that I talked about of nine different loading spinners on the screen, Remix is taking a very different approach but is targeting that same thing of like, that's not great for user experience. Cumulative layout shift being the actual number that you can monitor for this. But in that talk, there are features that they've added to Remix as a framework where you can just decide, like, do we wait for this or do we not? Do we make sure we have all of the data, or do we say, you know what? Actually, this is going to be below the fold. 
So it's okay to defer loading this until after we send down the first payload. And then we'll kick in, and we'll do it from the client-side. But it's this wonderful feature of the framework that they're adding in where there's basically just a keyword that you can add to sort of toggle that behavior. And again, it's this idea of like trade-offs. Are we okay with more layout shift, or are we okay with more waiting? Which is it that we're going to optimize for? And I really love that idea of putting that power very simply in the hands of the developers to make those trade-off decisions and optimize over time for what's important. So we'll share a link to that talk in the show notes as well. But it was very much in the same space of like, how do I have the power to decide and to change my mind over time? That's what I want. But yeah, with that, I think that's enough of me updating on the mobile app. I'll continue to share as new things happen. But again, I'm at this point very happy with where we're at. So yeah, it's been fun. But yeah, what else is up in your world? STEPH: I have a dear Gerrit message that I wrote earlier, so I want to share that with you. Gerrit is the system that we're using for when we push up code changes that then manages very similar in the competitive space of like GitHub and GitLab, and Bitbucket. And so the team that I'm working with we are using Gerrit. And Gerrit and I, you know, we get along for the most part. We've managed to have a working relationship. [chuckles] But this week, I wrote my dear Gerrit letter is that I really miss being able to tell a story with my commit messages. That is the biggest pain that I'm feeling right now. So for anyone that's less familiar or if you already are familiar with Gerrit, each change that Gerrit shows represents a single commit that's under review. And each change is identified by a Change-Id. So the basic concept of Gerrit is that you only have one commit per review. So if you were to translate that to GitHub terminology, every pull request is only going to have one commit, and so you really can't push up multiple. And so, where that has been causing me the most pain is I miss being able to tell a story. So like even simple stories that are like, hey, I removed something that's not used. I love separating that type of stuff into its own commit just so then people can see that as they're going through review. Now, before I merge, I'm likely to squash, and that doesn't feel important that it needs to be its own commit. That's really just for the reviewer so they can follow along for the changes. But the other one, I can slowly get over that one. Because essentially, the way I get around that is then when I do push up my code for review, is I then go through my change request, and then I just add comments. So I will highlight that line and say, "Hey, I'm removing this because it's not in use." And so, I found a workaround for that one. But the one I haven't found a workaround for is that I don't push up my local work very often because I love having lots of local, tiny, green commits so that way I can know the progress that I'm at. I know where I'm headed. Also, I have a safe space to roll back to, but then that means that I may have five or six commits that I have locally, but I haven't pushed up somewhere. And that is bothering me more and more hour by hour the more I think about it that I can't push stuff up because it makes me nervous. 
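For listeners who have not used Gerrit, the Change-Id Steph mentions is a trailer that Gerrit's commit-msg hook appends to each commit message; Gerrit groups patch sets by that trailer, which is why one review maps to exactly one commit. An illustrative, entirely made-up commit message:

Add redemption code validation

Reject codes that have already been used before applying a discount.

Change-Id: I7c9a1b2d3e4f5061728394a5b6c7d8e9f0a1b2c3

Pushing a second commit with its own Change-Id opens a second review rather than adding to the first one, which is the behavior being worked around here.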
Because, I mean, usually, at least by the end of the day, I push everything up, so it's stored somewhere. And I don't have to worry about that work disappearing. Now I am working on a dev machine. So there is that aspect of it's technically...it's not even on my local machine. It is stored somewhere that I should still be able to access. CHRIS: What's a dev machine? The way you're saying it, it sounds like it's a virtual machine, not like a laptop. But what's a dev machine? STEPH: Good question. So the dev machine is a remote server or remote machine that then I am accessing, and then that's where I'm performing. That's where I'm writing all of my work. And then that's also kind of the benefit is everything is not local; it's controlled by the team. So then that also means that other teams, other individuals can help set up these environments for future developers. So then you have that consistency across everyone's working with the same Rails version, or gems, or has access to the same tools. So in that sense, my work isn't just on my laptop because then that would really worry me because then I've got nowhere...it's not backed up anywhere. So at least it is somewhere it's being stored that then could be accessed by someone. So actually, now, as I'm talking this through, that does help alleviate my concern about this a bit. [laughs] But I still miss it; I still miss being able to just push up my work and then have multiple commits. And I looked into it because I was like, well, maybe I'm misunderstanding something about Gerrit, and there's a way around this. And that's still always a chance. But from the research that I've done, it doesn't seem to be. And there are actually two very fiery takes that I saw that I have to share because they made me laugh. When I was Googling, the question of like, "Can I push up multiple commits to one single Gerrit CR? Or is there just a way to, like, can I have this concept of like a branch and then I have many commits, but then I turn it into one CR? Whatever the world would give me. What do they have? [laughs] I'm laughing just looking at this now. One of the responses was, have you tried squashing your commits into one commit? And I was like, [laughs] "Yeah, that's not what I had in mind, but sure." And then the other one, this is the more fiery take. They were very defensive about Gerrit, and they wrote that "People who don't like Gerrit usually just hack shit together. They cut corners and love squashing commits or throwing away history. And those people hate Gerrit. Developers who care love it. It's definitely possible and easy to produce agile software." And I just...that made me laugh. I was like, cool, I'm a developer that cuts corners and loves squashing commits. [laughs] CHRIS: So you don't care is what that take says. STEPH: I'm a developer who does not care. CHRIS: You know, Steph, I've worked with you for a while. And I've been looking for the opportunity to have this hard conversation with you. But I just wish you cared a little more about the software that you're writing, about the people that you're working with, about the commits that you're authoring. I just see it in every facet of your work. You just don't care. To be very clear for anyone listening at home, that is the deepest of sarcasm that I can make. Steph cares so very much. It's one of the things that I really enjoy about you. STEPH: I mean, we had the episode about toxic traits. 
This would have been the perfect time to confront me about my lack of caring about software and the processes that we have. So winding down on that saga, it seems to be the answer is no, friend; I cannot push up multiple commits. Oh, I tried to hack it. I am someone that tries to hack shit together because I tried to get around it just to see what would happen. [laughs] Because the docs had suggested that each change is identified by a Change-Id. And I was like, hmm, so what if there were two commits that had the same Change-Id, would Gerrit treat those as patch sets? Because right now, when you push up a change, you can see all the different patch sets, so that's nice. So that's a nice feature of Gerrit as you can see the history of, like, someone pushed up this change. They took in some feedback. They pushed up a new change. And so that history is there for each push that someone has provided. And I wondered maybe if they had the same Change-Id that then the patch sets would show the first commit and then the second commit. And so I manually altered the commits two of them to reference the same Change-Id. And I have to say, Gerrit was on to me because they gave me a very nice error message that said, "Same Change-Id and multiple changes. Squash the commits with the same Change-Ids or ensure Change-Ids are unique for each commit. And I thought, dang, Gerrit, you saw me coming. [laughs] So that didn't work either. I'm still in a world of where I now wait. I wait until I'm ready for someone to review stuff, and I have to squash everything, and then I go comment on my CRs to help out reviewers. CHRIS: I really like the emotional backdrop that you provided here where you're spending a minute; you're like, you know what? Maybe it's me. And there's the classic Seymour Skinner principle from The Simpsons. Am I out of touch? No, it's the children who are wrong. [laughs] And I liked that you took us on a whole tour of that. You're like, maybe it's me. I'll maybe read up. Nope, nope. So yeah, that's rough. There's a really interesting thing of tools constraining you. And then sometimes being like, I'm just going to yield control and back away and accept this thing that doesn't feel right to me. Like, Prettier does a bunch of stuff that I really don't like. It shapes code in a way, and I'm just like, no, that's not...nope, you know what? I've chosen to never care about this again. And there's so much utility in that choice. And so I've had that work out really well. Like with Prettier, that's a great example whereby yielding control over to this tool and just saying, you know what? Whatever you produce, that is our format; I don't care. And we're not going to talk about it, and that's that. That's been really useful for myself and for the teams that I'm on to just all kind of adopt that mindset and be like, yeah, no, it may not be what I would choose but whatever. And then we have nice formatted code; it's great. It happens automatically, love it. But then there are those times where I'm like; I tried to do that because I've had success with that mindset of being like, I know my natural thing is to try and micromanage and control every little bit of this code. But remember that time where it worked out really well for me to be like, I don't care, I'm just going to not care about this thing? And I try to not care about some stuff, which it sounds like that's what you're doing right here. [laughs] And you're like, I tried to not care, but I care. I care so much. 
And now you're in that [chuckles] complicated space. So I feel for you, Steph. I'm sorry you're in that complicated space of caring so much and not being able to turn that off [laughs] nor configure the software to do the thing you want. STEPH: I appreciate it. I should also share that the team that I'm working with they also don't love this. Like, they don't love Gerrit. So when I shared in the Slack channel my dear Gerrit message, they're both like, "Yeah, we feel you. [laughs] Like, we're in the same spot," which was also helpful because I just wanted to validate like, this is the pain I'm feeling. Is someone else doing something clever or different that I just don't know about? And so that was very helpful for them to say, "Nope, we feel you. We're in the same spot. And this is just the state that we're in." I think they have started transitioning some other repos over to GitLab and have several repos in Gitlab, but this one is still currently using Gerrit. So they very much commiserate with some of the things that I'm feeling and understand. And this does feel like one of those areas where I do care deeply. And frankly, this is one of those spaces that I do care about, but it's also like, I can work around it. There are some reasonable things that I can do, and it's fine as we just talked through. Like, the fact that my commits are not just locally on my machine already makes me feel better now that I've really processed that. So there are lower risks. It is more of just like a workflow. It's just, you know, it's crushing my work vibe. CHRIS: Harshing your buzz. STEPH: In the great words of Queen Elsa, I gotta let it go. This is the thing I'm letting go. So that's kind of what's going on in my world. What else is going on in your world? CHRIS: Well, first and foremost, fantastic reference and segue. I really liked that. But yeah, let's see, [laughs] what else is going on in my world? We had an interesting thing happen last week. So we had an outage on the platform last week. And then we had an incident review today, so a formal sort of post-mortem incident review. There are a couple of different names that folks have given to these. But this is a practice that we want to build within our engineering culture is when stuff goes wrong, we want to make sure that we have meaningful conversations around to try to address the root causes. Ideally, blameless is a word that gets used often in this context. And I've heard folks sort of take either side of that. Like, it's critical that it's blameless so that it doesn't feel like it's an attack. But also, like, I don't know, if one person did something, we should say that. So finding that gentle middle ground of having honest, real conversations but in a context of safety. Like, we're all going to make mistakes. We're all going to ship bugs; let's be clear about that. And so it's okay to sort of...anyway, that's about the process. We had an outage. The specific outage was that we have introduced a new process. This is a Sidekiq process to work off a specific queue. So we wanted that to have discrete treatment. That had been running, and then it stopped running; we still don't know why. So we never got to the root-root cause. Well, we know what the mechanism was, which was the dyno count for that process was at zero. And so, eventually, we found a bunch of jobs backed up in the Sidekiq admin. We're like, that's weird. And then, we went over to Heroku's configuration dashboard. And we saw, huh, that's weird. There are zero dynos processing this. 
That wasn't true yesterday. But unfortunately, Heroku doesn't log or have an audit trail around changes to those process counts. It's just not available. So that's unfortunate. And then the actual question of like, how did this happen? It probably had to be someone on the team. So there is like, someone did a thing. But that is almost immaterial because, again, people are going to do things, bugs will get shipped, et cetera. So the conversation very quickly turned to observability and understanding. I think we've done a pretty good job of instrumenting error reporting and being quite responsive to that, making sure the signal-to-noise ratio is very actionable. So if we see a bug or a Sentry alert come through, we're able to triage that pretty quickly, act on it where it is a real bug, understand where it's a bit of noise in the system, that sort of thing. But in this case, there were no errors. There was no Sentry. There was nothing; there was the absence of something. And so it was this really interesting case of that's where observability, I think, can really come in and help. So the idea of what can we do here? Well, we can monitor the count of jobs backed up in Sidekiq queue. That's one option. We could do some threshold alerting around the throughput of processed events coming from this other backend. There are a bunch of different ways, but it basically pushed us in the direction of doubling down and reinforcing the foundation of our observability within the platform. So we're just kicking that mini-project off now, but it is something we're like, yeah, we feel like we could add some here. In particular, we recently added Datadog to the stack. So we now have Datadog to aggregate our logs and ideally do some metric analysis, those sort of things, build some dashboards, et cetera. I haven't explored Datadog much thus far. But my sense is they've got the whiz-bang things that we need here. But yeah, it was an interesting outage. That wasn't fun. The incident conversation was actually a good conversation as a team. And then the outcome of like, how do we double down on observability? I'm actually quite excited for. STEPH: This is a fun moment for me because I have either joined teams that didn't have Datadog or have any of that sort of observability built into their system or that sort of dashboard that people go to. Or I've joined teams, and they already have it, and then nobody or people rarely look at it. And so I'm always intrigued between like what's that catalyst that then sparked a team to then go ahead and add this? And so I'm excited to hear you're in that moment of like, we need more observability. How do we go about this? And as soon as you said Datadog, I was like, yeah, that sounds nice because then it sounds like a place that you can check on to make sure that everything is still running. But then there's still also that manual process where I'm presuming unless there's something else you have in mind. There's still that manual process of someone has to check the dashboard; someone then has to understand if there's no count, no squiggly lines, that's a bad thing and to raise a concern. So I'm intrigued with my own initial reaction of, like, yeah, that sounds great. But now I'm also thinking about it still adds a lot of...the responsibility is still on a human to think of this thing and to go check it. Versus if there's something that gets sent to someone to alert you and say like, "Hey, this queue hasn't been processed in 48 hours. 
There may be a concern that actually feels nicer." It feels safer. CHRIS: Oh yeah, definitely. I think observability is this category of tools and workflows and whatnot. But I think what you're describing of proactive alerting that's the ideal. And so it would be wonderful if I never had to look at any of these tools ever. And I just knew if I got, let's say, it's PagerDuty connected up whatever, and I got a push notification from PagerDuty saying, "Hey, go look at this thing." That's all I ever need to think about. It's like, well, I haven't gotten a PagerDuty in a while, so everything must be fine, and having a deep trust in that. Similar to like, if we have a great test suite and it's green, I feel confident deploying the sort of absence of an alert being the thing that I can trust. But right now, we're early enough in this journey that I think what we need to do is stand up a bunch of these different graphs and charts and metric analysis and aggregations and whatnot, and then start to squint at it for a while and be like, which of these would I be really concerned if it started to wibble? And then you can figure the alerting around said wibble rate. And that's the dream. That's where we want to get to, but I think we've got to crawl, walk, run on this. So it'll be an adventure. This is very much the like; we're starting a thing. I'll tell you about it more when we've done it. But what you're describing is exactly what we want to get to. STEPH: I love wibble rate. That's my new measurement I'm going to start using for everything. It's funny, as you're bringing this up, it's making me think about the past week that Joël Quenneville and I have had with our client work. Because a somewhat similar situation came up in regards where something happened, and something was broken. And it seemed it was hard to define exactly what moment caused that to break and what was going on. But it had a big impact on the team because it essentially meant none of the bills were going through. And so that's a big situation when you got 100-plus people that are pushing up code and expecting some of the build processes to run. But it was one of those that the more we dug into it, the more it seemed very rare that it would happen. So, in this case, as a sort of a juxtaposition to your scenario, we actually took the opposite approach of where we're like; this is rare. But we did load up a lot of contexts. Actually, I was thinking back to the advice that you gave me in a previous episode where I was talking about at what point do you dig in versus try to stay at surface level? And this was one of those, like, we've spent a couple of days on getting context for this and understanding. So it felt really important and worthwhile to then invest a little bit more time to then document it. But then we still went with the simplest approach of like, this is weird. It shouldn't happen again. We think we understand it but then let's add a little bit of documentation or wiki page around like, hey, if you do run into this, here are some steps that will fix everything. And then, if you need to use this, let somebody know because this is so odd it shouldn't happen. So we took that approach in this case where we didn't increase the observability. It was more like we provided a fire extinguisher very close to the location in case it happens. And so that way, it's there should the need arise, but we're hoping it just never gets used. We're also in the process of changing how a lot of that logic works. 
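As a rough sketch of the queue-level signal they are describing, assuming Sidekiq plus the dogstatsd-ruby gem and an invented job name, something like this could run on a schedule and feed a Datadog monitor:

# Hypothetical recurring job that reports Sidekiq queue health to Datadog.
require "sidekiq/api"
require "datadog/statsd"

class QueueHealthCheckWorker
  include Sidekiq::Worker

  def perform
    statsd = Datadog::Statsd.new("localhost", 8125)

    Sidekiq::Queue.all.each do |queue|
      # How many jobs are waiting, and how long the oldest one has been waiting.
      statsd.gauge("sidekiq.queue.size", queue.size, tags: ["queue:#{queue.name}"])
      statsd.gauge("sidekiq.queue.latency", queue.latency, tags: ["queue:#{queue.name}"])
    end
  ensure
    statsd&.close
  end
end

A Datadog monitor that pages when sidekiq.queue.latency stays above a threshold turns the dashboard into the proactive, PagerDuty-style alerting Chris describes, rather than a graph someone has to remember to squint at.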
So we didn't really want to optimize for observability into a system that is actively being changed because it should look very different in upcoming months. But overall, I love the conversations that you bring about observability, and I'm excited to hear about what wibble rates you decide to add to your Datadog dashboard. CHRIS: There's a delicate art and science to the selection of the wibble rates. So I will certainly report back as we get into that work. But with that, shall we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph and Chris are recording together! Like, in the same room, physically together. Chris talks about slowly evolving the architecture in an app they're working on and settling on directory structure. Steph's still working on migrating unit tests over to RSpec. They answer a listener question: "As senior-level developers, how do you set goals to ensure that you keep growing?" This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Faking External Services In Tests With Adapters (https://thoughtbot.com/blog/faking-external-services-in-tests-with-adapters) Testing Third-Party Interactions (https://thoughtbot.com/blog/testing-third-party-interactions) Jen Dary - On Future Goals (https://www.beplucky.com/on-future-goals/) Charity Majors - The Engineer Manager Pedulum (https://charity.wtf/2017/05/11/the-engineer-manager-pendulum/) Charity Majors Bike Shed Episode (https://www.bikeshed.fm/302) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What is new in my world? Actually, this episode feels different. There's something different about it. I can't quite put my finger on it. I think it may be that we're actually physically in the same room recording for the first time in two years and a little bit more, which is wild. STEPH: I can't believe it's been that long. I feel like it wasn't that long ago that we were in The Bike Shed...oh, I said The Bike Shed studio. I'm being very biased. Our recording studio [laughs] is the more proper description for it. Yeah, two and a half years. And we tried to make this happen a couple of months ago when I was visiting Boston, and then it just didn't work out. But today, we made it. CHRIS: Today, we made it. Here we are. So hopefully, the audio sounds great, and we get all that more richness in conversation because of the physical in-person manner. We're trying it out. It'll be fun. But let's see, to the normal tech talk and nonsense, what's new in my world? So we've been slowly evolving the architecture in the app that we're working on. And we settled on something that I kind of like, so I wanted to talk about it, directory structure, probably the most interesting topic in the world. I think there's some good stuff in here. So we have the normal stuff. There are app models, app controllers, all those; those make sense. We have app jobs which right now, I would say, is in a state of flux. We're in the sad place where some things are application records, and some things are Sidekiq workers. We have made the decision to consolidate everything onto Sidekiq workers, which is just strictly more powerful as to the direction we're going to go. But for right now, I'm not super happy with the state of app jobs, but whatever, we have that. But the things that I like so we have app commands; I've talked about app commands before. Those are command objects. They use dry-rb do notation, and they allow us to sequence a bunch of things that all may fail, and we can process them all in a much more reasonable way. It's been really interesting exploring that, building on it, introducing it to new developers who haven't worked in that mode before. 
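A bare-bones sketch of a command in that style, assuming dry-monads with Do notation (the class, steps, and models here are invented, not the actual application code):

# Hypothetical app/commands/create_account.rb: each step returns Success or
# Failure, and the first Failure short-circuits the rest of the sequence.
require "dry/monads"

class CreateAccount
  include Dry::Monads[:result, :do]

  def call(params)
    attrs   = yield validate(params)
    account = yield persist(attrs)
    yield notify(account)

    Success(account)
  end

  private

  def validate(params)
    params[:email].present? ? Success(params) : Failure(:missing_email)
  end

  def persist(attrs)
    Success(Account.create!(attrs))
  rescue ActiveRecord::RecordInvalid => e
    Failure(e.record.errors)
  end

  def notify(account)
    AccountMailer.welcome(account).deliver_later
    Success(account)
  end
end

The caller can then branch on Success or Failure once, instead of sprinkling conditionals through a controller.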
And everyone who's come into the project has both picked it up very quickly and enjoyed it, and found it to be a nice expressive mode. So app commands very happy with that. App queries is another one that we have. We've talked about this before, query objects. I know we're a big fan. [laughs] I got a golf clap across the room here, which I could see live in person. It was amazing. I could feel the wind wafting across the room from the golf clap. [chuckles] But yeah, query objects, they're fantastic. They take a relation, they return a relation, but they allow us to build more complex queries outside of our models. The new one, here we go. So this stuff would all normally fall into app services, which services don't mean anything. So we do not have an app services directory in our application. But the new one that we have is app clients. So these are all of our HTTP clients wrapping external third parties that we're interacting with. But with each of them, we've taken a particular structure, a particular approach. So for each of them, we're using the adapter pattern. There's a blog post on the Giant Robots blog that I can point to that sort of speaks to the adapter pattern that we're using here. But basically, in production mode, there is an HTTP backend that actually makes the real requests and does all that stuff. And in test mode, there is a test backend for each of these clients that allows us to build up a pretty representative fake, and so we're faking it up before the HTTP layer. But we found that that's a good trade-off for us. And then we can say, like, if this fake backend gets a request to /users, then we can respond in whatever way that we want. And overall, we found that pattern to be really fantastic. We've been very happy with it. So it's one more thing. All of them were just gathering in-app models. And so it was only very recently that we said no, no, these deserve their own name. They are a pattern. We've repeated this pattern a bunch. We like this pattern. We want to even embrace this pattern more, so long live app clients. STEPH: I love it. I love app clients. It's been a while since I've been on a project that had that directory. But there was a greenfield project that I was working on. I think it might have been I was working with Boston.rb and working on giving them a new site or something like that and introduced app clients. And what you just said is perfect in terms of you've identified a pattern, and then you captured that and gave it its own directory to say, "Hey, this is our pattern. We've established it, and we really like it." That sounds awesome. It's also really nice as someone who's new to a codebase; if I jump in and if I look at app clients, I can immediately see what are the third parties that we're working with? And that feels really nice. So yeah, that sounds great. I'm into it. CHRIS: Yeah, I think it really was the question of like, is this a pattern we want to embrace and highlight within the codebase, or is this sort of a duplication but irrelevant like not really that important? And we decided no, this is a thing that matters. We currently have 17 of these clients, so 17 different third-party external things that we're integrating with. So for someone who doesn't really like service-oriented architecture, I do seem to have found myself in a place. But here we are, you know, we do what we have to with what we're given. But yes, 17 and growing our app clients. STEPH: That is a lot. [laughs] My eyes widened a bit when you said 17. 
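Circling back to the query objects described a moment ago, a minimal one in that relation-in, relation-out shape might look like this (the model, column, and scope are placeholders):

# Hypothetical app/queries/recently_active_users_query.rb
class RecentlyActiveUsersQuery
  def initialize(relation = User.all)
    @relation = relation
  end

  # Takes a relation and returns a relation, so callers can keep composing.
  def call(since: 30.days.ago)
    @relation.where("last_seen_at >= ?", since).order(last_seen_at: :desc)
  end
end

# e.g. RecentlyActiveUsersQuery.new(User.admins).call(since: 7.days.ago)
# (User.admins is an invented scope, shown only to illustrate chaining)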
I'm curious because you highlighted that app services that's not really a thing. Like, it doesn't mean anything. It doesn't have the same meaning of the app queries directory or app commands or app clients where it's like, this is a pattern we've identified, and named, and want to propagate. For app services, I agree; it's that junk drawer. But I guess in some ways...well, I'm going to say something, and then I'm going to decide how I feel about it. That feels useful because then, if you have something but you haven't established a pattern for it, you need a place for it to go. It still needs to live somewhere. And you don't necessarily want to put it in app models. So I'm curious, where do you put stuff that doesn't have an established pattern yet? CHRIS: It's a good question. I think it's probably app models is our current answer. Like, these are things that model stuff. And I'm a big believer in the it doesn't need to be an application record-backed object to go on app models. But slowly, we've been taking stuff out. I think it'd be very common for what we talk about as query objects to just be methods in the respective application record. So the user record, as a great example, has all of these methods for doing any sort of query that you might want to do. And I'm a fan of extracting that out into this very specific place called app queries. Commands are now another thing that I think very typically would fall into the app services place. Jobs naming that is something different. Clients we've got serializers is another one that we have at the top level, so those are four. We use Blueprinter within the app. And again, it's sort of weird. We don't really have an API. We're using Inertia. So we are still serializing to JSON across the boundary. And we found it was useful to encapsulate that. And so we have serializers as a directory, but they just do that. We do have policies. We're using Pundit for authorization, so that's another one that we have. But yeah, I think the junk drawerness probably most goes to app models. But at this point, more and more, I feel like we have a place to put things. It's relatively clear should this be in a controller, or should this be in a query object, or should this be in a command? I think I'm finding a place of happiness that, frankly, I've been searching for for a long time. You could say my whole life I've been searching for this contented state of I think I know where stuff goes in the app, mostly, most of the time. I'm just going to say this, and now that you've asked the excellent question of like, yeah, but no, where are you hiding some stuff? I'm going to open up models. Next week I'm going to be like, oh, I forgot about all of that nonsense. But the things that we have defined I'm very happy with. STEPH: That feels really fair for app models. Because like you said, I agree that it doesn't need to be ActiveRecord-backed to go on app models. And so, if it needs to live somewhere, do you add a junk drawer, or do you just create app models and reuse that? And I think it makes a lot of sense to repurpose app models or to let things slide in there until you can extract them and let them live there until there's a pattern that you see. CHRIS: We do. There's one more that I find hilarious, which is app lib, which my understanding...I remember at one point having one of those afternoons where I'm just like, I thought stuff works, but stuff doesn't seem to work. I thought lib was a directory in Rails apps. 
And it was like, oh no, now we autoload only under the app. So you should put lib under app. And I was just like, okay, whatever. So we have app lib with very little in it. [laughs] But that isn't so much a junk drawer as it is stuff that's like, this doesn't feel specific to us. This goes somewhere else. This could be extracted from the app. But I just find it funny that we have an app lib. It just seems wrong. STEPH: That feels like one of those directories that I've just accepted. Like, it's everywhere. It's like in all the apps that I work in. And so I've become very accustomed to it, and I haven't given it the same thoughtfulness that I think you have. I'm just like, yeah, it's another place to look. It's another place to go find some stuff. And then if I'm adding to it, yeah, I don't think I've been as thoughtful about it. But that makes sense that it's kind of silly that we have it, and that becomes like the junk drawer. If you're not careful with it, that's where you stick things. CHRIS: I appreciate you're describing my point of view as thoughtfulness. I feel like I may actually be burdened with historical knowledge here because I worked on Rails apps long, long ago when lib didn't go in-app, and now it does. And I'm like, wait a minute, but like, no, no, it's fine. These are the libraries within your app. I can tell that story. So, again, thank you for saying that I was being thoughtful. I think I was just being persnickety, and get off my lawn is probably where I was at. STEPH: Oh, full persnicketiness. Ooh, that's tough to say. [laughs] CHRIS: But yeah, I just wanted to share that little summary, particularly the app clients is an interesting one. And again, I'll share the adapter pattern blog post because I think it's worked really well for us. And it's allowed us to slowly build up a more robust test suite. And so now our feature specs do a very good job of simulating the reality of the world while also dealing with the fact that we have these 17 external situations that we have to interact with. And so, how do you balance that VCR versus other things? We've talked about this a bunch of times on different episodes. But app clients has worked great with the adapter pattern, so once more, rounding out our organizational approach. But yeah, that's what's up in my world. What's up in your world? STEPH: So I have a small update to give. But before I do, you just made me think of something in regards to that article that talks about the adapter pattern. And there's also another article that's by Joël Quenneville that's testing third-party interactions. And he made me reflect on a time where I was giving the RSpec course, and we were talking about different ways to test third-party interactions. And there are a couple of different ways that are mentioned in this article. There are stub methods on adapter, stub HTTP request, stub request to fake adapter, and stub HTTP request to fake service. All that sounds like a lot. But if you read through the article, then it gives an example of each one. But I've found it really helpful that if you're in a space that you still don't feel great about testing third-party interactions and you're not sure which approach to take; if you work with one API and apply all four different strategies, it really helps cement how to work through that process and the different benefits of each approach and the trade-offs. And we did that during the RSpec course, and I found it really helpful just from the teacher perspective to go through each one. 
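To make one of those strategies concrete, here is a rough sketch of the fake-adapter idea in RSpec, in the spirit of the app clients described earlier; the client, adapter, and endpoint are invented for illustration, and the production HttpAdapter is assumed but not shown.

# Hypothetical app/clients/payments_client.rb: the client talks to an adapter,
# so production injects an HTTP backend and tests inject a fake.
class PaymentsClient
  def initialize(adapter: HttpAdapter.new)
    @adapter = adapter
  end

  def customer(id)
    @adapter.get("/customers/#{id}")
  end
end

# Test-only fake that fails loudly on anything it wasn't told about.
class FakePaymentsAdapter
  def initialize(responses = {})
    @responses = responses
  end

  def get(path)
    @responses.fetch(path) { raise "unstubbed request: #{path}" }
  end
end

RSpec.describe PaymentsClient do
  it "fetches a customer" do
    adapter = FakePaymentsAdapter.new("/customers/42" => { "id" => 42 })
    client = PaymentsClient.new(adapter: adapter)

    expect(client.customer(42)).to eq("id" => 42)
  end
end

The fake sits below the client's public interface but above real HTTP, which is the trade-off point discussed in the episode: representative enough to exercise the client, cheap enough to set up per test.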
And there were some great questions and discussions that came out of it. So I wanted to put that plug out there in the world that if you're struggling testing third-party interactions, we'll include a link to this article. But I think that's a really solid way to build a great foundation of, like, I know how to test a third-party app. Let me choose which strategy that I want to use. Circling back to what's going on in my world, I am still working on migrating unit tests over to RSpec. It's a thing. It's part of the work that I do. [laughs] I can't say it's particularly enjoyable, but it will have a positive payoff. And along that journey, I've learned some things or rediscovered some things. One of them is read expectations very carefully. So when I was migrating a test over to RSpec, I read it as where we expected a record to exist. The test was actually testing that a record did not exist. And so I probably spent an hour understanding, going through the code being like, why isn't this record getting created like I expect it to? And I finally went back, and I took a break, and I went back. And I was like, oh, crap, I read the expectation wrong. So read expectations very carefully. The other one...this one's not learned, but it is reinforced. Mystery Guests are awful. So as I've been porting over the behavior over to RSpec, the other tests are using fixtures, and I'm moving that over to use factorybot instead. And at first, I was trying to be minimal with the data that I was bringing over. That failed pretty spectacularly. So I have learned now that I have to go and copy everything that's in the fixtures, and then I move it over to factorybot. And it's painful, but at least then I'm doing that thing that we talked about before where I'm trying to load as little context as possible for each test. But then once I do have a green test, I'm going back through it, and I'm like, okay, we probably don't care about when you were created. We probably don't care when you're updated because every field is set for every single record. So I am going back and then playing a game of if I remove this line, does the test still pass? And if I remove that line, does the test still pass? And so far, that has been painful, but it does have the benefit of then I'm removing some of the setup. So Mystery Guests are very painful. I've also discovered that custom error messages can be tricky because I brought over some tests. And some of these, I'm realizing, are more user error than anything else. Anywho, I added one of the custom error messages that you can add, and I added it over to RSpec. But I had written an incorrect, invalid statement in RSpec where I was looking...I was expecting for a record to exist. But I was using the find by instead of where. So you can call exist on the ActiveRecord relation but not on the actual record that gets returned. But I had the custom error message that was popping up and saying, "Oh, your record wasn't found," and I just kept getting that. So I was then diving through the code to understand again why my record wasn't found. And once I removed that custom error message, I realized that it was actually because of how I'd written the RSpec assertion that was wrong because then RSpec gave me a wonderful message that was like, hey, you're trying to call exist on this record, and you can't do that. Instead, you need to call it on a relation. 
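To make that find_by/where mix-up concrete, here is a tiny illustrative pair of expectations; the Redemption model and its code attribute are made up for the example.

```ruby
# find_by returns a single record (or nil), and RSpec's `exist` matcher
# needs something that responds to exist?/exists? -- so this blows up.
expect(Redemption.find_by(code: "WELCOME")).to exist

# where returns an ActiveRecord::Relation, which does respond to
# exists?, so this is the expectation that actually works.
expect(Redemption.where(code: "WELCOME")).to exist

# And the negative case, which is the easy one to misread:
expect(Redemption.where(code: "WELCOME")).not_to exist
```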
So I've also learned don't bring over custom error messages until you have a green test, and even then, consider if it's helpful because, frankly, the custom error message wasn't that helpful. It was very similar to what RSpec was going to tell us in general. So there was really no need to add that custom step to it. The final bit that I've learned, or the hurdle that I've been facing, is that migrating test descriptions is hard unless they map over. So RSpec has the context and then a description for it that goes with the test. Test::Unit just has method names instead. So it may be something like test_redemption_codes, and then it runs through the code. Well, as I'm trying to migrate that knowledge over to RSpec, it's not clear to me what we're testing. Okay, we're testing redemption codes. What about them? Should they pass? Should they fail? Should they change? What are they redeeming? There's very little context. So for a lot of my tests, I'm copying that method name, so I know which test I'm focused on and bringing over. And then in the description, it's like, Steph needs help adding a test description, and then I'm pushing that up and then going to the team for help. So they can help me look through to understand, like, what is it that this test is doing? What's important about this domain? What sort of terminology should I include? And that has been working, but I didn't see that coming as part of this whole migrating stuff over. I really thought this might be a little bit more of a copypasta job. And I have learned some trickery is afoot. And it's been more complicated than I thought it was going to be. CHRIS: Well, at a minimum, I can say thank you for sharing all of your hard-learned lessons throughout this process. This does sound arduous, but hopefully, at the end of it, there will be a lot of value and a cleaned-up test suite and all of those sorts of things. But yeah, it's been an adventure you've been on. So on behalf of the people who you are sharing all of these things with, thank you. STEPH: Well, thank you. Yeah, I'm hoping this is very niche knowledge, that there aren't many people in the world doing this exact work, and that it just happens to be what this team needs. So yeah, it's been an adventure. I've certainly learned some things from it, and I still have more to go. So not there yet, but I'm also excited for when we can actually then delete this portion of the build process. And then also, I think, get rid of fixtures because I didn't think about that from the beginning either. But now that I'm realizing that's how those tests are working, I suspect we'll be able to delete those. And that'll be really nice because now we also have another single source of truth in factory_bot as to how valid records are being built. Mid-Roll Ad: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/ bikeshed. That's buildpulse.io/bikeshed. Pivoting just a bit, there's a listener question that I'm really excited for us to dive into. And this listener question comes from Joël Quenneville. Hey, Joël. All right, so Joël writes in, "As senior-level developers, how do you set goals to ensure that you keep growing? How do you know what are high-value areas for you to improve? How do you stay sharp? Do you just keep adding new languages to your tool belt? Do you pull back and try to dig into more theoretical concepts? Do you feel like you have enough tech skills and pivot to other things like communication or management skills? At the start of a dev career, there's an overwhelming list of things that it feels like you need to know all at once. Eventually, there comes a point where you no longer feel like you're drowning under the list of things that you need to learn. You're at least moderately competent in all the core concepts. So what's next?" This is a big, fun, scary question. I really like this question. Thank you, Joël, for sending it in because I think there's so much here that can be discussed. I can kick us off with a few thoughts. I want to first highlight one of the things that...or one of the things that resonates with me from this question is how Joël describes going through and reaching senior status how it can really feel like working through a backlog of features. So as a developer, I want to understand this particular framework, or as a developer, I want to be able to write clear and fast tests, or as a developer, I want to contribute to an open-source project. But now that that backlog is empty, you're wondering what's next on your roadmap, which is where I think that sort of big, fun scariness comes into play. So the first idea is to take a moment and embrace that success. You have probably worked really hard to get where you're at in your career. And there's nothing wrong with taking a pause and enjoying the view and just being appreciative of the fact that you are able to get your work done quickly or that you feel very confident in the work that you're doing. Growth is often very important to our careers, but I also think it's important to recognize when you've achieved certain growth and then, if you want to, just enjoy that and pause. And you're not constantly pushing yourself to the next level. I think that is a totally reasonable and healthy thing to do. 
The second thing that comes to mind is that you're on a Choose Your Own Adventure mode now, so you get to...I would encourage folks, once you've reached this stage, to reflect on where you're at and consider what is your dream? What are your aspirations? Maybe they're related to tech; maybe they're not. And consider where is it that you want to go next? And then, what are the concrete steps that will help you achieve those goals? So there's a really great article by Jen Dary, who's a career coach and owner of Plucky Manager Training, that describes this process. And there's a really great blog post that I'll be sure to include a link to in the show notes. But she has a couple of great questions that will then help you identify, like, what are my goals? Some of those questions are, "If I could do anything and money wasn't an object, I'd spend my time doing dot dot dot." And that doesn't necessarily mean sitting on a beach with your toes in the sand all day. I mean, it could, but then that probably just means you need a vacation. So take the vacation. And then, once you start to get bored, where does your mind start to wander? What are the things that you want to do? Where are you interested in spending your time? And then, once you have an idea of how you'd like to spend your time, you can consider what actions you could take next that will point you in that direction. There's also the benefit that by this point, you probably have an idea of the type of things that you like to do and where you like to spend your time. And so you can figure out which areas of expertise you want to invest in. So do you like more greenfield projects? Do you like architecture discussions? Do you like giving talks? Do you like teaching? Or maybe you're interested in management. I think there's also a more concrete approach that you can take that. You can just talk to your managers in your team and say, "Hey, what big, hard problems are you looking to solve? And then, you can get some inspiration from them and see if their problems align with your interests. Maybe it's not even your own team, but you can talk to other companies and see what other problems industries are trying to solve. That might be an area that then spurs some curiosity or some interest. And then, where do you feel underutilized? So with your current day-to-day, are there areas where you feel that you wish you had more responsibilities or more opportunities, but you feel like you don't have access to those opportunities? Maybe that's an area to explore as well. This feels like a wonderful coaching question in terms of you have done it; congratulations. You've reached a really great spot in your career. And so now you're figuring out that big next step. This is going to be highly customized to each person to then figuring out what it is that's going to help you feel fulfilled over the next five years, ten years, however long you want to project out. Those are some of my thoughts. How about you? What do you think? CHRIS: Well, first, those are some great thoughts. I appreciate that I get to follow them now. It's going to be a hard act to follow. But yeah, I think Joël has asked a fantastic question. And coming from Joël, I know how intentional and thoughtful a learner, and sharer, and teacher and all of these things are. So it's all the more sort of framed in that for me knowing Joël personally. 
I think to start, the kernel of the question is as senior developers, that's the like...or senior level developers is the way Joël phrased it, but it treats it as sort of this discrete moment in time, which I think there's maybe even something to unpack. And I think we've probably talked about this in previous episodes, but like, what does that even mean? And I think part of the story here is going from reactive where it's like, I don't know how anything works. I know a little bit. I can code some. And every day, I'm presented with new problems that I just don't understand. And I'm trying to build up that base of knowledge. Slowly, you know, you start, and it's like 95% of the time you feel like that. And slowly, the dial switches over, and maybe it's only 25% of the time you feel like that. Somewhere along that spectrum is the line of senior developer. I don't actually know where it is, but it's somewhere in there. And so I think it's that space where you can move from reactive learning things as necessitated by the work that's coming at you to I want to proactively choose the things that I want to be learning to try and expand the stuff that I know, and the ways that I can think about the work without being in direct response to a piece of work coming at me. So with that in mind, what do you do with this proactive space? And I think the way Joël frames the question, again, to what I know of Joël, he's such an intentional person. And I wouldn't be surprised if Joël is very purposeful and thinks about this and has approached it as a specific thing that he's doing. I have certainly been in more of “I'll figure it out when I get there.” I'll explore. Or actually, probably the most pointed thing that I did was I joined a consultancy. And that was a very purposeful choice early on in my career because I'm like, I think I know a little bit. I don't think I know a lot. I would like to know a lot. That seems fun. So what do I think is the best way to do that? My guess, and it turned out to be very much true, is if I join a consultancy, I'm going to see a bunch of different projects, different types of technologies, organizations, communication structure, stuff that works, stuff that doesn't work. And to be honest, I actually thought I would try out the consultancy thing for a little while, like a year or two, and then go on to my next adventure. Spoiler alert: I stayed for seven years. It was one of the best periods of my professional life. And I found it to be a much deeper well than I expected it to be. But for anyone that's looking for, like, how can I structure my career in a way that will just automatically provide the sort of novelty and space to grow? I would highly recommend a consultancy like thoughtbot. I wonder if they're hiring. STEPH: Well, yes, we are hiring. That was a perfect plug that I wasn't expecting for that to come. But yes, thoughtbot is totally hiring. We'll include a link in the show notes to all the jobs. [laughs] CHRIS: Sounds fantastic. But very sincerely, that was the best choice that I could have made and was a way to flip the situation around such that I don't have to be thinking about what I want to be learning. The learning will come to me. But even within that, I still tried to be intentional from time to time. And I would say, again, I don't have a holistic theory of how to improve. I just have some stuff that's kind of worked out well. 
One thing is focusing on fundamentals wherever I can, or a different way to put it is giving myself permission to spend a little bit more time whenever my work brushes up against what I would consider fundamentals. So things that are in that space are like SQL. Every time I'm working on something, I'm like, ah, I could use like a CROSS JOIN here, but I don't know what that is. Maybe I'll spend an extra 30 minutes Googling around and trying to figure out what a CROSS JOIN is. Is that a thing? Is a CROSS JOIN a real thing? I may be making it up. [laughs] A window function, I know that those are real. Maybe I'll learn what a window function is. I think a CROSS JOIN is a real thing. A LEFT OUTER JOIN that's a cool thing. And so, each time I've had that, SQL has been something that expanding my knowledge; I've continually felt like that was a good investment. Or fundamentals of HTTP, that's another one that really has served me well. With Ruby being the primary language that I program in, deeply understanding the language and the fundamentals and the semantics of it that's another place that has been a good investment. But by contrast, I would say I probably haven't gone as deep on the frameworks that I work with. So Rails is maybe a little bit different just because, like many people, I came to Ruby through Rails, and I've learned a lot of Rails. But like in JavaScript, I've worked with many different JavaScript frameworks. And I have been a little more intentional with how much time I invest into furthering my skills in them because I've seen them change and evolve enough times. And if anything, I'm trying to look for ones that are like, what if it's less about the framework and more about JavaScript and web fundamentals underneath? Thus, I've found myself in Svelte land. But I think it's that choice of trying to anchor to fundamentals wherever possible. And then I would say the other thing that's been really beneficial for me is what can I do that's wildly outside the stuff that I already know? And so probably the most pointed example I have of this is learning Elm. So I previously spent most of my career working in Ruby and JavaScript, so primarily object-oriented languages without a strong type system. And then, I was able to go over and experience this whole different paradigm way of working, way of structuring programs, feedback loop. There was so much about it that was really, really interesting. And even though I don't get to work in Elm, frankly, as much as I would want at this point or really at all, it informs everything that I do moving forward. And I think that falls out of the fact that it was so different than what I was doing. So if I were to do that again, probably the next type of language I would learn is Lisp because those are like, well, that's a whole other category of thing that I've heard about. People say some fun stuff about them, but I don't really know. So it's that fundamentals and weird stuff is how I would describe it. And by weird, I mean outside of the core base of knowledge that I have. STEPH: I love that categorization, and I'll stick with it, fundamentals and weird stuff, to stretch and grow and find some other areas. I also really like your framing, the reactive versus proactive. I think that's a really nice way to put it because so much of your career is you are just learning what your company needs you to learn, or you're learning what you need to keep progressing and to feel more competent with the types of features or the work that you're handling. 
And I think that's why Jen Dary's blog post resonates with me so much because it's probably...up until now, a lot of someone's career, maybe not Joël's particularly, but I know probably for my career, a lot of it has been reactive in terms of what are the things that I need to learn? And so then once you reach that point of like, okay, I feel competent and reasonably good at all the things that I needed to know, where do I want to go next? And rather than focus on necessarily the plans that are laid out in front of me, I can then go wide and think about what are some of the bigger things that I want to tackle? What are the things that are meaningful to me? Because then I can now push forward to this bigger goal versus achieving a particular salary band or title or things like that. But I can focus on something else that I really want to contribute to. And there do seem to be two common paths. So once you reach that level, either you typically go into management, or you become that more like principal and then onward and upward, whatever is after principal. I don't even know what's after that, [laughs] but the titles that come after principal. So there's management, and then I've seen the other very strong contributors, so Aaron Patterson comes to mind. And I feel like those people then typically will migrate to places where they get to contribute to a language or to a framework. And I think it comes down to the idea of impact because both of those provide a greater impact. So if you go into management, you can influence and affect a team of individuals, and you can increase the value created by that team. Then you've likely exceeded the value that you would have created as your own individual contributor. Or, if you contribute to a language or a framework, then your technical decisions impact a larger community. So I think that would be another good thing to reflect on is what type of impact am I looking for? What type of communities do I want to have a positive impact on? And that may spur some inspiration around where you want to go next, the things that you want to focus on. CHRIS: Yeah, I think one of the things we're picking up in that that Joël mentioned in his question is the idea of there is the individual contributor path. But then there's also the management path, which is the typical sort of that's the progression. And I think, for one, naming that the individual contributor path and the idea of going to principal dev and those sorts of outcomes is a fantastic path in and of itself. I think often it's like, well, you know, you go along for a certain amount of time, and then you become a manager. It's like, those are actually distinctly different things. And people have different levels of interest and aptitude in them, and I think recognizing that is critically important. And so, not expecting that management just comes after individual contributor is an important thing. The other thing I'll say is Charity Majors, who we had on the podcast a bit back, has a wonderful blog post about the pendulum swing called The Engineer/Manager Pendulum. And so in it, she talks about folks that have taken an exploration over into manager land and then come back to the individual contributor path or vice versa sort of being able to move between them, treating them as two potentially parallel career tracks but ones that we can move between. 
And her assertion is that often folks that are particularly strong have spent some time in both camps because then you gain this empathy, this understanding of what's the whole picture? How are we doing all of this? How do we think about communication, et cetera? So, again, to name it, like, I think it's totally fine to stay on one of those tracks to really know which of those tracks speaks to you or to even move between them a little bit and to explore it and to find out what's true. So we'll include a link to that in the show notes. And we'll also include a link to the previous episode a while back when we had Charity on. But yeah, those I think are some critical thoughts as well because those are different areas that we can grow as developers and expand on our impact within the team. And so, we want to make sure we have those options on the table as well. STEPH: Absolutely. I love where teams will support individuals to feel comfortable shifting between experiences like that because it does make you a more well-rounded contributor to that product team, not just as an engineer, but then you will also understand what everybody else is working on and be able to have more meaningful conversations with them about the company goals and then the type of work that's being done. So yeah, I love it. If you're in a place that you can maybe fail a little bit, hopefully not in a too painful way, but you can take a risk and say, "Hey, I want to try something and see if I like it," then I think that's wonderful. And take the risk and see how it goes. And just know that you have an exit strategy should you decide that you don't like that work or that type of work isn't for you. But at least now you know a little bit more about yourself. Overall, I want to respond directly to something that Joël highlighted around how do you know what are high-value areas for you to improve? And I think there are two definitions there because you can either let the people around you and your team define that high value for you, and maybe that really resonates with you, and it's something that you enjoy. And so you can go to your manager and say, "Hey, what are some high-value areas where I can make an impact for the team?" Or it could be very personal. And what are the high-value areas for you? Maybe there's a particular industry that you want to work on. Maybe you want to hit the public speaking circuit. And so you define more intrinsically what are those high-value areas for you? And I think that's a good place to start collecting feedback and start looking at what's high value for you personally and then what's high value to the company and see if there's any overlap there. With that, I think we've covered a good variety of things to explore and then highlighted some of the different ways that you and I have also considered this question. I think it's a fabulous question. Also, I think it's one, even if you're not at that senior level, to ask this question. Like, go ahead and start asking it early and often and revisit your answers and see how they change. I think that would be a really powerful habit to establish early in your career and then could help guide you along, and then you can reflect on some of your earlier choices. So, Joël, thank you so much for that question. STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. 
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Nikki - You have a varied background between being a security engineer, consultant, manager, etc. What made you decide to focus more on the compliance aspects of cybersecurity?
Chris - It is often said "Compliance doesn't equal Security." Why do you think this phrase has taken hold, do you think it's accurate, and how do we evolve beyond it?
Nikki - Based on some of your posts about compliance - one specifically about implementing frameworks and guidance from NIST and the CMMC standards - do you think there's a need in the industry to focus more on implementation guides, or do you feel like organizations are too complex to create guides?
Chris - On the topic of compliance frameworks, we seem to be so reactionary, with new frameworks coming after incidents etc., and organizations struggle to keep up. Do you think we have a framework sprawl problem?
Chris - On the topic of 800-171 and CMMC, there's a lot of talk about affordability and cost and the impact to the small businesses in the DIB, which has already seen massive consolidation. What are your thoughts on this, and how do we balance compliance/security with the need for a robust DIB of suppliers?
Nikki - What do you think the future of compliance looks like? CMMC and otherwise - do you foresee more legislation around compliance coming down the pike?
CHRIS NEWBOLD: Hello, well-being friends, and welcome to The Path To Well-Being in Law podcast, an initiative of the Institute for Well-Being in Law. I'm your co-host Chris Newbold, executive vice president of ALPS Malpractice Insurance, and most of our listeners know that our goal is pretty straightforward. We want to introduce you to thought leaders doing meaningful work in the well-being space and within the legal profession. In the process we want to build and nurture a national network of well-being advocates intent on creating a culture shift within the profession. I am always pleased to introduce my co-host Bree Buchanan. Bree, how's it going? BREE BUCHANAN: It's going great, Chris. How is your spring starting off? CHRIS: It's a little colder in Montana than I would like, but the warm weather is on the way. So I'm certainly looking forward to that. So a lot going on, obviously, in the well-being world and super excited to continue with kind of thoughtful discussion here on the podcast. We're going to continue, I think, our series here on diversity, equity and inclusion and the intersection of DEI with well-being, and we're super excited to be welcoming Lia Dorsey to the podcast. Bree, would you be so kind to introduce Lia to our listeners? BREE: I would love to. So we are so delighted to bring to you Lia Dorsey today, and she is a thought leader in the movement to advance diversity and a driver for inclusive change. As the chief diversity, equity and inclusion officer at Ogletree Deakins, she's responsible for the development and execution of the firm's diversity, equity and inclusion strategy. Ms. Dorsey collaborates with firm leadership, practice group leaders and business resource groups to expand and advance efforts in the recruitment, development, promotion and retention of diverse talent. BREE: Ms. Dorsey previously served as the head of diversity and inclusion at Dentons US. There she was responsible for the strategic oversight, design and implementation of this very large firm's diversity and inclusion initiatives. Before that she served as the director of diversity and inclusion at Eversheds Sutherland and has also held senior positions at DLA Piper and Buchanan Ingersoll & Rooney. All of those names that many of us know. Lia's also president emeritus, I told her the best place, best position to have, of the Association of Law Firm Diversity Professionals. She's a sought-after presenter and panelist on a broad range of topics covering diversity, equity, and inclusion at conferences across the country. Lia, welcome. We are so glad you're here with us today. LIA DORSEY: Thank you, Bree. Thank you, Chris, for having me. I am thrilled. Thrilled to be here. BREE: Yeah. Thank you so much. So Lia, I'm going to ask you the question that we ask all of our guests, because I think it just is, well, so interesting. So what are some of your experiences in your life that are drivers behind your very clear passion for the work around DEI? LIA DORSEY: Great, great question. I like to start by saying that I've always been an inclusionist, if you will, although it didn't have a term back in the day as I was growing up, and I'll just kind of share just a really funny story. I used to get in trouble a lot as a child, because I would give away my toys to my friends who didn't have them. So I would always just share, I would always just give.
I was always that compassionate person and I think my parents appreciated it until I gave away my brand new pink and white huffy bike with the [inaudible 00:03:54] and then I think after that, it's "Okay, I think we need to kind of reign this in and pray." But all jokes aside I've always been a giver. I've always been a giver and I live by the verse, "To whom much is given, much will be required," and I seriously take that to heart. LIA DORSEY: I've long supported those from different backgrounds and environments. I've been a volunteer for a long time. I mentor, especially now in my role, I think it's very, very important for me to reach back and pull others forward. But for me, this was just what you did as a good person, right, it was never about shine or the accolades. You helped people who need it. I'm still an inclusionist, but now I like to refer to myself as a disruptor for good. BREE: All right. LIA DORSEY: Just today I heard someone else describe herself as a professional troublemaker, and I think I'm going to borrow that one as well, because at the end of the day to do this work, you have to be brave and you have to be bold. I'm also clear that it's not for everyone, that this is by far the hardest job that I've ever had, but it's also the most rewarding and I honestly can't see myself doing anything else. BREE: Yeah. I hope that I can be a professional disruptor at some point, but it does take a lot of courage. Absolutely. So good for you. CHRIS: It does. Lia, tell us a little bit about... One of the things I was impressed about kind of how your professional journey has kind of taken shape, is you've had the ability to move in and out of different cultures within the legal profession, which I just find is really fascinating. Tell us about your journey in the area of diversity, equity and inclusion and in the time that you have been a disruptor for good, how have things changed over that time? LIA DORSEY: Yeah, so I have been in law firms for a very, very, very long time. Although, I didn't start out in DE&I. I actually started out on the business side of the law firm for years and at one particular firm, we didn't have anyone at that time leading DE&I in an official capacity. So I raised my hand, again, I would love to volunteer and that's a reoccurring theme with me. But two years later, I found myself with almost two full time jobs. So the job that I was hired to do and the job that I was meant to do, and that was the DE&I role. So that same firm really saw how passionate I was about DE&I work and just how happy I was doing it. They actually created a role for me as the director of DE&I at that firm and as they say the rest was history. LIA DORSEY: But when we think about what changed over time, I think we've seen DE&I become more of a strategic focus in priority for firms. Even before the events of 2020, I think, we started to see firms dedicate more resources to DE&I, like creating full time positions, moving away from DE&I being embedded in HR or being seen as a compliance requirement from the GCs office. So we had really started to see kind of an elevation of DE&I and the role. Then 2020 happened and we'll talk about that a little bit more, but what we saw was even more of a cohesion around DE&I. We saw leaders speaking up, stepping up. We saw a heightened level of awareness. LIA DORSEY: People became aware of issues that weren't on their radar in the past. I think the murder of George Floyd was a pivotal moment. 
I'm often asked how that moment was so different because sadly George Floyd, wasn't the first Black man to be murdered at the hands of police and sadly he wasn't the last, but I think the difference that the world was at home, watching it happen and people who thought things like this didn't happen were now outraged, right. But that rage led to empowerment. We have to do something. We have to say something. LIA DORSEY: So we're seeing a lot of folks speaking up more because they aren't afraid and they're making demands for change. All of that is great and that is a big change, because I would say before 2020, I don't think that you would've seen people speaking up and standing up the way that we're seeing it now. I think all of that is great, but I also think that we're in an inflection point, right, because there are forces in this world who don't want things to change. The thought is the system has worked in one way for so long, so why change it? But the only way to move forward successfully is to change and I'm one of those change agents that's working to try to make this world a better place. BREE: Absolutely. Wow, absolutely. That's so wonderful. So [inaudible 00:08:48], I've seen you... Lia, I'm sorry, seen or heard you talk about your experience over time here. What have you seen now that legal employers are doing right in this area? What are some good examples and we'll get to weaving in the intersectionality of well-being, but right now let's stick with the DEI work. What do you see as going right here? LIA DORSEY: Absolutely. I think there's much more focus and intention being put around the advancement and retention of diverse talent, specifically minority talent. Recruiting is still a focus of course, but we realize that diverse talent need meaningful support once they join the firm. So for the listeners, I just challenge them to ask themselves, how are you investing in your diverse talent? Are you having real conversations around the development of your talent because we really need to make serious and meaningful investments. I've always said that the talent is there, but the opportunities aren't always there and the opportunities that I'm talking about are things like introductions to key clients and the ability to develop client relationships. It's getting the high value work. It's being able to tap into the resources for business development and the list goes on, because we all know that these are the types of things that can really impact someone's career. LIA DORSEY: So with that as the backdrop here, a few things that I'm kind of seeing that are having a meaningful impact today. We're seeing the creation of formal DE&I sponsorship programs. So we know the difference between mentorship and sponsorship, right. So a mentor talks to you and a sponsor talks about you and the best DE&I sponsorship programs that I've seen have leaders of the firms and that's the board, it's the managing partner, the executive committee and what I like to call the front page of the [inaudible 00:10:36] sheet lawyers. But that group of people are actually the ones serving as the sponsors. That has two great benefits. One is that it shows the stakeholders that the leaders are invested in the diverse attorney's development and they aren't pushing it off on someone else to do. So we know that law firms are top down organizations, right, but having those at the top who are actively engaged in the DE&I work has a profound impact. 
LIA DORSEY: The second thing is that that group of people are in a position to make sure that the lawyers are getting those opportunities that are referenced, right. So they actually have the work and they can make those key introductions. So I think sponsorship programs are definitely on the rise and I think that they can be very, very effective and can lead to retention. LIA DORSEY: The second thing that I'm seeing is kind of a focus on culture overall, because we know that culture is a differentiator, right. It's the reason people stay or go. A survey by McKinsey found that the majority of employees have considered the inclusiveness of companies when they're making career decisions. I like to say this, if you ask the people at the top of the organization to describe the culture, I'm sure that what they would say is probably different than what those who aren't at the top would say. I mean, and this is what we like to call that perception gap that often exists between the leaders and the employees, and it's how do we get everyone to kind of experience the culture in the same way? So just think about it and ask yourself: is your culture by default or by design? BREE: Wow. LIA DORSEY: Right. So we've been talking a lot about the great resignation and the she-cession, but now we're... I love that term. I mean, it's sad, but it's still a good term. BREE: I love it. I hadn't heard it before. CHRIS: I hadn't either. That's a good one. BREE: She-cession. Yeah. LIA DORSEY: The she-cession. Yeah. But now we're talking about the great reboot or the great realization, and the reality is that the world is different since the start of the pandemic. People are expecting to work differently and they want companies to kind of meet more of their needs. So this is really an opportunity for firms to reimagine their workplace and their culture. Then the last thing I would say here is the inclusion of staff, right. Inclusion really is inclusion for all and not for some, but we know that law firms typically focus on the benefits of the lawyers, and now we're seeing staff being introduced into that conversation, which is long overdue, but it's definitely necessary. If I can just touch just quickly on the things that don't work. BREE: Yeah. Please, please. CHRIS: Yeah. For sure. Lessons learned. LIA DORSEY: When companies don't make DE&I a priority. So when they still think it's just a nice-to-do, or if the efforts are just performative and they're doing it because their clients are kind of forcing them to do it. So you have to make it a priority. It has to be part of the overall firm strategy. And then if your leaders aren't engaged, and I'll talk a little bit more later about the difference between commitment and engagement, if they aren't engaged, then you're probably not going to have a lot of success. BREE: Right, right. So important. CHRIS: It seems, Lia, that you're, I feel like in your tone, that you are optimistic that the level of engagement, and particularly leadership leaning in, is increasing. Is that fair? LIA DORSEY: It is. It is definitely increasing. There were people who, just because they could, have been checked out of this conversation for such a long time, and now they are checking into the conversation. But I'll also say that just because you're checking in, it doesn't always mean that you know what to do and know what to say, and that's where folks like me and people who do this kind of work can really help with that.
But I am definitely encouraged, and I like to look at life as glass half full with the way that things are progressing and the level of interest by certain stakeholders. It's really encouraging. CHRIS: Yeah. Because I know we're going to talk about this a little bit more, but I just think it's so fascinating how, as you know, even in the work that Bree and I do on the well-being front, so much of what maybe I'm going to say, not the easy part, but building awareness and educating others is one element to it. But ultimately action and taking on systemic barriers become probably the harder part of advancing social change and being a catalyst for cultural shifts, right. Sometimes it takes years, sometimes decades, right, to effectively be able to do that. But I find such interesting similarities in the efforts to advance both, one DEI on a track, one well-being on a track, and then the intersection of the two, which I think is even more interesting because some of the challenges are obviously unique and differentiated that... Really interesting. CHRIS: You said earlier in the podcast, Lia, that this is the hardest job that you've had, and that a lot of that has to do with the fact that you are trying to get people to change, right, and evolve their thinking and ultimately act in appropriate and effective ways. What works here, and how do you get people to not necessarily... Well, I would call it evolve their attitudes and actions as they think about what the right work culture is and what ultimately is the right thing to do, but also advances kind of where the firm is as a business entity? LIA DORSEY: Yeah. That's such a great question. You can't do this work thinking that you will be able to get people to change, right. There's a great cartoon clip of someone addressing a crowd of people asking who wants change, and everyone raises their hand, and then they ask who wants to change, and then all the hands go down, right. That graphic perfectly sums up what it's like doing this work. A lot of people are committed to DE&I and they care and they have good intentions, but not many folks are actively engaged, and I said I would talk about the difference between the two. LIA DORSEY: I think right now you'd be hard pressed to find a leader who would actively come out and say that they aren't committed to DE&I, but it's more difficult to get them to actually engage in the work. It's hard to get folks to willingly use their influence and internal capital to help someone else, especially if that individual isn't like them. So they don't look like them and you're not part of my in-group. But that said, if we want change, we cannot sit on the sidelines, right. So I believe that inaction or neutrality is complicity. Action is courage, and courage is a habit. It's a muscle that you build over time. It's consistently committing to something, knowing that at times you may get it wrong or you may be uncomfortable. I mean, look, sometimes I still get it wrong and I do this for a living, but- BREE: Thank you for saying that, Lia, thank you for saying that. Oh, my gosh. LIA DORSEY: It's just, it's continually showing up and engaging, and if you get it wrong, you get up and you continue to try again. So, we spend a lot of time educating our stakeholders and raising awareness around DE&I. What does it mean to really be an ally or an upstander? What do those terms mean, right. What does equity really look like in a law firm? How do you work across difference?
How do you have courageous and meaningful conversations with others who are not like you, right. What is bias? How does it show up in your interaction with others? LIA DORSEY: So once you understand some of these issues, hopefully that'll lead to greater empathy, and then hopefully that will lead to action. Just in closing, I'll say, so instead of focusing solely on changing minds, focus on changing your systems and changing your processes and changing your policies, because that's also where a lot of this bias breeds, which sometimes folks don't want to change their minds because they're set on something. As my friend Michelle Silverthorn, a popular DE&I and culture consultant, says, if you change the system, you'll change the world. So I spend a lot more of my time focusing on changing the systems, and then the hearts and minds will follow. BREE: Absolutely. That's just absolutely brilliant. Yeah. I've always thought that if you get the right form in place, things will follow. You ask the right questions on the right forms- LIA DORSEY: Absolutely. BREE: ... And that starts to shift the culture. Yeah. Yeah. So important. So Lia, I'm wondering if you could tell us a little bit, we're going to take a break here in just a second, but you are just coming off as president of the Association of Law Firm Diversity Professionals, and I can really hear the polish of your message, which I'm sure that you honed over the time as president. What has that group been focusing on? What are you guys focusing on now? LIA DORSEY: Oh, absolutely. So, for those who may not know, ALFDP is an association of law firm professionals working in the DE&I space in the United States, Canada, and in the U.S. I'm sorry, UK. The association is around what, 16 years old at this point. As you mentioned, I served as president for two years. I served as VP before that. I'm technically still on the board, and it is just an absolutely amazing organization just to be able to connect with people who are trying to solve the same problems, right, and achieve the same goals. So let's try to put our heads together and solve it together. But essentially we equip our members with the tools, tips, and talking points that they need to advance DE&I within their own firms and to help get the buy-in and the resources that they need, and resources is sometimes talent and then sometimes it's money. LIA DORSEY: Another great thing that we do is to collaborate with other DE&I focused organizations as well. Last year, we collaborated with Thomson Reuters and the ACC Foundation on a white paper called Pandemic Nation: Understanding Its Impact on Lawyers from Underrepresented Communities. It was a great white paper. I encourage your listeners to download it. It essentially was an in-depth look at the impact of the pandemic on the careers of lawyers from underrepresented communities. What's great about that research even now is that it really points out the challenges and the opportunities of those historically excluded lawyers, which is really, really important. Particularly the opportunities part as all of us are slowly returning to the office. LIA DORSEY: There are things that organizations can keep in mind, and I'm clear about saying returning to the office and not returning to work, because trust and believe, for the past two years, we have been working hard, harder than we ever have.
So I really want to remind people, we're talking about returning to the office, but ALFDP keeps on top of the primary issues that everybody is dealing with and then we try to find resources and give tips and tools to help solve some of those challenges. So great, great association and I'm so glad and honored to have been able to lead it. BREE: So, Lia, what is the web address for that in case our listeners are interested in checking it out? LIA DORSEY: Absolutely, www.alfdp.com. CHRIS: Lia, I have to imagine that, and this would be an encouraging sign that your membership has expanded significantly of late. Is that the case? LIA DORSEY: Oh, my gosh. Yes. CHRIS: Right. I mean that's a sign that, again, people are leaning in, that they're looking for a community that can provide national resources to be able to aid them. I mean, this is an organization that I wasn't aware of, but again, gives me cause for optimism that there are change agents that are coming together and sharing best practices that are ultimately going to advance our profession. LIA DORSEY: Absolutely. The membership grows every day and a lot of that is because a lot of these firms are creating a lot of DE&I roles and positions and sometimes they're elevating existing people. So the interest in DE&I overall, and in ALFDP, specifically, is just amazing. CHRIS: Awesome. Well, this is a great place... Let's take a quick break and hear from one of our sponsors and then we'll come back and I'm really excited to kind of start to talk about the intersection of DEI and well-being. So we'll be right back. — Advertisement: Meet Vera, your firm's virtual ethics risk assessment guide. Developed by ALPS, Vera's purpose is to help you uncover risk management blind spots from client intake to calendaring, to cybersecurity and more. Vera: I require only your honest input to my short series of questions. I will offer you a summary of recommendations to provide course corrections if needed, and to keep your firm on the right path. Generous and discrete, Vera is a free and anonymous risk management guide from ALPS to help firms, like yours, be their best. Visit Vera at alpsinsurance.com/vera. — CHRIS: All right, we are back with Lia Dorsey, chief diversity equity and inclusion officer at Ogletree Deakins. This is an area that I've been just very excited to delve into, because I know that our board at the Institute for Well-being in Law, you can't talk about well-being unless you're integrating and considering elements of diversity, equity and inclusion as well. Lia, I'd love just to hear more about in your experience, how do they intersect, right, and how do we think about them, I guess not as two tracks, but two tracks, obviously intertwined. LIA DORSEY: Yeah. Mental health and DE&I are definitely closely connected. I would say right now, mental health and wellness and burnout, are very common topics in the workplace today and I think it's great that we're starting to normalize conversations around those topics. Asking for help now is not seen as a weakness and people are having dialogues and they're willing to talk about their personal experiences and their struggles. When we talk about DE&I, diversity, equity and inclusion, there's also another letter that's joining that and that's the B for belonging and that's really the psychological safety that folks feel. 
Honestly, we weren't talking about these things before, but it is just so important that we're having these conversations now, and given the fact that all of us spend the majority of our time at work, prioritizing mental health in the workplace is really, really a must. LIA DORSEY: Many of us are managing work-related stress and experiencing diminished mental health because of the pandemic and the racial injustice crisis. This is, it's taken a toll on people, and more so for those from diverse backgrounds and communities. We talked about, since 2020, there's been a much-needed spotlight on racial justice, but it's also highlighted the serious lack of dialogue addressing specific mental health needs and challenges for those from underrepresented communities. I think just kind of navigating and adapting to those challenges is really important because there's a lot of stress that that community is experiencing because of what's going on, and that stress can worsen and it can cause health problems, it can lead to increased mental health conditions and so much more. LIA DORSEY: On the diversity piece of it, folks from diverse backgrounds often face increased bias and microaggressions and other stressors that impact their mental health and that psychological safety. So I think for the intersection between the two, it's really important to focus on that sense of belonging, because it's critical for overall mental health and well-being, because the more we feel like we belong, the more we feel like we can be ourselves, right. BREE: Absolutely. LIA DORSEY: [inaudible 00:27:16] like we have that support, then we can cope effectively with the stress and the difficulties in our lives. So firms and companies really need to prioritize creating this type of culture that supports not only just mental health and wellness, but that inclusion and that belonging piece as well. I think another way that they intersect, and it's a way that a lot of people don't think about often, is when you think about benefits. So really, evaluate the benefits that you are offering at your company. Are they inclusive? Are your wellness benefits available to everyone, or just... And now that we're talking about possibly returning back to the office, look at your flexible working policies. Who gets to continue to work a hybrid schedule, who's asked to come back into the office, and when you examine that, you'll see how that too impacts diversity. So as we're trying to all come up with these solutions around mental health and well-being, it's really important to keep some key DE&I concepts in mind, as well. BREE: Absolutely. I just love what you're saying there around inclusion and belonging. One of the things that I teach about when I talk is also the idea, and this is where I touch upon the intersectionality here, and I talk about the impact on mental health outcomes of what's called thwarted belongingness. I don't know if you've heard that phrase before, but it's also something that's been studied as a precursor or predictor of suicidality. I mean, it's really serious. I think about, I use the example of, if you can't think of anything else, remember what it's like to be a little kid and everybody's being chosen for the basketball team and you're the last one, that was my experience, [inaudible 00:29:04], and just how awful that feels. Can you try to touch into that and have some empathy or compassion? So, yeah. That's brilliant. LIA DORSEY: Absolutely.
If I may, there's one thing when you just shared that story, and I was the person who got picked last too, I don't have any [inaudible 00:29:19]. But it just made me think about this amazing quote, and I'm sure a lot of folks have heard it at this point, but it's by Vernā Myers, and it says, "Diversity is being invited to the party and inclusion is being asked to dance." It's that last part that is just so important, right, because it's not good enough just to be seen, right. It's also about being included, being involved, feeling that sense of belonging, and so that made me think of that quote, as you just shared [inaudible 00:29:48]. BREE: Yeah, that's great. I'm afraid I was also the person [inaudible 00:29:51] not being asked to dance, but anyway, that's another story. So Lia, I really want to dig in and hear a little bit about a theme that has run through our conversations here, which is the role of leaders in the profession. Of course there are the CEOs of the big law firms, some of the ones that you have worked for. Other leaders of the profession that I think really have a responsibility here are the judges and the state bar presidents, people that really are at the forefront of the profession. So what thoughts do you have about how we influence these leaders, that if you care about well-being, you have to care about diversity, equity, inclusion, and vice versa. How do you get them to pay attention and to take action to become, I think you were saying, it's not commitment, it's engagement. LIA DORSEY: Absolutely. Absolutely. Deloitte conducted a survey, I believe it was last year, which found that DE&I and employee health and well-being are top priorities for CEOs. I thought that was very, very telling, right. So leaders really need to prioritize their employees' total well-being, and that includes their physical, mental, emotional health, and it also includes work-life balance. But in order to do that, leaders must understand and address the unique challenges that their underrepresented employees face. We talked about this a little earlier, so that's the bias, the microaggressions, it's the health disparities potentially, it's different mental health treatment and outcomes. Sadly, I think that list goes on. So I think you can influence and encourage them, if you will, by just making sure that they understand that this is really important for every single employee, right, and some of the current support programs that they have in place, it's just not enough. LIA DORSEY: Firms have to really take a strategic and a holistic approach to mental health and wellness because there isn't one solution to the problem or silver bullet, but it's really a series of actions that they need to take. Then you need to tie this some kind of way to retention and culture, because we talked about this at the top, culture plays a big role in mental health, right? It's about those safe spaces. It's about inclusion, because we know that people in inclusive workplaces are more engaged and productive, but if you can really help your employees feel like they truly, truly belong, then you can help them achieve that greater sense of satisfaction in health and wellness, and that can impact retention. LIA DORSEY: So just to kind of sum this up, leaders really need to prioritize their people and they need to create a workplace that fully supports every single employee, even the ones from historically excluded groups, and they must address those needs.
If they want to create connected and inclusive workplaces, they have to address mental health because it's not an option to continue to ignore it. BREE: That's right. CHRIS: Yeah. Well said. Let's look forward a decade, and if we were to do a good job around evolving and changing attitudes and encouraging engagement and affecting hearts and minds, Lia, how will the legal profession be different? LIA DORSEY: Well, for starters, I hope the profession will be more diverse 10 years from now. It's a shame that after all these efforts and initiatives and research and data that the legal profession is still struggling with diversity. I want to see more of this diversity at the top as well. We all know that representation matters, and I'd also like to see more accountability, right. How do we hold our leaders to account? I would love to see firms, and I'm sure a lot of people are not going to like this, but I would love to see firms link compensation to the advancement of DE&I, because our corporate partners are already doing this, right. I always say that diversity isn't black and white, it's green, because when you start talking about money, people listen. [inaudible 00:34:10]. So I would love to see that link because I think that will get us more change faster and sooner. BREE: Right. LIA DORSEY: So that's what I would say. CHRIS: Okay, and I'm just curious as we think about one thing, law schools obviously have a role here to play as a kind of a pathway into the profession. I'm just curious on your impressions on how they're doing relative to some of these challenges. Obviously, a lot of our work on the well-being front kind of starts with how folks come into the profession. I've got to think that there's some direct corollaries there. LIA DORSEY: It is. One of the things that I've seen, which I think is great, is that our law schools are talking about DE&I earlier in the process, period, because there was a time when they weren't. So the fact that they're introducing and they're talking about these topics in law school, I think is great. Then we're also seeing a connection with some of the law school offerings and partnering with firms and other diversity associations that are out there, NCCA, the DFA and LCLD are three that come to mind. They're really just trying to make sure that particularly the diverse students are well prepared for a full, enriched career in a law firm and really looking at how do we make sure that they know what to do and to make sure that they can ask the right questions and to find that mentor early and all of the things that come with that. So I'm really encouraged to see that type of partnering with some of the law schools and some of the other associations that are out there and with some law firms, as well. CHRIS: As you said earlier, evolution here, or progress, I think has been slower than almost all of us believe could have been achieved. Are you optimistic about the future, and if so, what are some of the accelerator drivers that have you particularly excited for what's on the horizon? LIA DORSEY: Oh, Chris, I have to stay optimistic because if I don't, I'll go in my room, sit in a ball and cry. I just think it's really just my outlook. If I think about when I started in this field, not that long ago really, but when I started, we weren't even talking about some of the stuff that we're talking about even now, and we were not as bold then as we are now. So, that definitely gives me hope. Some of the people that I see who are starting to do this work give me hope.
The fact that we're seeing managing partners and CEOs who are standing up and speaking up and actually putting that capital and using it to help advance it, all of that gives me hope. LIA DORSEY: I don't think that we will get there in my lifetime, but I am happy to be a person right now and a change agent who's really trying to plant the seeds that hopefully folks will continue to water. I tell people, this is a marathon. It didn't take us overnight to get into this and it's not going to take us overnight to get out of it. It's going to take some time, but I am very, very hopeful. CHRIS: Awesome. Well again, thank you so much for joining us. Whenever we can have a disruptor for good or a professional troublemaker on the podcast, we are all in, and Lia, we certainly commend you for your longtime commitment, the impact that you're having, the willingness to serve in leadership structures, challenging the status quo, and improving cultures, right. I mean, you are doing critical work, not just for your firm in particular, but well beyond, in terms of improving this profession and the ability for this profession to ultimately serve the legal needs of the country, all of the legal needs of the country, right, not just certain legal needs of our country. So again, thank you so much for joining us on the podcast. BREE: Thank you. Wonderful. Wonderful. LIA DORSEY: Thank you, Bree. Thank you, Chris. I have enjoyed my time immensely and I would be happy to come back if ever you would have me. This has been great and I love this so much. I can talk about this all day. I feel like our time just flew by. So thank you. Thank you again for having me. CHRIS: It certainly did, and we will be back in a couple weeks with one final installment of kind of our series on the intersection of diversity, equity, inclusion, and well-being. Thanks to all of our friends out there for listening in, and if you have ideas, continue to reach out to Bree or me with suggestions for future speakers. BREE: Absolutely. CHRIS: So everyone be well out there. Thank you. BREE: Take good care everyone. Bye-bye.
Chris switched from Trello over to Linear for product management and talks about prioritizing backlogs. Steph shares and discusses a tweet from Curtis Einsmann that super resonated with the work she's doing right now: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one." This episode is brought to you by BuildPulse (https://buildpulse.io/bikeshed). Start your 14-day free trial of BuildPulse today. Linear (https://linear.app/) Curtis Einsmann Tweet (https://twitter.com/curtiseinsmann/status/1521451508943843329) Louie Bacaj Tweet (https://twitter.com/LBacaj/status/1478241322637033474?s=20) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild and you wait. Again. And you hope you're lucky enough to get a passing build this time. Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start? BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month! And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more. So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed. CHRIS: Good morning, and welcome to Tech Talk with Steph and Chris. Today at the top of the hour, it's tech traffic hits. STEPH: Ooh, tech traffic. [laughs] I like that statement. CHRIS: Yeah. The Git lanes are clogged up with...I don't know. I got nothing. STEPH: [laughs] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What's new in my world? Actually, I have a specific new thing that I can share, which is, as of the past week, I would say, switched from Trello over to Linear for product management. It's been great. It was a super straightforward transfer. They actually had an importer. We lost some of the comment threads on the Trello cards. But that was easy enough to like each Linear ticket has a link back to Trello. 
So it's easy enough to keep the continuity. But yeah, we're in a whole new world, a system actually built for maintaining a product backlog, and, man, it shows. Trello was a bunch of lists and cards and stuff that you could link between, which was cool. But Linear is just much more purpose-built and already very, very nice. And we're very happy with the switch. STEPH: I feel like you came in real casual with that news, but that is big news, that you did a switch. [laughter] CHRIS: How are you going to bury the lead like that? You switched project management...[laughter] I actually didn't think it was...I'm excited about it but low-key excited, which is weird because I do like productivity and task management software. So you would think I would be really excited about this. But I've also tried enough of them historically to know that that's never going to be the thing that actually makes or breaks your team's productivity. It can make things worse, but it can't make you great. That's the thing that I believe. And so it's a wonderful piece of software. I'm very excited about it but -- STEPH: Ooh, I like that. It can make you worse, but it doesn't make you great. That's so true, yeah, where it causes pain. Well, and it does make sense. You've been complaining a bit about the whole login with Trello and how that's been frustrating. But I haven't even heard of Linear. That's just...that's, I mean, you're just doing what you do where you bring that new-new. I haven't heard of Linear before. CHRIS: I try to live on the cutting edge. Actually, I deeply try to not live on the cutting edge at this point in my life. That early adopter wave, no, no, no, that's not for me anymore. But I've known a few folks who've moved to Linear. And everyone that I've spoken to who has moved to it has been like, "Yeah, it's been great." I've not heard anything negative. And I've heard or experienced negative things about every other product management tool out there. And so, it seemed like an easy thing. And it was a low-cost enough switch in terms of opportunity costs or the like; it took the effort of someone on our team moving those cards over and setting up the new system and training, but it was relatively straightforward. And yeah, we're super happy with it. And it feels different now. I feel like I can see the work in a different way, which is interesting. I think we had brought in a Chrome extension for Trello. I think it's like Hello Epics or something like that that allows...it abuses the card linking functionality in Trello and repurposes that feature as an epic management thing. But it's quarter-baked is how I would describe it, or it's clearly built on top of existing things that were not intended to be used exactly in that way. So it does a great job. Hello Epics does a great job of trying to make something like parent-child list management stuff happen in Trello. But it's always going to feel like an afterthought, a secondary feature, something that's bolted on. Whereas in Linear, it's like, no, no, we absolutely have the idea of projects, of course, and you can see burndown charts and things. And the thing that I do want to be careful about is not leaning too much into management. Linear has the idea of cycles or sprints, as many other folks think of them, or iterations or whatever you want to call them. But we've largely not been working in that mode. We've just continued to work through the next up list; that's it.
The next up list should be prioritized and well-defined at the top and roughly in priority order. So just pick up the next card and work on it. And we just do that every single day. And now we're in a piece of software that has the idea of cycles, and I'm like, oh, this is vaguely interesting. Do we want to do that? Oh, but if you're going to do that, you probably do some estimation, right? And I was like, oh no, now we're into a place that's...okay, I have feelings. I got to decide how to approach that. And so, I am intrigued. And I wonder if we could just say, like, ten cards, that's how many come into a cycle, and that's it. And we use the loosest heuristics possible to define how we structure a cycle so that we don't fall into the trap of, oh, what's our roadmap going to look like six months from now? JK, what's anything going to look like six months from now? That's not a knowable fact. STEPH: I was just thinking where you said that you're moving into sprints or cycles, and then there's that push, well, now you got to estimate. And I just thought, do you? Do you have to estimate? [laughs] CHRIS: We need a burndown chart through 2024, and it must be meticulously accurate down to the hour. STEPH: I think meticulously wrong is how that goes. [laughs] CHRIS: Which is the best kind of wrong. If you're going to be wrong, be meticulous about it. STEPH: Be thorough about it. [laughs] Yeah, the team that I'm on right now, we have our bi-weekly planning, and we go through the board, and we pull stuff in. But there's never a discussion about estimation. And I hadn't really appreciated that until just now. How we don't think about how long is this going to take? We just talked about, well, what's in-flight? And how much work do people still have going on? And then here's the list of things we can pull in. But there's always a list that you can go back to. Like, it's very...we pull in the minimum, knowing that if we run out of work, there's another place to go where there's stuff that's organized. And I just love that cadence, that idea of like, let's not try to make guesses about the future; let's just have it lined up and ready for us to go when we're ready to pull it in. Although I know, that's also coming from a very developer's perspective, and there are product managers who are trying to communicate as to when features are going to get out into the world. So I get that there's a balance, but I still have strong feelings and hesitations around estimating work. CHRIS: Well, I feel like there is a balance there. And so many things in history are like, well, this is an overcorrection against that, and that's an overcorrection against this. And the idea that we can estimate our work that far out into the future, that's just obviously false to me based on every project I've ever worked on that has tried to do it. And it has always failed without question. But critically, there is the necessity to sync up work and like, oh, marketing needs to plan the launch of this feature, and it's a critical one. What's it going to look like? When's it going to be ready? You know, we're trying to go for an event; it's not just, you know...we developers never estimate anything past the immediate moment, like, that's not acceptable. We got to find a middle ground here. But where that middle ground is, is interesting. And so, just operating in the sort of we-do-work-as-it-comes-up mode is the easiest thing because no one's lying about anything at that point.
But sometimes you got to make some guesses and make some estimations. And then it gets into the murky area of I believe with 75% confidence that in three weeks, we will have this feature ready. But to be clear, I said with 75% confidence that means one-quarter of the time; we will not be there at that date. What does that mean? What does that failure mode look like? Let's talk about that. And can you have honest, open, transparent, useful conversations there? That's the space that it becomes more subtle if you need to do that. STEPH: You're reminding me of a conversation that I had with someone where they shared with me some very aggressive team goals. And it was a very friendly conversation. And they're like, "How do you feel about aggressive goals?" And I was like, "Well, it depends. How do you feel about aggressive failure?" Because then once I know where you stand there, then we can talk about aggressive goals. Now, if we're being aggressive, but then we fail to achieve that, and it's one of those, okay, we didn't meet the goal that we'd expected, but everything is fine, and it's not a big deal, then I am okay. Sure, let's shoot for the stars. But if it's one of those, we are communicating these goals to the outside world, and it's going to become incredibly important that we meet these goals, and if we don't, then things are going to go on fire, people are going to be in trouble, and it's just going to be awful, then let's not set aggressive goals. Let's not box ourselves into a space where we are setting ourselves up to fail or feel pain in a meaningful way. I agree that estimations are important, especially in terms of you need to collaborate with other departments, and then also just to provide some sense of where the product is headed and when things may be released. I think estimations then just become problematic when they do become definite, and they're based on so many unknowns, and then when I don't know is not an answer. So if someone asked, "What's your estimate for this?" And the very honest real answer is I don't know, like, we haven't done this type of work before, or these are all the unknowns, and then someone's like, "Well, let's just put an estimation of like two weeks on it," and they just sort of try to force-fit it into being what they want, then that's where it starts to just feel incredibly problematic. CHRIS: Yeah, estimation is a very murky area that we could spend entire episodes talking about, and in fact, I think we have a handful of times. So with that, Linear has been great. We're going to see just how much or how little estimation we actually want to do. But it's been a very nice addition to the toolset. And I'll let you know as we deepen our usage of it what the experience is like, but that's the main thing that's new in my world. What's new in your world? STEPH: Well, before we bounce over to my world, you said something that has intrigued me that has also made me start reflecting on some of the ways that I like to work. And you'd mentioned that you have this prioritized backlog that people are pulling tickets from. And I don't know exactly if there's a planning session or how that looks, but I have recognized that when I am working with a team, and we don't have any planning session, if everybody is just pulling from this backlog, that's being prioritized by someone on the team, that I find that a bit overwhelming. 
Because the types of work being done can vary so drastically that I find I'm less able to help my colleagues or my teammates because I don't have the context for what they're working on. It surprises me. I'm like, oh, I didn't even know we're working on that feature, or I don't have the context for what's the problem that we're trying to solve here. And it makes it just a lot harder to review and then have conversations with them. And I get overwhelmed in that environment. And I've recognized that about myself based on previous projects that were more similar to that, versus if I'm on a project where the team does get together every so often, even if it's high level, to be like, hey, here's the theme of the tickets that we're working on, or here's just some of the stuff, then I feel much more prepared for the work that is coming in and to be able to context switch and review. And yeah, so I've kind of learned that about myself. I'm curious, are you similar, or how does that work for you? CHRIS: I'm definitely similar. And I think probably the team is closer to what you're describing. So we do have a planning session every week, just a quick 30-minute scan through the backlog, look at the things that are coming up and also the larger themes. Previously, epics in Trello; now, projects in Linear. But talking about what are the bigger pieces of work that we're moving on, and then what are the individual tickets associated with that that we'll be expecting to work on in the next week? And just making sure that everyone has broad clarity around what that feature set is. Also, we're a very small team at this point. Still, we're four people total, but one of the developers is on a break for a couple of weeks this summer. And so there are really only three of us that are driving on the code. And so, with three of us working on the projects, we try very intentionally to have significant overlap between the various...like, we don't want any one person to own any portion of things at this point. And so we're doing a good amount of pairing to cross-pollinate and make sure everyone's...not everyone's aware of everything, but at least one other person is sufficiently aware of everything between the three of us. And I think that's been working well. I don't think we have any major gaps, save for the way that we're doing our mobile architecture, which is largely managed by one of the developers on the team and a contractor that we're working with to help do a lot of the implementation. That's a known; we chose to silo that thing. We've accepted the cost of that for now. And architecturally, the rest of us are aware of it, but we're not, like, in the Swift code writing anything because I don't know how to write Swift at this point. I'd love to learn it. I hear good things about the language. So yeah, I think conceptually very similar to what you're describing of still wanting to have people be able to review. Like, I don't want to put up a PR and people be like, I don't know, that looks like code, I guess. I'm not sure what it does. Like, I want it to be very...I want us all to be roughly on the same page, and thus far, we are. As the team grows, that will become trickier to maintain because there are just inherently probably more things that are moving, more different feature areas and surface area that we're tackling in any given week, or there are different ways to approach that. I know you've talked about having a limited number of themes for a given cycle, so that's an idea that can pop up.
But that's something that we'll figure out as we get further. I think I'm happy with where we're at right now, so yeah, that's the story there. STEPH: Okay, cool. Yeah, all of that resonates with me, and thinking about it a little more deeply in this moment, I'm realizing I think something you said helped me put this together, where when I'm reviewing someone's change, I'm not necessarily just looking to see does your code work? I'm going to trust you that your code works. I may have thoughts about design and other things, but I really want to understand more what's the change to the product that we're making? What's the goal that we're looking to achieve? How are we measuring this? And so if I don't have that context, that's what contributes to that feeling of, like, hard context switching, of not just context switching, but now I have to level myself up to then understand the problem that's being solved by this. Versus had I known some of the themes going into that particular cycle or sprint, I would have felt far more prepared for that review session, versus having to then dig through all the data and/or tickets or talk to someone. Well, switching back to what's going on in my world, I have a particular tweet that I want to share, and it's one that Joël Quenneville brought to my attention. And it just resonates so much with all the type of work that I'm doing right now. So I'm going to read the tweet, and then we'll link to it in the show notes as well. But it's from Curtis Einsmann, and Curtis wrote: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one." And that describes all the work that I'm doing right now. It's a lot of exploratory, a lot of data-driven work, and finding ways that we can reduce the time that it takes to run RSpec on CI. And it also ties in nicely to one of the things that I think we talked about last week, where we discovered that a number of files have a high runtime variance. And I've really dug into the data there to understand, okay, is it always specific files that have this high runtime variance? Are there any obvious contributors to what's causing this? Are we making real network calls that then could sometimes take a long time to return? And the result is there's nothing obvious. They're giant files. The number of SQL commands that are being run for each file varies drastically. They're all high, but it's still very different. There's no single fact about these files that has really been like, yes, this is what's causing these files to have such a runtime variance. And so while I've been in the data, I'm documenting it, and I'm making a list and putting it all together in a ticket so at least it's there to look at later. But I'm going to move on. It's one of those I would love to know what's causing this. I would love to address it because it would put us in an ideal state for how we're distributing tests, which would have a significant impact on our runtime. But it also feels a little bit like chasing my tail because I'm worried, like with some of the other experiments that we've done in the past where we've addressed tentpoles, that as soon as you address the issue for one or two files, then other files start having the same problem. And you're just going to continue to chase and chase, and I don't want to be in that. So upfront, this was one of those: hey, let's look at the data.
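For anyone curious what "looking at the data" can involve in practice, here is a minimal sketch, not the tooling Steph actually used, of how you might measure per-file runtime variance across CI builds. It assumes each build was run with RSpec's built-in JSON formatter (for example, rspec --format json --out results.json) and that several of those result files have been collected into a local directory; the directory name and everything else here is hypothetical.

```ruby
# Minimal sketch: given several RSpec JSON result files collected from
# different CI builds (rspec --format json --out results-<build>.json),
# compute per-spec-file mean runtime and spread to spot high-variance files.
# File names and paths here are hypothetical.
require "json"

runs = Dir.glob("tmp/rspec-results/*.json").map { |path| JSON.parse(File.read(path)) }

# Sum example runtimes per spec file for each build.
per_build_totals = runs.map do |run|
  run.fetch("examples").each_with_object(Hash.new(0.0)) do |example, totals|
    totals[example.fetch("file_path")] += example.fetch("run_time")
  end
end

# Collect each file's total runtime across builds.
timings = Hash.new { |hash, key| hash[key] = [] }
per_build_totals.each { |totals| totals.each { |file, seconds| timings[file] << seconds } }

stats = timings.map do |file, samples|
  mean = samples.sum / samples.size
  variance = samples.sum { |s| (s - mean)**2 } / samples.size
  { file: file, mean: mean, std_dev: Math.sqrt(variance) }
end

# Print the spec files whose runtime swings the most from build to build.
stats.sort_by { |row| -row[:std_dev] }.first(10).each do |row|
  puts format("%-60s mean: %6.2fs  std dev: %6.2fs", row[:file], row[:mean], row[:std_dev])
end
```

A report like this only surfaces which files swing the most; as Steph describes, explaining why they swing (network calls, SQL volume, file size) is the harder, more open-ended part of the investigation.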
If there's something obvious, let's address it; if not, move on. So I'm at that point today where I'm wrapping up all of that data, and then I'm going to move on, move on to the next thing. CHRIS: There's deep truth in that tweet that you shared at the start of this segment. The idea that if we knew the work that we had to do up front, yeah, we would just do that, but often, we don't. And so, being able to not treat it as a failure when something doesn't work out is, I think, so critical. I think to expand on the idea just a tiny bit, the idea of the scientific method, failure is totally an option and is part of science. I remember watching MythBusters, and Adam Savage is just kind of like, "Failure is always an option," and highlighting that as part of it. Like, it's an outcome. You've learned something. You have a new data point. You can take that and then carry it forward with you. But it's rough in the moment. And so, I do think that this is a worthwhile thing to meditate on. And it's something that I've had to revisit a handful of times in my career of just like, man, I feel like I've just been spinning my tires all week. I'm like, we know what we want to get done, but just each approach I take isn't working for one reason or another. And then, finally, you get to the end. And then you've got this paragraph-long summary of all the things that didn't work in your PR and a one-line change sort of thing. And those are painful, but they're part of the game. Like, that is unavoidable. I have not found a way to just know how to do the work upfront always. I would love that. I would sign up for whatever seminar was selling that. I wouldn't. I would know that that seminar is a lie, actually. But broadly, I'm intrigued by the idea; if someone were selling that, I'd be like, well, I mean, pitch me on it. Tell me why I should believe you; I don't, just to be clear. But yeah. STEPH: This project has really helped me embrace always setting a goal or a question upfront about what I'm wanting to achieve or what I'm looking to answer, because a number of times while Joël and I have been spelunking through data...And then so originally, with the saga, we started out with why doesn't our math match reality? We understand that if these tests are distributed perfectly across the CPUs, then that should cut the runtime in half. But yet, we weren't seeing that even though we had addressed the tentpoles. So we dug into understanding why. And the answer is because they're not perfectly distributed, and it's because of the runtime variance. And that was a critical moment to look back and say, "Did we achieve the goal?" Yes, we identified the problem. But once you see a problem, it's just so easy to dig in and keep going. It's like, well, now I want to know what's causing these files to have a runtime variance. But it's one of those: we achieved our goal. We acknowledged upfront that we wanted to at least understand why. Let's make a second decision: do we keep going? And I'm at that point where, frankly, I probably dug in a little more than I should because I'm stubborn. But I'm recognizing that now's the time to back away and then go back and move on to the next high-priority item, which, for funsies, I'll share: the next thing is converting Test::Unit tests over to RSpec because we have, I think, around 25 tests that are written in Test::Unit. And we want to move them over to RSpec because that particular step in the build process just takes a good three to four minutes.
And part of that is just booting up Rails and then running the tests very fast. And we're underutilizing the machine that's running them because it's only 25 tests, but there are 86 CPUs to run them. So we'd really like to combine those 25 tests with the rest of the RSpec suite and drop that step. So then that should add minimal time to the overall build but then should take us down at least a couple of minutes. And then it also makes it easier to manage and help folks so that way, there's one consistent testing framework that's in use versus having to manage some of these older tests. CHRIS: It's funny; I think it was just two episodes back where we talked about why RSpec, and I think both of us were just like, well, yeah. But I mean, if there are tests in another framework, like, it's fine, you just leave them, with the exception that if, like, 2% of our tests are in Test::Unit, and everything else is in RSpec, yeah, maybe that conversion effort seems totally worth it. But again, I think as you're describing that, what I'm hearing is just like the scientific method, just being somewhat structured in the approach to what's the hypothesis? And what's the procedure we're going to use to determine if that hypothesis is true or false? And then what do we do? And then what are the results? And then you just do that on a loop. But being structured, not just sort of exploring. Sometimes you have to be in exploratory mode. But I definitely find that that tiny bit of rigor of just like, wait, okay, before I actually do anything, what do I think is going on here? What's my guess? And then, okay, if that guess were true, what would I be able to observe in the world? Okay, here we go. And just that tiny bit of structure is so...it sometimes feels highly formal to go into that mode and be like, no, no, no, let me take a step back. Let me write down my thoughts. I'm going to have a little checklist and do the thing. But I've never regretted doing it. I will say that. I have deeply regretted not doing it. I feel like I should make a list of things that fit that structure. I've never regretted committing in Git ever. That's been great. I've always been able to unwind it, but I've never been able to not unwind it or the opposite. I've regretted not committing. I have not regretted committing. I have regretted not writing out my hypothesis or approach. I have not regretted doing it. And so, yeah, this feels like it falls firmly in that category of like, it's worth just a tiny bit of structure. There's a reason it is the scientific method. STEPH: Yeah, I agree. I've not regretted documenting upfront what it is I look to achieve and how I think I'm going to answer the question. That has been immensely helpful. Because then I also forget, like, two weeks ago, I'll be like, wasn't there a question around why this was happening, and I need to understand? And all of that was so context-heavy that as soon as I'm out of the thick of it, I completely forget it. So if I care about it deeply or if I want to be able to revisit it, then I need to document it for myself. It's given me a lot of empathy for people who do more scientific research around, oh my gosh, like, you have to document everything you do and then still be able to prove it five years from now or however long. I'm just throwing out numbers. And it needs to be organized enough that someone, if they do question your research, then you have it there. My research documents would not withstand scrutiny at this point because they are still just more personal notes.
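As a side note on the Test::Unit-to-RSpec conversion discussed above, the mechanical shape of one such translation tends to be small. The example below is purely hypothetical: the Invoice model and its tests are invented for illustration and are not from the codebase being discussed. It simply shows a Test::Unit-style test case and a roughly equivalent RSpec spec side by side.

```ruby
# Hypothetical before/after sketch of a Test::Unit-to-RSpec conversion.
# The Invoice model and both files are invented for illustration only.

# Before: test/unit/invoice_test.rb (Test::Unit style)
require "test_helper"

class InvoiceTest < Test::Unit::TestCase
  def test_total_includes_tax
    invoice = Invoice.new(subtotal_cents: 10_000, tax_cents: 800)
    assert_equal 10_800, invoice.total_cents
  end
end

# After: spec/models/invoice_spec.rb (RSpec equivalent)
require "rails_helper"

RSpec.describe Invoice do
  it "includes tax in the total" do
    invoice = Invoice.new(subtotal_cents: 10_000, tax_cents: 800)
    expect(invoice.total_cents).to eq(10_800)
  end
end
```

Once the handful of converted files live under spec/, the separate Test::Unit step can be dropped from CI and those examples simply ride along with the rest of the RSpec suite, which is the consolidation Steph describes.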
But yes, it's given me a lot of empathy and respect for people who do run very serious research, experiments, and trials, and then have to be able to prove it to the world. Pivoting just a bit, there's a particular topic that resonated with both you and me; in fact, it's a particular tweet. And, Louie, I do apologize if I mispronounce your last name, but Louie Bacaj. And we'll include a link in the show notes to the tweet, but Louie shared, "I managed multiple engineering teams before quitting tech. Now that I quit, I can speak freely. Here are 12 things your manager may not be telling you, but I know for a fact will help you." So there are a number of interesting discussions and comments that are in this thread. The one thing in particular that really caught my attention is number 10, and it's "Advocate for junior developers." So they said that their friend reminded them that just because you don't have 10-plus years of experience does not mean that they won't be good. Without junior engineers on the team, no one will grow. Help others grow; you'll grow too. And that's the part that I love so much, is that without junior engineers on the team, no one will grow, because that was very thought-provoking for me. It's something that I find that I agree with deeply, but I hadn't really considered why I agree with that so much. So I'm excited to dive into that topic with you. And then, as a second topic to go along with that: can juniors start with a remote team? I think that's one of the other questions when you and I were chatting about this. And I'm intrigued to hear your thoughts. CHRIS: A bunch of stuff there. Starting with the tweet, I love elements of this. Some of it feels like it's intentionally somewhat provocative. So like, without junior engineers on the team, no one will grow. That feels maybe a little bit past the bar because I think we can technically grow, and we can build things and whatnot. But I think what feels deeply true to me is truly help others grow; you'll grow too. The act of mentoring, of guiding, of training, of helping someone on their journey will inherently help you grow and, I think, change the way that you think about the work. I think the beginner mind, the earlier-in-the-career folks coming into a codebase, they will see things fundamentally differently in a really useful way. It's possible that along your career, you've just internalized things. You've been like, yeah, no, that was weird. But then I smashed my head against it for a while, and now I understand this thing. And it just makes sense to me. But it's like, that thing actually doesn't make sense. You have warped your mind to match the thing, not, quote, unquote, "come to understand it." This is sounding too judgmental to people who've been in the industry for a while, but I found this of myself. Or I can just take for granted things that took a long time to adapt my head to, and if anything, maybe I should have pushed back a little more. And so, I find that junior engineers can be a really fantastic lens for the complexity of a project. Like, the world is truly a complex place, and that's just true. But our job as software engineers is to tame that complexity and manage it. And so, I love the mindset that can come, or the conversations that can come, out of that.
And it's much like test-driven development is a pressure on the complexity of your code, having junior engineers join the team and needing to explain how all of the different features work, and what the overall architecture is, and the message passing under this and that, it's a really useful conversation to have. And so that "Help others grow; you'll grow too" feels deeply, deeply true to me. STEPH: Yeah, I couldn't agree more in regards to how juniors really help our team and especially, as you mentioned, with complexity and having those conversations. Some of the other things that have come to mind for me as well around the importance of having junior developers on your team...and maybe it's not specifically they're junior developers but that you just have a variety of experience on your team. It's going to help you lean into a culture of learning because you have people that are at different stages of their career. And so you want an environment where people can learn together, that they can fail together, and they can be public about it. And having people that are at different stages of their career will lead, well, at least ideally, it'll lead to more pair programming. It's going to lead to more productive code reviews because then people can ask more questions around why did you choose this, or why are you doing that? Versus if everybody is at the same level, then they may just intuitively have reasons that they think someone did something. But it takes someone that's a bit new to say, "Hey, why did you choose this?" or to bring up some other ideas or ways that they could pursue it. They may bring in new ideas for, like, why has the team always done something this way? Let's think about new ways that we could do this. Or maybe this is really unfriendly, the way that we're doing this, not just for junior people but for people that are new to the team. And then there's typically less knowledge siloing because then you're going to want to pair the newer folks with the more experienced folks. So that way, you don't have this more senior developer who's then off in a corner working by themselves. And it's going to improve your communication skills. There's just...I realized I'm just rambling because I feel like there are so many benefits that go along with having a variety of people on your team, especially in terms of experience. And that just leads to such a better learning environment and, ultimately, better software and better products. And yet, I find that so many companies won't embrace people that are newer to software. They always want the senior developers. They want the 10x-er or whatever those are. They want the people that have many, many years of experience. And there's so much value that comes from mentoring the next group of developers. And it's incredibly frustrating to me that one, companies often aren't open to it. But honestly, more than that, as long as you're upfront and honest about like, hey, this is the team that we need right now to build what we're looking to build, I can get past that; I can understand that. But please don't then mislead people and say that you're a junior-friendly team, and then not be prepared. I feel like some teams will go so far as to say, "Yes, we are junior-friendly," and they may even tweak their interview process to where it is a bit more junior-friendly. But then, by the time that person joins the team, they're really not prepared. They don't have an onboarding plan. They don't have a mentorship plan.
And then they fail that person because that person has worked hard to get there. And they've worked hard to bring that person onto the team, but then they don't have a plan from there. And I've seen it too many times. And it just frustrates me so much because it puts that junior person in such a vulnerable state where they really have to be an incredible self-advocate to then overcome those hurdles from a lack of preparation on that company's part. Okay, I think that's my vent. I'm sure I could vent about this a lot more, but I will cut it off there. That's the heart of it. CHRIS: I do think, like, with anything else, it's something that we have to be intentional about. And so what you're saying of like, yeah, we're a junior-friendly company, but then you're just hiring folks, trying to find folks that may work at a slightly lower pay grade, and that's what that means to you. So like, no, no, that's not what this is. This needs to be something different. We need to have a structure and an organization that can support folks at different points in their career. But it's interesting to me to think about the sort of why of it. And in the earlier part of this conversation, we talked about some of the benefits that can come organizationally from it, and I do sincerely believe in that. But I also believe that it is fundamentally one of the best ways to find really talented people early on in their career and be in a position to hire them where maybe later on in their career, that just wouldn't happen naturally. And I've seen this play out in a number of organizations. I went to Northeastern University for my college, and Northeastern is famous for the co-op program. Northeastern sounds really fancy. Now I learned that they have like a 7% acceptance rate for college applications right now, which is wildly low. When I went to Northeastern, it was not so fancy. So just in case anyone's hearing that and they're like, "Oh, Northeastern, wow." I'm not that fancy. [laughs] But they did have the co-op then, and they still have it now. And the co-op really is a differentiating thing. You do three six-month rotations. And it is this fundamental differentiator in terms of when you're graduating. Particularly, I was in mechanical engineering. I came out, and I actually knew what that meant in the world. And I'd used Outlook, and I knew what a water cooler was and how to talk near one because that's a critical thing to learn in the world. And it was a really transformative experience for me. But also, a thing that I observed was many of my friends ended up working at companies that they had co-oped for. I'm one of those people. I would say more than 50% of my friends ended up with a position at a company that they had done a co-op rotation with. And it really worked out fantastically. That organization and the individual got to try things out, experience. And then, I ended up staying at that company for a number of years, and it was a wonderful experience. But I don't know that I would have ended up there otherwise. That's not necessarily the way that would have played out. And similarly like, thoughtbot has the apprenticeship. And I have seen so many wonderful developers start at that very early point in their career. And there was this wonderful structure around them joining the thoughtbot team, intentional, structured, supported. And then those folks went on to be some of the most talented developers that I've ever worked with at a wonderfully talented organization.
And so the story of like, you should do this, organizations. This is a thing that you should invest in for yourself, not just for the individual, like, for both. Everybody wins in this case, in my mind. I will say, though, in terms of transparency, I currently manage a team of three developers. And we hired very intentionally for senior folks this early on in where we're at. And that was an intentional choice because I do believe that if you're going to be hiring more junior developers, that needs to be something that you do very intentionally, that you have a support structure in place, that you're able to invest the time in where they're at and make sure we have sort of... I think a larger team makes more sense to bring juniors into broadly. That's the thing that I'm saying out loud that I'm like, I should push on that a little bit. Is that true? Do I really believe that? But I think so, my actions obviously point to it. But it is an interesting trade-off space of how do you think about that? My hope is that as we grow as an organization, that we would then very intentionally start hiring folks in a more junior, mid-level to junior and be very intentional about how we support them, bring them into the organization, et cetera. I do believe it is a win-win situation for everyone when done with intention and with focus. STEPH: That's such an interesting bit that you just said because I very much appreciate when companies recognize do we have the bandwidth to support someone that's more junior? Because at thoughtbot, we go through periods where we don't have our apprenticeship that's open because we recognize we're not in a place that we can support someone. And we don't want to bring someone in unless we can help them be successful. I very much admire that and appreciate that about companies when they can perform that self-assessment. I am so intrigued. You'd mentioned being a smaller team. So you more intentionally hire senior developers. And I think that also makes sense because then you need to build up who's going to be in that mentorship pool? Because then people could leave, people could take vacations, and so then you need to have that support system in place. But yeah, I don't know what that then perfect balance is. It's like, okay, so then as soon as you have like five people available to mentor or interested in mentorship, it's like, then do you start bringing in the conversation of like, let's bring in someone that we can help build up and help them be successful and join our team? And I don't know what that magical number is. I do think it's important for teams to reflect to say, "Can we take on someone that's junior?" All the benefits of having someone that's junior. And then just being very honest and then having a plan for once that junior person does arrive. What does their career path look like while they've joined that team, and who's going to be that person that's going to help them level up? So not only make that choice upfront of yes, we are bringing someone on but let's also think about like the first six months of their work here at the company and what that's going to look like. It feels like an important step that a lot of companies fail to do. And I think that's why there are so many articles that then are like, hey, if you're a junior dev, here's all the things that you should do to be the best junior dev. That's fabulous. And we're constantly shoring up junior devs to be like, hey, here's all the things that you need to be great at. 
But we don't have as many conversations around; hey, here's all the things that your manager or the rest of your team should be great at to then support you equally as you are also doing your best to meet them. Like, they need to meet you halfway. And I'm not completely unsympathetic to the plight; I understand. It's often where I've seen with teams the more senior developers that have very strong mentorship communication skills are then also the teammates that get pulled into all the meetings and all the different projects, so then they are less available to be that mentor. And then that's how this often fails. So I don't think anybody is going into this intentionally, but yet, it's what happens for when someone is new and joining a team, and it hasn't been determined the next six months what that person's onboarding and career path looks like. Circling back just a bit, there's the question around, can juniors start with a remote team? I can go first. And I'm going to say unequivocally yes. There's no reason a junior can't start with a remote team. Because all the things that I feel strongly about come down to how is your team going to plan for this person? And how are they going to support this person? And all the benefits that you get from being in an office with a team, I think those do exist. And frankly, for someone like myself, it can be easier to establish a bond with someone that you get to see each day, get to see in person. You can walk up to their desk and can say, "Hey, I've got a question for you." But I think all those benefits just need to be transferred into a remote-friendly way. So I think it does ratchet up how intentional you have to be with your team and then onboarding a junior developer. But I absolutely think it's doable, and we should do it. CHRIS: You went with unequivocally yes as your answer. I'm going to go with a qualified maybe as my answer. I want this to be true, and I think it can be true. But I think it takes all the more intentionality than even what we've been describing. To shift the question around a little bit, what does remote work mean? It doesn't just mean we're doing the work, but we're separate. I think remote work inherently is at its best when we also are largely async first. And so that means more structured writing. The nature of the conversation tends to be more well-formed in each interaction. So it's like I read a big document, and then I pass it over to you. And at your leisure, you respond to it with a bunch of notes, and then it comes back to me. And I think that mode of interaction, while absolutely wonderful and something that I love, I think it fits really well when you're a little bit further on in your career when you understand things a little bit better. And I think the dance of conversation is more useful earlier on and so forth. And so, for someone who's newer to a team, I think having the ability to ask a quick question over and over is really useful to someone who's early on in their career. And remote, again, I think it's at its best when it's async. And those two are sort of at odds. And so it's that mild tension that gives me pause of like, something that I think that makes remote work great I do think is at least a hurdle that you would have to get over in supporting someone who's a little bit newer. Because I want to be deeply present for someone who's newer to their journey so that they can ask a lot of questions so that I am available to be interrupted regularly. 
I loved at thoughtbot sitting next to someone and being their mentor and being like, yeah, anytime you want, just tap on my desk. If I got my headphones on, that doesn't mean I'm ignoring you; it means I just need to make the sounds go away for a minute because that's the only way my brain will work. But feel free to just tap on my desk or whatever and grab my attention for a moment. And I'm available for that. That's an intentional choice. That's breaking up my continuity of the day, but we're choosing that for a reason. I think that's just a little harder to do in a remote context and all the more so if we're saying, hey, we're going to try this async thing where we write structured documents, and we communicate in these larger, more well-formed, communicates back and forth. But I do believe it can be done. I think it should be done. I just think it's all the harder for all of those reasons. STEPH: I agree that definitely makes it harder. But I'm going to push a little bit and say that when you mentioned being deeply present, I think we can be deeply present with someone and be remote. We can reduce the async requirements. So if you are someone that is more senior or more accustomed to the team, you can fall back to more of those async ways to communicate. But if someone is new, and needs more mentorship, then let's just set up time where we're going to literally hang out for a couple of hours each day or whatever pairing environment works best for them because pairing can also be exhausting. But hey, we're going to have a check-in each day; maybe we close out each day and touchpoint. And feel free to still message me and interrupt me. Like, you're going to just heighten your availability, even though it is remote. And be aware, like, hey, this person could message me at more times, and I'm okay with that. I have opted into this form of communication. So I think we just take that mindset of, hey, there's this person next to me, and I'm their mentor to like, hey, they're not next to me, but I'm still their mentor, and I'm still here for them. So I agree that it's harder. I think it falls on us and the team and the mentors to change ourselves versus saying to juniors, "Hey, sorry, it's remote. That's not going to work for you." It totally works for them. It's us, the mentors, that need to figure out how to make it work. I will say being on that mentor side that then not being able to see someone so if they are next to me, I can pick up on body language and facial expressions, and I can tell when somebody's stuck. And I can see that they're frustrated, or I can see that now's a good time for me to just be like, "Hey, how's it going? What are you working on? Or do you need help with something?" And I don't have that insight when I'm away. So there are real challenges like that that I don't know how to address. I have gone the obnoxious route [laughs] where I just message people, and I'm like, "Hey, how's it going? How's it going? How's it going?" And I try not to do that too much. But I haven't found a better way to manage that other than to constantly check in because I do have less feedback from that person that I'm working with unless they are just incredibly open about sharing when they're stuck. But typically, when you're newer to a team or newer to a career, you're going to be less willing to share when you're stuck. But yeah, there are some real challenges, but I still think it's something for us to figure out. 
Because otherwise, if we cut off access for remote teams to junior folks, I mean, that's where we're headed. There are so many companies and jobs that are headed remote that not being junior friendly and being remote in my mind is just not an option. It's something that we need to figure out. And it's hard, but we need to figure it out. CHRIS: Yeah, 100% on we need to figure that out and that that's on us as the people managing and structuring and bringing folks into teams. I think my stance would be like, let's just be clear that this is hard. It takes effort to make sure that we've provided a structure in which someone newer to a team can be successful. It takes all the more effort to do so in a remote context, I think. And it's that recognition that I think is critical. Because if we go into this with the wrong mindset, it's like, oh yeah, it's great. We got this new person on the team. And yeah, they should be ready to go in like two weeks, right? It's like, no, no, this is a different thing. We need to be very clear about it. This is going to require that we have someone who is able to work with them and support them in this. And that means that that person's output will likely be a little bit reduced for the period of time that we're talking about. But we're playing a long game here. Let's make sure we're clear on that. This is intentional. And let's be clear, the world of hiring and software right now it's not like super easy. There aren't way more software developers than there are jobs; at least, that's been my experience. So this is something absolutely worth investing in for just core business reasons and also good for people. So hey, it's a win-win. Let's do it. Let's figure it out. But also, let's be clear that it's going to be a little tricky along the way. So, you know, let's be intentional about that. But yeah, obviously do it, got to do it. STEPH: Wait, so I feel like we might have circled back to unequivocally yes. [laughs] Have we gotten there, or are you still on the fence? CHRIS: I was unequivocally yes from the beginning, but I couched it in, but...yeah, I said other things. You're right. I have now come around; let's say to unequivocally yes. STEPH: [laughs] Cool. I don't want to feel like I'm forcing you to agree with me. [laughs] But I mean, we just so rarely disagree. So we've either got to identify this as something that we disagree on, which would be one of those rare occasions like beer and Pop-Tarts. CHRIS: A watershed moment. Beer and Pop-Tarts. STEPH: Yeah, those are the only two so far. [laughter] CHRIS: Not together also. I just want to go on record beer and Pop-Tarts; I don't think would be...anyway. STEPH: Ooh, I don't know. It could work. It could work. CHRIS: Well, there's another thing we disagree on. STEPH: I would not turn it down. If I was eating a Pop-Tart, and you're like, "Hey, you want a beer?" I'd be like, "Sure," vice versa. I'm drinking a beer. "Hey, you want a Pop-Tart?" "Totally." CHRIS: Okay. Well yeah, if I'm making bad decisions, I'm obviously going to chain them together, but that doesn't mean that they're a good decision. It's just a chain of bad decisions. STEPH: I feel like one true thing I know about you is that when you make a decision, you're going to lean into it. So like, this is why you are all about if you're going to have a Pop-Tart, you're going to have the highest sugary junk content Pop-Tart possible. So it makes sense to me. CHRIS: It's the Mountain Dew theorem, yeah. STEPH: I didn't know this had a theorem. 
The Mountain Dew theorem? CHRIS: No, that's just my name. Well, yeah, if I'm going to drink soda, I'm going to drink Mountain Dew, the nonsense nuclear option of soda. So yeah, I guess you're describing me, although as you say it back to me, I suddenly feel very, like, oh God, is this who I am as a person? [laughs] And I'm not going to say you're wrong. I'm just going to spend a little while thinking about some stuff. STEPH: I mean, you embrace it. I think that's lovely. You know what you want. It's like, all right, let's do this. Let's go all in. CHRIS: Thank you for finding a wonderfully positive way to frame it here at the end. But I think on that note, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Welcome to Season 3! New episodes will be released throughout the spring and summer of 2022. The first episode of season 3 features a conversation with Chris Mann '00. Chris has built his career around making a difference in the lives of others. He's joined in conversation with JP Cunningham '23. They discuss Chris' time at Holy Cross and how he has carried the HC mission to serve others throughout his life and career. Interview originally recorded in November 2021. -- Chris: And so, I think you're seeing companies really say, "This is about our values and being clear on what our values are." Because our most important stakeholders, our people, are saying that that's what matters to them and that's what they care about. And so, I think we just think about business differently. Maura: Welcome to Mission-Driven, where we speak with alumni who are leveraging their Holy Cross education to make a meaningful difference in the world around them. I'm your host, Maura Sweeney from the class of 2007, Director of Alumni Career Development at Holy Cross. I'm delighted to welcome you to today's show. This episode features Chris Mann from the class of 2000. Maura: Chris's career has spanned roles that have one thing in common, making a positive impact on people and communities. He graduated from Holy Cross with a psychology major and art history minor. With this foundation, he joined the Dana-Farber and Jimmy Fund team, and his career flourished. Skilled at fundraising, event planning, marketing, and communications, Chris flexed his talents in roles at New Balance, Cone Communications, Reebok, and City Year. Maura: At the time this podcast was recorded, Chris worked as the Senior Vice President of Development for City Year. At the time this podcast airs, Chris will have assumed a new role at Bain Capital as the first Vice President of Community Affairs, leading their philanthropy, employee volunteerism, events, and sponsorship. Chris is joined in conversation by JP Cunningham from the class of 2023. Maura: Their conversation is far-reaching but starts with the transformative years that Chris spent at Holy Cross and how his time on the track and field team, his service as senior class president, and his experiences during immersion programs and running summer orientation helped shape who he is today. Better yet, he can count the ways that the Holy Cross Alumni Network has supported him through each step in his career. A proud alumnus, Chris exemplifies the impact that one person can make by committing their talents to mission-driven work. JP: Hello, everyone. Thank you all for listening. I'm JP Cunningham. I'm a junior here at Holy Cross. And I'm joined by Chris Mann. Chris, how are you doing today? Chris: Hey, JP. I'm good. Good to be here with you today. JP: Thank you. So, yeah, I guess with that, we'll get right into it. I wanted to start with a little bit before your time at Holy Cross. So, my first question is, during your college search, what were some of the factors that drew you to the college? And was it your top choice? Yeah, if you can touch on that. Chris: Yeah, absolutely. So, like most high school students, I was looking at a lot of different schools. I didn't quite know what I wanted. I was the first and oldest child in my family, so I hadn't had any brothers or sisters go through the college application process before. And at the time, this was in the mid-'90s, there wasn't as much information. It was kind of the glossy books you got in the mail and things like that, and word of mouth.
But I knew a couple of things. Chris: I knew living in Andover, Massachusetts and growing up there, I wanted to be close enough to home that I could get back and forth. So, that kind of kept me looking at New England colleges for the most part. And as I started exploring, I knew about Holy Cross's reputation from an academic standpoint, but also had a couple of people at my high school, Andover High School, that I remember really respecting and looking up to in some ways that had gone to Holy Cross a couple of years before me. Chris: So, Chris Sintros, who was class of '98, and Christine Anderson, class of '99. And I think it just piqued my interest to say, "Hey, those are people that I think I want to be like, and they chose this school." I actually got really fortunate to end up at Holy Cross. It was one of, I think, five schools I applied to, and I was waitlisted. So, I actually didn't know that I was going to get in until right to the end, and was really relieved and excited when I got in off the waitlist. Chris: And it ended up being a great scenario because I came on campus as the only person from my high school going to Holy Cross in that class. And I was matched up with three roommates in a quad in my freshman year. And it really helped me build some relationships and a network right away in a new place, new environment. JP: Awesome. That's really cool. Yeah, I can kind of relate to that, too, because both my dad and my sister went here, and then a lot of just friends and older classmates at my high school went to Holy Cross. And they're all just role models. And I felt the same way like, wow, this seems like a good place to be and that's what drew me there, too. So, it's great. Chris: Yeah. And I would say too, in visiting the school and seeing it, I mean, I certainly fell in love with the classic New England brick-and-ivy college setting, and it's a beautiful campus, as you know. And so, that, I was really excited about. And I started to get more and more of a feel just as I came to visit a couple of different times. Chris: And as you started to read and hear about the college's mission, and talking about being men and women for and with others, that all started to really resonate for me and felt a little different compared to some of the other schools that I had been visiting, and I loved that. I also really thought that the size was right for me. I was somewhat of a shy kid. I think I was trying to figure out where my place was. Chris: And I liked the idea of being in a school that felt a little smaller and where I wasn't going to get lost in the shuffle. And I think that ended up being a really big thing for me over the course of the four years, too. JP: Yeah, I couldn't agree more. I feel like people might say it's cliche, but I feel like at Holy Cross, the sense of community, just being on campus that first time, at least for me too, visiting that first time, there's something about it that really draws you and makes you feel like, "Hey, this is the place for me." Yes. I guess moving into the next question, after you became a student here, what were some of the things you were involved in during your time on the Hill? And was there one that you were most passionate about? Chris: I got to do a lot of different things, which was, to our earlier point, the benefit of going to a smaller school with a lot of opportunities. Off the bat, athletics ended up being a big thing for me, which wasn't something I had planned. I had done sports in high school all three seasons.
Really, I was passionate about basketball and track and field, but hadn't expected to be able to do that in college. Chris: And I showed up on campus and I remember, I think it was probably the first week of school, I got a phone call from Larry Napolitano who was the captain of the track team just saying, "Hey, we saw you did track and field in high school. Would you be interested in coming out and joining the team?" And I said, "Yes", and it was one of the great experiences of my time on the Hill being able to be part of that team. Chris: I certainly wasn't a phenomenal athlete or setting any records, but being part of that team environment, getting a chance to get into the daily routine that athletes do I think really benefited me. The structure was really helpful. I think it prepared me for life after college and having a busy schedule of going from weightlifting, to workouts, to classes, to other things. Chris: And just the relationships you build with teammates and coaches and the life lessons of athletics were really valuable and it helps cement a lifelong practice of fitness and health that exists to this day. So, that was foundational. That was a big one. And then, later in my time at Holy Cross, my senior year, I ended up getting encouraged to run for student government. And I ended up being elected president of the senior class of 2000. Chris: And that was a really powerful experience for me, too, so having a broader role in leading fellow students and thinking about our voice on campus. And to be honest, putting myself out there more publicly to run and be elected was not something I was very comfortable with or used to. So, building up that courage and having people believe in me to do that was also really important. And I think it started to show me that maybe I could do some things that I hadn't previously been confident enough to do or thought I could do. Chris: So, that was another big experience. And same thing, balancing those commitments with academics, with athletics really prepared me for life after college and the working world. JP: That's great. Yeah. I feel like balancing all those activities, being a full-time student athlete while being the president of your class can only help you in the long run and having that structure to your schedule and balancing different activities. Because I don't play any sports, but just balancing activities week by week with the schoolwork and all that, it definitely... I feel like it can only help you for after you graduate. JP: So, yeah, going off that, I guess a little more shifting towards the academics. One of the great things about Holy Cross in liberal arts education in general is that you really have the opportunity to major in anything that piques your interest, and then go out and succeed in business or whatever field you choose. So, I know you're a psychology and art history major. Were there any specific skills that you developed from your course of study that have helped you in your professional career? Chris: Yeah, it's interesting. It was another case of I didn't know what I wanted to study. When I came to Holy Cross, I started taking a few different classes in different areas to try and understand what resonated with me and that was what attracted... the liberal arts education attracted me to Holy Cross as well because I didn't know what I wanted to do. 
Chris: And I found myself really intrigued in the early psychology classes that I took, whether it was Intro to Psychology, or we had some ones later, behavioral psychology and other things, that just fascinated me between the... both the science and the depth of that field, but then also the ways in which humans interact and the way in which our environment influences us just fascinated me. And I really found myself loving that. Chris: And then, on the flip side, I ended up getting a minor in art history, similarly, because I just found myself interested and passionate in the subject matter and human experience behind that. I wouldn't have thought at the time that either of those would translate into a career path or job. I wasn't going to be a psychologist. I certainly wasn't an artist, but I have found over time that I think there are some lessons in the specifics of that. Chris: And in my current job in previous iterations where I'm a fundraiser, and in essence, I sell people on City Year's mission and investing in City Year's mission, some of the experiences and the lessons from psychology come out there, and understanding how you engage and connect with and influence people. So, that is certainly there. Chris: But more broadly, I just think the liberal arts' approach and specifically Holy Cross and the rigor of the academics forced me to really get tight and concise with my thinking, with how to make an argument, with how to take in information, synthesize that and consolidate it and communicate in a really effective, clear way, both verbally, written, visually, et cetera. Those are things I lean on on a daily basis. And I don't think I appreciated it at the time. Chris: But in talking with friends and colleagues and others whose college experiences were very different, either giant lecture halls or other things, the time, the attention, the rigor of the academics was really valuable. And I don't think I realized it until much later. JP: Yeah, I agree. I feel like everyone... and that's also one of the things that drew me to the liberal arts education is the fact that people say, obviously, you study what's interesting to you, but then being able to develop those skills like critical thinking, communication, and just being able to use those skills effectively go a long way in the professional world. So, you touched on some of the activities you were involved in when you are here at Holy Cross. JP: And since you graduated, there have been a number of new programs, activities. For example, the Ciocca Center for Business, Ethics, and Society was established in 2006. Are there any programs or activities happening now that you've become aware of at Holy Cross that stand out to you or you wish were around when you were a student? Chris: I think the Ciocca Center would have been something I would have really enjoyed getting a chance to participate in. I think this idea of business and ethics and where those intersect, and how companies can have an impact on society has been the centerpiece of my career and the different jobs that I've had. So, I think I would have really enjoyed going deeper there in a more formal way, for sure. Chris: I also really appreciate what the college has done in the last few years as we think about diversity at Holy Cross and how is the Holy Cross experience accessible to all. That is, I think, one takeaway from my time. 
Certainly, we had some level of diversity when I was at Holy Cross, but it was not nearly what it needs to be and what it should be going forward. And I think particularly for fellow classmates that were of color or came from different backgrounds than the majority of students, I think it was a really challenging thing for them and continues to be. Chris: And so, I think the idea of having a college community that does have more representation, does have more diversity across all levels and spectrums of how diversity shows up is valuable because I think, to be honest, it creates a better learning environment, it creates better dialogue, it creates better understanding. And I think that was a challenge, to be honest, during my time at Holy Cross. Many of the students were just like me, coming from the same families, communities, et cetera. Chris: And so, that's something that I've been very encouraged to see over the last few years. JP: Yeah, absolutely. I feel like as a student for me and talking to alumni like yourself and just other people I've spoken to, people just say it's awesome to see the way the college is changing for the better, both academically and socially, like you just touched on. Moving a little away from strictly Holy Cross, can you maybe run through your career or professional path starting after you graduated from the college? Chris: Yeah. So, I was really lucky, and this is an area where I talk to current students or students that are considering Holy Cross, and the network of alumni really stepped up and helped me start my career and pursue the opportunities I've had. And I've been really fortunate to come across Holy Cross graduates at every role, every organization that I've been in, which speaks to the power of even the network of a small school overall. Chris: So, I was trying to decide what I wanted to do after graduation. As we mentioned, I had done activities in track and field. I was big into sports, so I was thinking sports marketing and those areas. I also got a chance, while I was on campus, to do a couple of spring break trips via Habitat for Humanity and build some houses down in Tallahassee, Florida for two spring breaks in a row. Chris: That and an internship at the Special Olympics while I was a student started to spark my interest in having a job where I could actually give back and support causes I cared about, and, beyond earning a paycheck, feel like I was having an impact on a daily basis in my work. So, that was interesting to me. And we had also started and run a summer orientation program, the Gateway Summer Orientation Program. Chris: I was fortunate to be part of that first summer orientation program as a leader and then later, one of the co-leads of it. And I found myself really liking and being attracted by events and the planning that would go into preparing for an orientation program or some other event, and then seeing that come together and seeing people have a great time interacting and being part of that event. So, I was looking at sports marketing. I was looking at event management. I was thinking about nonprofits and exploring different things. Chris: And I was talking with John Hayes, who's class of '91. And he was the director of the Holy Cross Fund at the time. He was our advisor for our Senior Class Gift. And John said, "Hey, you should really go talk to my friend Cynthia Carton O'Brien now, class of '93, who was working at Dana-Farber Cancer Institute and the Jimmy Fund." And so, he connected me to Cindy via an informational interview.
I went and learned more about Dana-Farber and the Jimmy Fund, and just loved the idea of it. Chris: It was a cancer hospital, obviously in Boston, doing amazing work for patients and their families, but also had this deep connection and history with the Red Sox. So, as a sports fan, I was excited about that. And I ended up applying for a couple of different jobs there coming out of school. And on the fundraising side, one was potentially to work in planned giving, so helping people think about their giving benefiting those beyond their lifetimes and resourcing the organization for the future. Chris: And then, the other one was going to be a rotational role, which was going to work on different areas of fundraising, the Boston Marathon Jimmy Fund Walk, donor advance and stewardship events, and then also cause marketing, which at the time was a fairly new thing that companies were starting to do. And so, I ended up getting that second job on the rotation. And it was just a phenomenal opportunity and experience to get to learn different parts of fundraising and to work with some really, really great teams. Chris: So, when I think about advice for people coming out of school and what to think about, I think finding a job where you can learn as much as possible and get exposed to as many different things as you can certainly really worked out for me. And it gave me a chance to understand what parts of fundraising and events I really liked and what worked well for me. And I was also really lucky to work with just some amazing people. Chris: In particular, my first boss and my first teams on the Jimmy Fund Walk, which later included a couple of Holy Cross grads in the years after me that we hired as well, were just a perfect first start into the working world, for sure. JP: Definitely. So, you may have just answered this next question, but I'll still pose it to you. I know you talked about your experience with the Gateways orientation. So, would you say that was something from your time at Holy Cross that greatly influenced your post-grad experience and career? Or were there a few other things? Chris: Gateways did influence me mostly in that I realized that I really enjoyed working in a team environment, and it was with a lot of students from across different grades that I hadn't met or didn't know before. And I think that idea of working in a team that had some diversity in their experiences, et cetera, is definitely something that's resonated longer term and I've realized leads to a great work environment and a great end product, as it did in that Gateways orientation. Chris: I definitely loved the event planning piece of it. And so, I think that steered me towards my first job, for sure. As I got older, I realized I didn't love the always-on aspect and the stress of the event planning, and so I've since moved to other areas. But I think the idea of that camaraderie and coming together to build something bigger than yourselves was really valuable for me. And I also loved being able to share my experiences with others and with other students. Chris: And so, getting a chance to really talk to people and help share my experience was something that I valued. I think it was probably an early stage mentorship. I don't think I realized it at the time, but I think that's what drew me to it was being able to work with students who were coming into a Holy Cross environment, nervous about it, not sure what to do, and really saying, "Hey, this is going to be a great experience for you.
And here's all the reasons why or here are some things to look at." Chris: I realized, I think later, that that idea of being a mentor and having that mentoring relationship is something that I really value and enjoy doing. But again, I don't think I realized it at the time. But I think it was one of those foundational things, for sure, at least in the early jobs. JP: Absolutely. Yeah, that's awesome. I feel like it's cool to think back on the different ways certain events or activities that you took part in or spent so much time participating in can go such a long way in your life and the decisions you made, and things like that. Chris: I think so. I think other experiences, too, that I had probably steered me more in that direction of what I wanted to do for a career. I think having the opportunity to do an internship during my junior year with the Special Olympics of Massachusetts and help to do the marketing and recruitment for a Polar Plunge event that they did sparked an interest in, "Oh, you can do marketing, and you can do these types of business things that I want to do that have an impact for our cause." Chris: And Special Olympics was near and dear to my heart because my mom was a special education teacher. And so, I saw firsthand the power that that can have when you have inclusive opportunities for all young people, and give them a chance to participate in athletics and have those same experiences and lessons that I did from it, which was really valuable. So, I think the idea and the spark of having a job that can have an impact started there. Chris: And then, I had a summer experience in between my junior and senior years at Holy Cross, where I worked in an educational camp for kids called Super Camp and spent a few weeks on a college campus working with students that were struggling academically. And what we learned in the process, when you get to meet these kids and work with them, is that, in most cases, it wasn't because they didn't have the ability to learn or to do that work. Chris: It was because there were other things going on in their lives that were either being a distraction or creating additional challenges that made it hard for them to show up in the education environment or in school in the way that they could or they should. And I think that in hindsight really is why I find myself loving the work that we do at City Year right now. And it's come full circle in that way because we see that talent is absolutely equally distributed and it's everywhere, but access and opportunity are not equally distributed. Chris: So, that's part of what we get to do at City Year is to say, "How can we make sure that every student gets the opportunities that they deserve to really tap into their talent and see success in their futures?" And I think that experience at Super Camp really gave me the first understanding of what education can look like when it works for everyone. JP: Yeah, absolutely. So, while we're looking in hindsight and reflecting on your experience post-Holy Cross, I know there's a lot to say about the strength of Holy Cross's Alumni Network. Could you tell us a little bit about how that network has influenced your professional career? Chris: Yeah, it's influenced my professional career because I've been lucky to work with Holy Cross grads every step of the way in almost every job that I can think of.
So, at Dana-Farber Cancer Institute and the Jimmy Fund, we hired Joe Robertson, who was a track and field classmate of mine, class of '02, and Rebecca Manikian the year before, '01. So, I got to work with both of them on the Boston Marathon Jimmy Fund Walk and had a community and a shared experience with the two of them. Chris: I worked with Kristina Coppola Timmins at Cone Communications. And Rebecca and Joe also ended up being Cone alumni at different points. And then, now, there's a huge number of Holy Cross grads, past and present, that I have worked with, including my current boss, AnnMaura Connolly, class of '86. So, I think at every step, I've seen Holy Cross alumni show up both in the work environment and help in the broader network. Chris: There's not a question that I would have or a connection I'd be trying to make that I couldn't reach out to somebody at Holy Cross and just say, "Hey, we share this background. Can you help?" And there's been countless times where I've had Holy Cross grads that I either know or don't know be willing to offer advice or make a connection, no questions asked and right away, all the time. And I think that's fairly rare, at least in my experience. Chris: And it always surprises me how we'll be having a conversation and somebody will say, "Oh, they went to Holy Cross." It's amazing, I think, how people show up, particularly in the space that I'm in where you're working in the nonprofit field or in other jobs that are trying to have an impact on society. I think that's where the Jesuit teachings resonate for folks. And they really internalized that learning and those values, and I think it shows up in their career choices, and it certainly did for me. JP: Definitely. Yeah. Even for me as a student, I feel like something everyone can agree on is the strength of the Holy Cross alumni network. And something I always think about, even before I became a student here, just like walking around, wearing either a Holy Cross hat or that purple shirt, I was surprised, and people would be surprised, by how many times you would get stopped, like, "Oh, you went to Holy Cross. I was a grad from this class." And I think that's something really special about that network. Chris: Happens all the time. And you see it in families, too. I mean, you're seeing it in your own with your sister being a grad. And I'm hopeful that my kids will end up being graduates as well. But I think you see that legacy in a lot of ways among families, among communities, where that becomes more than just an individual experience. It's a shared family experience, which is a pretty special thing. JP: Yeah, definitely. And even the fact that, like you mentioned, even just being a student, the fact that any alumni you either reach out to or you meet, they're just so willing to sit down and talk for as long as you need and give you advice or whatever the purpose is for that phone call or that meeting. They really just sit down and are willing to help in any way possible. So, I think that's something that's awesome about the college. JP: So, moving along, I think one of the great things about this podcast is that it highlights and showcases the different ways that the Holy Cross mission of men and women for others can play into so many different careers and stories of different alumni. So, I guess just to start, what mission or values fuel your professional work today? Chris: Yeah.
It's interesting, I think I've been fortunate to work at this intersection of companies and causes coming together to drive better business and greater good. And it's happened throughout my career and gone full circle, starting on the nonprofit side at Dana-Farber Cancer Institute and the Jimmy Fund and moving over to the corporate side at New Balance Athletic Shoe and later Reebok, and then now in my current role at City Year. Chris: Seeing how companies can work with nonprofits and advising some of them on how to do that, when I was at Cone Communications and advising clients on those pieces, it's just always fascinated me that you can have a social impact. And it doesn't have to just be about charity, it doesn't have to be just about volunteerism or working in a nonprofit; there's all kinds of ways in which everybody can do that individually and collectively. Chris: Companies have a tremendous opportunity and tremendous power to be able to do that. And so, for me, I realized early on through those internships and experiences that I knew I was motivated by doing something kind of more than earning a paycheck, that I wanted to see that impact. Personally, I wanted to have a job where, at the end of the day, I could feel like we were doing something bigger. And I think that was always a core value. Chris: I think, for me, that came from my parents. I think my example was seeing my mom be a special education teacher and work with students to give them that opportunity and to address some of that inequity and make sure that education was tailored to their needs and their situation, paired with my dad, who was an executive at Enterprise Rent-A-Car for his whole career, a high-powered, fast-growing business, and getting to see that side of it. Chris: And I think those two sensibilities really steered what I was looking for, and seeing it as an example. I wanted to dig into business problems. I loved the questions of, how do you think deeply about that? How do you try and solve those? How do you get somebody to buy your product or support your company or do something? So, the marketing and advertising and those pieces of it were fascinating to me intellectually, but I wanted to see an impact at the same time. Chris: And so, I think I was searching for that through each role, of saying, "How do we combine those two things? And how does that show up?" My time at the Jimmy Fund was really good for two things. I think my first job there was working a lot with families that were participating in the Boston Marathon Jimmy Fund Walk. And what I realized really quickly was, it was such a huge crash course in empathy and in building relationships and in listening. Chris: Because in most cases, I was just helping people that were participating in the event get registered, get their team organized and set up, get the T-shirts for the event, help them with their fundraising, things like that. But in most cases, I was talking with people that were either in the midst of the worst experience of their life because they were having somebody in their family facing cancer, or they were remembering the worst experience of their life, having lost somebody to cancer. Chris: And so, I think what I found is, you'd have a lot of conversations where people would get frustrated or they'd be angry or emotional, all rightfully so because they were dealing with really hard things. And I think I learned to be able to pick up on that and to connect with them and to try and find ways to encourage and support.
And I think it was just a hugely valuable early experience in saying, "How do you connect with people and how do you build relationships?" Chris: "And how do you not take for granted both your own health and good fortune, but also how you'd be there when somebody else is struggling and understand what they're dealing with? And can you lift that load in some small way?" And I certainly was not doing anything significant in that regard and in that role, but I could make their day a little bit easier or solve a problem for them, et cetera. I started to really get excited about the ability to do that. And I found that was really motivating for me. Chris: So, the idea of having a purpose and being able to help somebody in a process during that day was, I think, started to become foundational. I think it also gave me a lot of perspective. You could be having a rough day in your job or something else going on. You could walk down the hall to the Jimmy Fund clinic and see the kids there that are coming in for treatment. It puts it in perspective pretty quick on your challenges and what's tough in your life when you're seeing that with a kid. Chris: So, for me, I think it helped build an immense sense of both opportunity to have an impact but then also an immense sense of gratitude for how fortunate I was. And I think those were two foundational pieces of that experience. And then, later, the second big lesson that I learned and this sparked the longer term career path was, I started to work more with the companies that were participating in the Jimmy Fund Walk, either that were sponsoring the event in different ways or they were getting their employees actively walking and fundraising. Chris: And that gave me a different side of it. It gave me exposure to stuff that I hadn't thought of, which was why would businesses do these types of things? Why would businesses want to have some sort of impact socially, which at the time was still relatively, I wouldn't say uncommon, but it wasn't as clear and upfront as it is today. Philanthropy was something that companies did on the side. It was nice to do because they wanted to be good citizens. But it wasn't a business strategy. Chris: It wasn't something that people were asking them about on a daily basis. It wasn't something that they thought about as part of their broader work as an organization and in their community. And so, that just fascinated me was like, why would companies want to do this outside of a classic kind of capitalist structure where they just have to add value for shareholders in the old Adam Smith lessons and things like that? Chris: And what I realized was, there was so much potential and so many resources that companies could bring to bear to help solve social issues. They had incredible skill and knowledge and power behind what they were doing in a lot of cases, really sophisticated ways to do things as businesses. Two, they had amazing people that they can deploy to have an impact in different ways, whether that was volunteering their time or giving access to their customers, things like that. Chris: And then, three, they can really tell a powerful story. Many companies can reach huge numbers of people and customers in a way that nonprofits can't and don't have the dollars or the access to be able to do. So, they could raise awareness and shine a light on different issues and get people to engage and support in a way that no nonprofit could ever hope to do. 
And I just became fascinated by that, on what a company could potentially do to have an impact in their community. Chris: And so, I think that job gave me two foundational experiences that I think have started to show up in each of the subsequent jobs that I started to have and really got me on that path. So, I think that's where the kind of being men and women for others started to show up for me was it was like a light went on, like, "Oh, this is how I can do that. This is where I can kind of have that be part of my daily life." JP: Yeah, that's amazing. I think what stuck out to me there was the perspective that you gained and you're sharing with us today is going back to at work or at school, you could be having a really bad day and that's that. I mean, obviously, no one enjoys having a bad day and it happens. But being able to just realize that oftentimes it could be way worse, and there's people, there are children and other people struggling, and they may be having a way worse day than you, I think that's a really important perspective for people to develop and take with them day by day. Chris: Yeah, I think so. Now, we have to acknowledge that that's easy for me to do as a white male, heterosexual, affluent, man of privilege in every possible dimension you can probably think of. I've had every advantage I could possibly have. And so, I think it's easy to say, "Have gratitude and appreciate those things when your life is what my life has been." And that doesn't mean we haven't had challenges and I haven't face things that have been tough, but I think it does give you a bit of a perspective. Chris: And I think gratitude and appreciation for those advantages and those experiences I've had is something that's driven a lot of the work for me and the why. But I would say within that, it's not uncommon, people come to try to have a social impact in many ways because of either guilt or a feeling of charity, like, "This is something I should pay it back. I should give back," and I certainly did. I think that was my perspective. I've been given a lot of opportunity. Chris: I owe it to others to give back in that way. I think when you start to do the work and you start to get proximate and really work on different issues, whatever it is, whether it's education or hunger or any way in which racism shows up in all of our systems, you start to realize that you move on the scale from charity to social justice, and really saying, "This isn't about me giving back or appreciating the opportunities I've had. This is about changing a system that is not just." Chris: "And it's my responsibility to play a deeper role there and to do what I can with the resources I have to drive some change there." So, I think you move from charity to social justice as you start to get proximate and more exposed to issues. And I think Holy Cross planted the ideas behind it and the early experiences, whether it was Habitat or other areas where I could start to see and get exposed to that. Chris: But I think later in my career and particularly at City Year, I started to see that more clearly and I think that's why my career has moved more in that direction. JP: Definitely. Yeah. So, I think you also, with those remarks you made, answered the next question I had, but I wanted to just emphasize. Is there something specific that drives you to work hard each and every day? And my takeaway from all you've just said is, I feel like the common theme of impact and purpose. 
That's what I picked up on, just whether it's you impacting someone or something, or the company you're working for, or just being able to realize the impact that someone else is having or that greater company is having on a specific cause. JP: That was my takeaway. And I think that's awesome just from a professional standpoint, being able to live by those themes of purpose and impact. That's really great. Chris: I think that's right. I think purpose and impact is the right way to frame it. I do think about that, hopefully, every day. Am I having a purpose and am I having an impact? In the day to day, I think you don't probably get up and get out of bed and think about that immediately. But I do think, as I thought about how I want to work and what jobs I want to take and what organizations I want to be at, I think in those times of reflection, certainly grounding back into purpose and impact has absolutely been the question I asked myself. Chris: Where can I feel connected and closest to a purpose? And where can I have the greatest impact in either my experience or in an organization that's working on a really hard problem? So, certainly, when I thought about coming to City Year and in my most recent role, that's absolutely what I was thinking about is, I had missed being close to the impact in a way that I had at Dana-Farber. Chris: And even at New Balance where I was on the corporate side but working closely with a lot of our nonprofit partners, I got to see that impact on a daily basis. When I moved into Cone Communications and advising nonprofit clients and business clients on their programs and their impact, I loved it. It was mentally fascinating and rigorous and an amazing training ground on all kinds of things around strategy and marketing and communications. Chris: Really tremendous skills and experience. But I found myself too far away from the people that we were serving, and I missed that. I wanted to get closer and back to that. And I think that's what drew me back to the nonprofit side at City Year was a chance to really work among people that were having that level of idealism and impact on a daily basis. Chris: And I also felt like it was a chance to take experiences and skills that I gained from other jobs and put them to really good use in helping, so you think about how we work with companies. Yeah. And I think the working hard piece to our earlier conversation, I think the rigor of Holy Cross academically and then all the other things that I got to be involved in really built that work habit in to where you show up and you do the work every day. Chris: And I think good things happen if you consistently spend the time and put in the effort. And again, I would say I had great examples, whether it's my parents or whether it's coaches and others, that really ingrain that work ethic and constantly trying to move forward for something bigger, whether it was a team that you were part of or whether it was the organization and the issue you were trying to support. JP: Definitely. Yeah. So, I guess to shift gears a little bit here, I wanted to talk about the Boston Marathon. Correct me if I'm wrong here, but you ran the Boston Marathon not once, not twice, but three times. Is that- Chris: Four actually. JP: Four, okay. So, the Boston Marathon, four times. At least in my opinion, being able to run the marathon one time is one heck of an achievement. So, could you tell me a little bit about what drove you to do that again and again and again and again? Chris: Yeah, yeah. 
Working at Dana-Farber Cancer Institute really was the big thing, and the first event that I got to work on was the Boston Marathon Jimmy Fund Walk. I got exposed to the course because there was a fundraising walk along the route of the Boston Marathon. And we'd have thousands of people walk and fundraise for Dana-Farber along the route. So, I got to know the marathon course, its history. Chris: I got a really good opportunity to work with people like Dave McGillivray, the director of the Boston Marathon, and get to know him and his amazing team and learn from them. And I just started to fall in love with that event. I would volunteer at the marathon and see it. And as a former track and field athlete, I wasn't a distance runner by any means, but I started to get it into my head that it would be a really challenging athletic experience. And so, that was interesting. Chris: To be honest, it was my wife that steered me in that direction. She ran the marathon first a couple of times for Dana-Farber and fundraised for them. And so, I got to see her experience doing that. And I'm kind of a competitive guy, so I decided that I wanted to do it myself. And I couldn't just let her have all the fun. So, I did, I signed up and ran for Dana-Farber. I actually got a chance to run that first marathon with my wife who, God bless her, waited for me and dragged me along those last few miles because I was struggling, and she was kind and carried me along. Chris: And then, I had a chance to do it a couple more times, which was great, including when I didn't finish, which was a huge disappointment and a physical struggle. But I got to come back in another year and completed it, and it's some of my greatest memories and experiences, participating in that event and being part of fundraising for Dana-Farber, and for City Year as part of that. The marathon is a really special event for Boston. Chris: And I think what you learn in that event is that people are always surprised and super complimentary, like you were, about being able to run that marathon. I fully believe that most people can run a marathon, and I've seen it firsthand on the course. I think what it gets to is our earlier conversation about how do you go pursue your goals and do those things. And anybody that's run a marathon can tell you that the race day is the reward. Chris: It's the thing at the end. It's the countless hours, the 16 weeks before, where you're going and you're running three, four, five, six days a week, depending on what your training schedule is. And putting in countless miles in good weather, bad weather, darkness, snow, rain, cold, your ability to get up and do that each day and keep consistently growing the mileage and keeping up the training, that's what leads to the marathon and the success at the end. Chris: So, it's really about, can you do that work on a daily basis? And can you progress over time by sticking with it through the ups and the downs? And then, I was really lucky to train with great groups of people each time. And I think that's another lesson of it: it's a pretty hard thing to go train by yourself and go run a marathon by yourself. Most people that do it have done their training with a group of friends and other people that are running that helped motivate them, support them, and inspire them. Chris: And then, day of, all the people that are out there are cheering you on, supporting you, helping you get to that day. It's truly a team effort. So, I just got to get the rewards of doing it four times.
JP: Yeah, that's an awesome achievement. And I have a ton of respect for you and anyone who does that. In fact, one of my buddies here at Holy Cross, Colman Benson, he's a sophomore, and he ran this past marathon. And just seeing him go through that training earlier in the fall, I'd be like, "Oh, what are you doing tomorrow?" He's like, "Oh, I'm running 12 miles in the morning, then I'm going to class." And I just think that's very impressive and definitely an awesome achievement. Chris: Yeah, it's not too late, JP. You can start training, too. JP: Yeah. So, I read in a previous interview that one of your most memorable achievements is your support of Susan G. Komen for the Cure while you were with New Balance. Can you speak a little to that? Chris: Yeah. So, after my first couple jobs at Dana-Farber Cancer Institute and the Jimmy Fund, I mentioned I found myself just becoming so fascinated by what companies could do. And I realized that I really wanted to experience it from a company's perspective. I wanted to get over to that side of the work. Around that time, I also decided that I wanted to go deeper into business. I was working with companies. Chris: I was asking them to support us, but I didn't really understand business in a deep way. And so, I ended up going back to graduate school at night to get my MBA while I was working at Dana-Farber. And I ended up making the switch over to New Balance and taking a job there that really was the opposite or the flip side of what I had been doing at the Jimmy Fund. Chris: So, instead of asking companies to support us and asking them to sponsor and have their employees participate in our events, and have an impact in that way, I was helping to guide New Balance's investment in different nonprofits in the community and thinking about how we showed up with our dollars, with our products, with our people to support those efforts. And so, the job was to manage what New Balance called their cause marketing work at the time. Chris: I sat in the marketing department at New Balance. I was measured in the same ways that other marketers were on driving awareness of New Balance's brand, consideration of our product and trying on footwear and apparel and things like that, and then ultimately sales of that product, which was great. And I loved it because I got a chance to really get into the marketing and science of that, which was fascinating, and do it at a brand and in a field of athletic footwear and apparel that I was personally passionate about as a runner and as an athlete. Chris: So, best of both worlds there. And it was just a great opportunity to take what I knew from the nonprofit side and bring that sensibility into the corporate environment, into how we showed up and worked with our nonprofit partners, whether it was Susan G. Komen for the Cure or Girls on the Run, which was our other major partner. And I just loved it. And I think that really crystallized, this is the career path for me. Chris: I can work with cool products and in areas that I really liked, but I can have an impact in that way. And it just opened my eyes to what was possible for companies. New Balance was such a special place because it was a privately held, family-owned company and had a tremendous number of people that had worked there for years. It really felt like a community of people, in a way that felt very similar to the Jimmy Fund and Holy Cross, actually, and that's what I loved about being there at the time.
Chris: And we got to do some really cool things, whether it was working on all the different Komen events. I had a chance to meet Joe Biden, President Biden, when he was vice president at the time, at an event for Komen and New Balance, which was amazing. We got to do great things, marketing our products, and attending different events, and meeting celebrities. I went on The Ellen Show to give away a million dollars for breast cancer research and got to have the big check out there and hand that to Ellen. Chris: So, amazing, unique experiences that I wouldn't have otherwise ever anticipated getting a chance to do as a result of that job. It's a really special company. And later, I got a chance to really go deep and work with Girls on the Run after my time at New Balance. After I left New Balance, I had a chance to join the board of Girls on the Run and serve on their board and chair their board for a few years. Chris: And to get to work with that amazing nonprofit that focuses on women's leadership development and girls' empowerment through a running curriculum, and really a social-emotional skill-building curriculum, was just an amazing experience to, again, work for another world-class nonprofit and get a chance to see it grow. So, another really fortunate opportunity for me. JP: Yeah, that's incredible. That seems like such an overall special, I guess, group of things that you got, meeting the president and going on The Ellen Show. That's awesome. So, I guess, it seems like it's hard to top those experiences. But has anything changed in terms of your most memorable milestone since then in your career? Chris: I think you start to look at what are the skills and experiences and, most importantly, the relationships you build over your career. And each of those are really cool memories and experiences. But I think what matters is the relationships that you start to have and build over time. So, when I think about those different jobs, it's more about the people that I got a chance to work with and get to learn from. Chris: And I think City Year, as my current job and organization now for the last eight years, that's what I start to think about and focus on: how have I gotten the chance to work with and learn from really great people, and continue to. I think, even at this kind of midway point in my career and later in my career, I feel like I'm still learning and growing on a daily basis, and getting better both at what I do tangibly, functionally in my work. Chris: But also as a manager, as a boss, as a co-worker, as a parent, I think you start to pick up those lessons. And I think for City Year in particular, it's by far the most powerful place that I've ever seen as far as helping people really build connection to one another and to help us really explore who we are and how we show up as our full selves at work on a daily basis. And how do we do that for other people, whether it's our co-workers or whether it's the students we work with in the schools we serve in. Chris: I think that's the amazing lesson and opportunity of City Year. So, I would say I hope I haven't hit the highlights of my career yet. I got a lot of work left to do. And I think we've got a lot more to accomplish and learn. So, I'm excited about that. JP: Definitely. The best is yet to come. All right. So, now, to shift over, I know earlier, you talked about the idea of cause marketing and how that plays into your career.
And I know that's been around for quite some time now and is becoming increasingly popular and being leveraged by businesses and nonprofits. So, for those who are listening who might not know a lot about it, could you speak a little about cause marketing and what that means to your career, past, present and future? Chris: Yes. It's interesting, you've seen a real change over the decades in how companies think about their responsibility and impact to society. And early on, it was very much about volunteerism and employees coming out doing different things. Or it might be about the company writing a check and the CEO handing it over to an organization. There wasn't really a business strategy. It was, "Hey, we recognize we're part of this community. We want to support our community and we find ways to do that." Chris: And then, what you started to see late into the '90s, early 2000s is companies started to realize this could actually have a deeper business impact. People want to support companies that are doing good things in their communities. And we can tell that story via our marketing, our public relations efforts, via sponsorships and other things, kind of classic marketing and sales approaches. And so, they started to integrate cause into that. Chris: And so, you start to see opportunities like buy this product, we'll donate XYZ. And then, you started to see buy one, give one like TOMS and other new models of cause marketing come in. But in the early days, it was still very much kind of a business strategy using cause to drive it. So, it was, "We know people care about this cause. And if we talk about being associated with it, it would get them to buy our product or get them to take this action." Chris: And what we've seen over the last decade plus is that's really evolving and going deeper. I think what we started to see, particularly when I was working at Cone Communications and advising clients, we started to say, "What's unique about your company and the work that you do, the industry that you're in, the expertise that you have? And how could you connect your philanthropy to an issue that is aligned with your business?" Chris: "So, if you're in the pharmaceutical industry or other areas, how do you align with health and determinants of health? If you're working in other areas, like cable and telephone and others, how do you think about connectivity and digital connectivity being something that you can provide and connect to?" And so, how do you align the strategy and the impact you can have with your business so that those two things are working in harmony and reinforcing one another? Chris: And so, I think there was an understanding that it can actually drive business. And it's not just a nice thing to do that's over on the side, it's an important strategy to drive business. And so, during my time at New Balance and Cone and later at Reebok, I think we were more in that era of saying, "How do we integrate it into the business? And how do we really see it as a unique business driving strategy?" Chris: Now, I think you're in an even different environment, both with young people like yourselves coming into work and into the environment and being aware of social issues in a way that is deeper and more common than I think it was maybe of my generation and earlier, really wanting to have a purpose at work, and looking at your companies and saying, "How are you helping me do that?"
And I only want to be here if I'm having a chance to put my passion and my values front and center in a way that was different than I think previous generations thought about work. Chris: And then, two, I think we're realizing, particularly over the last two years with the pandemic, with the murder of George Floyd, certainly the cracks in our system and how it is not equitable, how racism really shows up across all kinds of dimensions to prevent others from having opportunity that they should, and saying, "That's not okay." And people are saying, "We expect to both individually have an opportunity to affect that." Chris: "And we expect companies to be vocal and to step up and to show what their values are. And if you're not, then that's not going to be a company that I'm going to invest my time in personally as an employee. Or I'm not going to invest my dollars in as a customer." And I think you're seeing a whole new era of companies leading and being vocal in a lot of ways around social issues and taking a stand. Chris: And if they're not, people kind of questioning what's going on and why not. So, I think it's been really impressive and powerful to see. There's a lot that still needs to be done, right? There's a tremendous amount of inequity even within companies. And we see examples every day of bad behavior or other things that companies need to do better and need to do differently. Chris: But I will say, in working with many different Fortune 100 companies on a daily basis, the understanding of issues, the way they talk about social issues, the way they talk about their own diversity, equity, and inclusion and belonging efforts within the company is a huge sea change compared to what I saw even five, 10 years ago, which gives me a lot of hope for where we're going. I think we're realizing that capitalism is an amazing system of value creation. It's done tremendous things to grow and build our country. Chris: And the kind of American dream did a tremendous number of things, certainly for my family and many others, but that's no longer the case for everyone and it probably never was, to be honest. And so, how do we own that and how do we address that? And I think companies are wrestling with that in a more authentic way. And I hope they continue to do that. It's part of what I think my life's work is, is to try and help companies do that. JP: Yeah, definitely. I feel like that, in my opinion, that idea of cause marketing is something that's... I feel like that's got to be something that's just going to become, I guess, take over in terms of marketing. And just seeing it present today, I guess I've been seeing it firsthand with the new Worcester Red Sox at Polar Park in terms of sports marketing. Their whole thing is... I think the program is like In Debt to a Vet. JP: So, they're marketing that product of going to the game and all. And then, every strikeout at home, they donate X amount of money to veterans. And then, they also have just other organizations like fighting food insecurity and things like that. So, I feel like I've just been learning more and more about that. And I feel like that's got to be something like revolutionary in terms of marketing and business today. Chris: Yeah. And do you find yourself deciding who to buy from and who to work with as a result of that? Do you see it show up in the decisions you make? JP: Yeah. Definitely, I feel like these days, I see, even buying clothing and things like that, some... off the top of my head, I can't think of any.
And shoes too, especially I've been seeing. They advertise the materials they make their shoes out of and stuff like that. And X percent of the money they take in goes to this cause or that cause. So, yeah, I've definitely been seeing it become more and more present today. Chris: I think it's true. I think as a marketer, and I don't even like the term cause marketing anymore because it feels so transactional, and we're well beyond that. I mean, it is a strategy that is useful and valuable, and companies should still do it. But I think what you've seen is now that you interact with a company and their products and a brand all the time, whether it's in social media or online or in other places, it used to be such a tightly controlled thing. Chris: You kind of created a marketing message, you put it out there in a campaign. You spent weeks developing it and controlling the advertising message and putting it out there. That's just not how we market and how customers engage anymore. It's year-round, minute-to-minute brand building and engagement. It's a very different thing. And so, what you've seen is companies have to evolve to respond to that and say, "Okay, we need to be talking about not just cause marketing, but it's about what are our values." Chris: "And how do those show up in every action that we do, because it's not just the messaging that we put out from a marketing or an advertising standpoint. It's how somebody experienced us in the store, or an interaction they had with an employee, or something our CEO said, or some way they experienced our product." And it's 24-7-365. And so, I think you're seeing companies really say, "This is about our values, and being clear on what our values are." Chris: Because our most important stakeholders, our people, are saying that that's what matters to them and that's what they care about. And so, I think we just think about business differently. JP: Absolutely, yeah. And actually, even aside from just that marketing aspect, the whole idea of impact investing and companies just needing to evolve now based on ESG and sustainability and things like that, it's just becoming more and more just the norm. And I feel like more and more businesses have no choice but to evolve and match what other businesses are doing because that's such a pressing topic in today's time as well. Chris: A hundred percent. And you have to, to compete, to succeed. And all the data tells you that companies that invest and do deep things and are high performing when it comes to the environmental, social, and governance measures outperform other companies and succeed. So, it's not just a nice thing to do, an important thing to do for the planet, a good thing to do. It's an imperative. If you want to continue to build a business and have it thrive, you have to lean in to those areas. JP: Definitely. So, could you speak about the back and forth relationship you've seen between business and nonprofits throughout the span of your professional career? Chris: Absolutely. That's a great question. I think to our earlier conversation, early on, I think it was more transactional. It was kind of checkbook philanthropy. And we developed some relationships, and hopefully we get some money. And what we've seen, certainly in my time at City Year and why I was excited to come to City Year and work on it, is that changed. And companies were increasingly looking at a much deeper and holistic way to support issues.
Chris: And so, they wanted certainly the branding and the visibility, and being able to talk about themselves as being good citizens, and for nonprofits to help validate and help them have opportunities to do that. They wanted to have employees actively volunteering and spending time, whether that was doing different kinds of done-in-a-day volunteer projects or weeks of service, days of service, things like that. Chris: Or deeper ongoing skills-based volunteerism where I can share my expertise in marketing or somebody can share their expertise in web design or other things with the nonprofit and help that nonprofit build its capabilities or its skills. And really being able to set ambitious goals, which is what we're seeing a lot of companies do now, and to say, "This is what we care about from a social impact standpoint. Here's how we're going to try and have some impact. And here's some ways we're going to hold ourselves accountable and measure against it." Chris: And so, now, nonprofits are more partners in that process. And certainly, there's a dynamic of where the dollars come from. And we certainly are trying to raise money from companies and have contractual pieces of what we do. But in many ways, we're sitting at the table with our corporate partners, and they view us as experts in the space that help them, at least for City Year, understand education, understand urban education, understand racial issues and how those show up in the education space, and are looking for our help and our guidance on how they can have a deeper impact. Chris: And we often think collaboratively and advise and coach them on some of the things they're thinking about. And in many cases, they can offer tremendous support to help us do different things. We've been fortunate to work with Deloitte Consulting as an example at City Year for decades now, and have benefited from having pro bono case teams and others really come and think about how do we grow City Year as an organization. Chris: So, I would say it's much less of a transactional thing and much more of a collaborative partnership, which has been amazing to see. And I think that's the part where I've been fortunate to have worked on the nonprofit side, the corporate side, the agency side, and seeing that from all angles, I think, hopefully helps me be a better partner to our colleagues. But I think there's such a willingness to say, "These are huge social issues that cannot be solved by any individual nonprofit, any individual organization." Chris: And we have to come together and figure out how we work collectively on them to change them. So, I think the level of expertise sharing, information sharing, and collaboration is greater than it's ever been. So, I'm excited about that. JP: Cool, yeah. Thank you.
About Chris: Chris Harris is Vice President, Global Field Engineering at Couchbase, a provider of a leading modern database for enterprise applications that 30% of the Fortune 100 depend on. With almost 20 years of technical field and professional services experience at early-stage, open source and growth technology companies, Chris held leadership roles at Cloudera, Hortonworks, MongoDB and others before joining Couchbase. Links Referenced: couchbase.com: https://couchbase.com LinkedIn: https://www.linkedin.com/in/chris-harris-5451953/ Twitter: https://twitter.com/cj_harris5 Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built-in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing. Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the stranger parts of running this show is when I have a promoted guest episode like this one, where someone comes on, and great, "Oh, where do you work?" And the answer is a database company. Well, great, unless it's Route 53, it's clearly not the best database in the world, but let's talk about how you're making a strong showing for number two. It sounds like it's this whole ridiculous, negging nonsense or whatever the kids are calling it these days, but that's not how it's intended. Today's promoted guest is Chris Harris, who's the Vice President of Global Field Engineering at Couchbase. Chris, thank you for joining me and I really hope I got it right and that Couchbase is a database company or that makes no sense whatsoever. Chris: It's great to be on the show, and thank you for the invitation. I'm looking forward to it. Yeah, we're a database company. That's exactly what we do. Corey: I always find it interesting when companies start pivoting from a thing that they were and, "What do you do?" "We build databases." [unintelligible 00:01:29] getting out of that space it's, "What do you do?" "We're a finance company." And then there's a period of time in which they start reframing what they do. It's, "We're a data platform." Or, "We're now a tech company." Really? Because I don't get that sense in any meaningful perspective.
Couchbase was founded as a database company. You went public last year—congratulations on that—and now you continue to say, "Yes, we're a database company," rather than an everything trying to eat the world all at the same time, mostly ineffectively, company. So, what kind of database are you folks? Chris: So, if you look at the database world, you can see—I've been in the space for quite some time now, a good few years, and I've had the privilege, if you like, of being at other database companies, been in the analytics space, and I'm here at Couchbase. But if you look at the history over the last—let's just not go back all the way that far, but let's go back to, like, ten years ago, everybody was building their applications on traditional relational databases. And what you saw is the Oracles and MySQLs, the traditional databases of the world. And then… probably at the time, talking ten years ago, we realized that we had this demand for high throughput of data, the next generation of applications would be built, and then people realized the traditional database architectures weren't going to cut it, if you like. And it spawned this industry. You know, a big NoSQL market was created. And you have document databases, and then you have graph databases, and then you have analytics databases, and you have search databases, and then you have every sort of database you could possibly think of type database that's out there in the world. Corey: You have so many kinds you need to keep track of it all inside of the database. Chris: That's what you have to do, right? [laugh]. But the interesting thing is it became different types of database. And you even see this in many of the cloud providers today, right, that you have multiple different types of databases no matter what you're trying to do, right? So, we kind of went—Couchbase kind of took a step back and went, okay, we were originally a cache, right, this is where we came from, and then kind of built that into a document database, and then kind of went to the market and went hold on here, rather than it being let's call it a NoSQL versus SQL discussion, why can't it just be a database, right? Why can't you have a SQL-like interface on top of a modern architecture? Why can't you do that, right? Why can't you have the flexibility and architectural [unintelligible 00:04:16] of a JSON-based database with the interface of—with SQL, and then analytics built on top of that, right? So, why can't you have the power of SQL on the next generation architecture? So, that's kind of where we fit in the world. Corey: When we talk about origin stories and where things come from, well, let's start with you. I guess the impolite version of the question is, "Why on earth would you be in a space like this for so long?" But you've been at a lot of interesting places doing somewhat similar things. You were at Cloudera, you were at Hortonworks until you apparently heard a who or whatnot, you were at MongoDB, you were at VMware, you were at Red Hat. And that's going reverse chronologically, but it's clear that you're very focused on a particular expression of a particular problem. Why are you the way that you are? Only pretend that's a polite question. Chris: "Why am I the way that I am?" Well, first of all, I love technology, right? That's the key. And I think many of us in the industry would definitely say that, right?
I started off in core engineering, building—I know some people today wouldn't probably remember this, but when you had Chip and PIN with your credit card, where you have to put it in and type in a PIN number, that was created originally in the UK, and then I went off and built e-commerce websites for retailers. Well, that then turned into—a common theme that I kept seeing is that a lot of the technology that we were using was open-source technology. And that kind of got me into the open-source movement, if you like, and I was lucky enough to then join Red Hat when they built middleware frameworks, so got into that space there. And then did a lot of innovation in the middleware space. Went to SpringSource and we did some great work there in the Java Development Framework space. But what became interesting is that—you still see it today—like, in this innovation happening in that middleware space and there's some great innovation happening, right? There's all this stuff with Lambda and serverless architecture that's out there, but they always came back to, we've got the database, this thing that is in the architecture that, if it goes down, you're stuffed, right? This is where the core value of your company is sitting. So, then that got me interested to see what innovation was happening in this space. And as I say, I got into this field in the early stages of NoSQL, where there was that spawn of new database technologies being created. And then from there, it was like, "Okay, let's get into what was happening in the analytic space." Again, I'm still in the Hortonworks and Cloudera space, that's all open-source. But it came down to this: different types of databases required different types of skills. And then I started talking to the team here, who was like, "How can we take this great innovation and leverage the skills I already have?" And I thought that was an interesting point. Corey: In the interest of full disclosure, I tend to take the exact inverse approach to the way that you did. When I was going through the worlds of systems administrator, then rebadged as DevOps, or SRE, or systems engineer, production engineer, whatever we're calling ourselves this week, I was always focused primarily on stateless things like web servers, or whatnot because it turns out that—this should be no surprise to longtime listeners of this show—but I'm really bad with computers. And most other things, too; I just brute force my way through it. And that's hilarious when you keep taking down web servers you can push a button and recreate. When you do that to a database or anything that's stateful, it leaves a mark. And if you do it the wrong way, just well enough, you might not have a company anymore, so your DR plan starts to look a lot more like updating your resume. So, I always tried to shy away from things that played to my specific weaknesses that would, you know, follow me around like a stink. You, on the other hand, apparently sound—how to frame it—you know, good at things, and in a way that I never was. So you're—ah, you see a problem, you're running towards it trying to help fix it; I'm trying to figure out how do I keep myself away from making the problem worse; that's my first approach.
It seems like you have definitely been focused on not just data themselves—I mean at some level, [if it was a 00:08:55] pure data problem, it feels like we'd be talking a lot more about storage, but rather how to wind up organizing that data, how to wind up presenting that data, and the relationship that data has to other things that are going on. I'm not speaking in the sense of a traditional relational database, necessarily, but the idea of how that data empowers businesses and enables them to do different things. Is that directionally a fair synopsis of how you see it? Chris: I think the [unintelligible 00:09:21] thing is what I would agree with. What makes it really interesting to me is what we enable people to do with that data, and being able to build, kind of, really fascinating, innovative applications that are affecting their underlying businesses, right, from it could be health care, it could be airlines, financial services, some really interesting use cases that people are doing that are leveraging the database to be able to drive that level of innovation. Because it's very difficult; I can build some sophisticated application, but if I can't get the performance out of my database, I have a pretty poor experience for my users in today's world. Because, fortunately or unfortunately, people aren't very patient, right? If you have a website that doesn't return very quickly, a customer's gone, like, minutes ago. You've literally got to instantly respond to someone. That's a challenging problem. Corey: It absolutely is. Something that I found as I've talked to a bunch of different companies operating in different ways is the requirements on data stores are generally very different depending upon primarily latency and performance. There's only so long people are going to watch the spinning circle of doom on a website spin before they realize they're going to go somewhere that has its act together. Conversely, for a lot of business intelligence and analytics queries, there are an awful lot of stories where the thing that people care about is that we actually have to have the results of this query by noon on Thursday. And there are very different use cases for that, and some companies seem to be focused very much on, "We're going to solve both of those use case extremes and everything in between with the same product offering," and others tend to say, "Okay, this is the area of the market we're going to focus on." You could also say that this is an expression of the larger industry question of do I want, more or less, a one-size-fits-most database that's general-purpose, or do I want very specific purpose-built databases based upon the use case and the problem? Where do you find yourself on that spectrum? Chris: Where I find myself on that spectrum is that there's—if you want to describe it at a high level and we can break it down, there's operational-type databases, which is where I'd say Couchbase fits, where you're talking about, I've just built an application; I'm talking to the live user, right, this is what I care about, and when I'm talking about speed and performance here, I'm talking about something that returns within milliseconds of response time. I'm playing an online game, or I'm doing online betting on a sports game. That has to be pretty much instant, right? If we're playing multiplayer games and you're doing something, then I want to be able to see what you're doing straight away, right?
People don't expect it to lie there. If you're looking at streaming—people do this with Couchbase—streaming the Olympic Games or Super Bowl in the US, and you want to be able to be there, the whole profile management of that user has to be instant, and having that stream to you has to be instant. People use telephone calls and use Couchbase to do, behind the scenes, profile management, right, so they know who you are, who's making that call. That's an operational database problem. That's not a traditional analytical problem, right? So, there's a whole other space in the database world for analytics, right, which is bringing all the data together into one place, and that will help you do data science, AI, machine learning, be able to crunch and compute large volumes of data. If I get back to you in an hour rather than a week, that's great, but that's not operational. That's analytical. Corey: In data center environments, there's an argument to be made for going in a bunch of different directions; we're going to use a bunch of different data stores to store all these things. Because, generally speaking, the marginal cost of moving data from one of your data storage systems to another one of your data storage systems, one rack row over, is fairly small, whereas in cloud, effectively, there are no real capacity constraints anymore until you get the bill, but that's the entire problem where a lot of the transfer for these things is metered per gigabyte. So, there's an increased desire, and a lot of architectural pressure, to wind up making sure that where the data lives, it stays. And whatever it is that you do with that data, it should be able to operate on that data in a way that fits your performance characteristic requirements in the place that it currently is. And on the one hand, I can definitely see that driving a lot of decisions people have made. The counterargument is that it feels a little weird when the cost constraints come from how the cloud providers—mostly you, AWS—have decided to build these things out. And that, in turn, is shaping your entire approach to not just your architecture, but your systems design of how data winds up working its way through your lifecycle. It's frustrating, on some level, especially given that they themselves offer something like 15 distinct managed database offerings, with more announced all the time. It becomes very difficult not only to disambiguate between all of them but to afford moving data from one to the next. Chris: The affordability is an interesting discussion, right, because you can look at it from a billing perspective and go absolutely, there's a challenge associated to that. Then there's a question of where my data is because it's spread across all these different services; that's another challenge. And then you have the challenge of, okay, the cost associated to having developers build applications against all these different types of services because they all require different APIs and different ways of programming. So there's a cost associated everywhere. Corey: Oh, by far and away, the most expensive part of your AWS, or any cloud spend, is not the infrastructure itself; it's the payroll expense associated with the people working on it. People always cost more than infrastructure. If not, something very strange is going on. Chris: But then you look at it, and you go, okay, if that's the case, I kind of use the analogy, right, that it's like a car, where everyone is talking these days about the electric car [that's going 00:16:05] on that path, right?
Now, I should be able—if I was getting an electric car—think of it now, I actually have one—to get in the car and drive it like any other car. I know what a steering wheel is, I know where my pedals work, it looks and feels like a normal car. But architecturally it's fundamentally different in how it operates. So, why can't you apply that same thing, that same analogy, to a database, right? So, why can't I have the ability from an operational perspective, [unintelligible 00:16:42] talk about operational databases, not necessarily, I don't know, full-blown analytical databases, but operationally being able to say I can store the data in an enterprise database; I can use that to leverage my SQL skills like I have before, and also use it to have a document store, to do operational analytics, to eventing, to full-text search, key things that people want to do operationally, but keeping the data together in one database, like an iPhone. I want a database to have these capabilities; I don't want to have all these different types of devices that are everywhere. I want, you know, my iPhone to have the capabilities that I'm using. Or my car, to feel like I'm driving a car; it doesn't matter if the underlying architecture of the engine changed. That's great, I want the benefit, but I want to be able to drive it in the same way that I've driven any other car out there. And that's kind of trying to solve multiple problems at once, because you're trying to solve the issue of skills. Corey: It's one of the hard challenges out there, and I think your car analogy can even be extended a bit further because in the early days of the automobile, you were more or less taking some significant risk by driving a car if you weren't also mechanically inclined and able to fix it yourself. And in time, we've sort of seen that continue to evolve where they mostly work, and now they work really reliably. And then you take it even a step beyond that, and all right, now I'm just going to pay a car service so someone else has to deal with the car and a driver, and I don't have to deal with any of that aspect. And it feels like there are certain parallels, similar to that, toward the end of last year, 2021, when you folks more or less moved away from you can have it in any color you want, as long as you run it yourself—more or less—into offering a fully-managed database-as-a-service cloud option called Capella, which, on the ads for this show, I periodically sing because if you didn't want me to do that, you would not have named it Capella. Now, what was it that inspired you folks to say, "Hm, we could actually offer this as a managed service ourselves?" It's definitely a direction a lot of companies have gone in, but usually, they have to wait to be forced into it by—let's be serious for a second here—Amazon launching the Amazon Basics version of whatever it is themselves and, "Okay, well, they validated our market for us. Let's explore it." Chris: If you look at that, you go Couchbase has been around for a good few years now selling, as you point out, high-performance databases to large-scale enterprises, on real mission-critical, people call it tier-zero type applications, high-performance applications. And these are some of the most fascinating, most innovative types of applications that I've been involved with through my career.
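To make the SQL-on-JSON idea Chris describes above a bit more concrete, here is a minimal sketch in Python using the Couchbase SDK. It assumes a locally running cluster, a bucket named "demo" with a primary index already created, and 4.x-era import paths; module layout and option names vary between SDK versions, so treat those specifics as assumptions to check against the current documentation rather than a definitive example.

# Minimal sketch: one JSON document reached two ways, by key (cache-style
# key-value access) and through a SQL-like (SQL++/N1QL) query.
# Import paths follow the 4.x-era Python SDK layout; verify against the SDK docs.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Connect to a local cluster (a Capella deployment would use a TLS couchbases:// endpoint).
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
bucket = cluster.bucket("demo")           # assumed bucket name
collection = bucket.default_collection()

# Store a flexible JSON document under a key.
collection.upsert(
    "airline::40mile",
    {"type": "airline", "name": "40-Mile Air", "country": "United States"},
)

# Key-value read: a millisecond-class lookup by key, no query planning involved.
doc = collection.get("airline::40mile").content_as[dict]
print(doc["name"])

# The same data through a SQL-like interface, using familiar SELECT/WHERE syntax
# (assumes an index exists, e.g. CREATE PRIMARY INDEX ON demo).
for row in cluster.query('SELECT d.name, d.country FROM demo d WHERE d.type = "airline"'):
    print(row)

The point of the sketch is the one made in the conversation: the key-value path and the query path reach the same documents in the same database, rather than two separate systems holding two separate copies of the data.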
Now, how can we take that capability, provide it to the mass market if you like, to be able to give it to people that don't need to have a large number of people out there managing their own infrastructure, being able to understand how to finely tune that underlying infrastructure to get the level of performance that you need from high-performance databases. Now, there are use cases for doing that, so it's not one or the other. It's not that you have to go all-in. There are particular companies out there that, for the economics reasons, for the use case reasons, are running today on-premise, and there's a rational reason for why they do that, right? But for a lot of people out there, whether they're leveraging the cloud, there's an opportunity here to take the power of the database, allow us to then manage it for people, take away that complexity of it, but being able to give them the power so they can leverage their skills, take advantage of Couchbase far more easily than they ever have been able to in the past. It's opened up a bigger market for us, to summarize your question. Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free? This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free. That's snark.cloud/oci-free. Corey: One way that I tend to evaluate where a given vendor sees themselves—and it's sort of an odd thing to do, but given that I do fix AWS bills for a living, it probably makes sense—I wind up pulling up the website, I ignore the baseline stuff of the, "This is what Gartner says," and here's a giant series of scrolls. I just go for the hamburger menu and I look for, "All right, where's the pricing information?" Because pricing speaks a lot. And there are two things I generally try to find. One is, is there a free trial that I can basically click and get started working with? Because invariably, I'm trying to beat my head off of a problem at two in the morning, and if it's, "Oh, talk to a salesperson," well as a hobbyist, or as an engineer who does not have signing authority for things, but it's talk to sales, I realize, "Oh, yeah. One, I probably can't afford it. Two, it's going to be a week or so before I can actually make progress on this, and I'm hoping to get something up by sunrise, and it's probably not for me." Conversely, the enterprise tier should always have a, "Call for details," because that is a signal to large enterprise procurement departments and buyers and the rest where it's, "Oh, we will never accept default terms. We always want them customized. We also don't believe in signing any contract without at least two commas in it." Great.
So, being able to speak to both ends of the market is one of those critical things that you folks absolutely nail. What I like is the fact that if someone has a problem that they're experimenting with at two in the morning, they can get started with your database-as-a-service platform—Capella; or however you want to sing it—and they don't need to wind up talking to you folks directly, first. There's no long-term commitments, there's no [unintelligible 00:22:39] of the infrastructure themselves. There's no getting hounded for the rest of their days over making a purchase for something that didn't pan out. To me, that's always been the real innovation and breakthrough of cloud is that I can spend a few hours some evening kicking around an idea, and if it doesn't work, I can turn it off and spend 17 cents on the process, whereas if it does work, I can keep scaling up without at some point having to replace all of the Raspberry Pis and popsicle sticks I built things with, with real enterprise-grade stuff. There's a real accessibility and democratization that has entered into it. So, I'm always excited when I see companies that are embracing that model. Because, yeah, I'm a grumpy old sysadmin because it's not like there's a second kind of sysadmin, but—and I have a particular exposure and experience level with these things that I can't expect modern developers to have. They have an idea, they want to launch something, and they just need a database to throw things against and put data into, and ideally get it back again when they query later. And that empowers them to move forward. They're not in this because they really want to run virtual machines themselves and get those set up and secured and patched and hardened, and then install the software on top of it, and, "Why is it not working? Oh, security groups, how you vex me again. I'll just open you to the entire world," and so on. And we know where that path leads. So, it's nice to see that there is an accessible option there. Conversely, if you come at this with an approach of we are only available in our hosted cloud environment, well, now those big enterprise companies that have, you know, compliance concerns are going to have some thoughts for you, none of them particularly pleasant in some cases. So, I like the fact that you're able to expand your offering to encompass different user personas without also, I don't know, turning what has historically been a database into now it's an LDAP server, and trying to eat the world, piece by piece, component by component. Chris: It's interesting that you say that because I think there's a number of things that you're touching on there. To me, if you look at us as a company, particularly in this space, there's a lot of focus around the community and the open-source community. And I think there's an element of how do you make it accessible to people as a community as a whole? And then you kind of go down the path of, "Okay, let's allow people,"—as a developer, let's think of it this way, right, the ultimate thing they want to do, and you touched upon it there, is they want to build an application. They get passionate about building the application, maybe even over the weekend, and they've got this funky idea that they're going to literally knock some code out. And I remember my fond memories of being an application engineer, of being able to sit down for hours, just being able to put my ideas into code and watch it execute.
The last thing that I want to do is get to the point where I get the database and go, oh, here we go. This is going to take me a bunch of hours now, and I'm going to set it all up and do other stuff. And I almost literally want to be able to click a few buttons— Corey: You know what I want to do tonight? Feel really dumb as I tackle a problem I don't fully understand. Gr—I'd love smacking into walls that point out my own ignorance. It's discouraging as hell. I'm right there with you. Chris: Yeah, you don't want to do that, right? So, you almost want to make, like, the database disappear for people, right? You want to be able to just say, like, "Here's your command. Off you go. Bring the data back. Bring it back in full. Allow it to scale." Because you want that developer to have that experience of not breaking their flow. And you want them to be so excited about the application and innovation that they've built that they want to go and show their teammates. They want to say, "Look at the great thing I built over the weekend. Look at this, this is amazing." Right? And then be able to get all the teammates pretty excited about what they built in a way in which they can try it out really easily, right? They can take this little thing that they built into the database, click some buttons, and off we go, right? And now your development team is super excited about some of the great innovation that you have. But you also have to have the reverse. It has to be architecturally sound, so that when you get to the architect, if you like, who is looking at the bigger picture of what's the future going to look like? Is it the right technology? Is this something that we can bring into the organization? And you know, this is a cool bit of application you just built me, but you know, is it realistic that I can deploy this thing? And this is where you start going back into: it still has to have high performance, the security has to be there, the scalability has to be there, so that I can potentially—I can start small and grow this thing horizontally as I see the requirements coming. There's a different set of requirements architecturally, so we're looking at—you know, as a company, our key focus is how do you drive that developer community so that you give the people the freedom to build the next generation of applications in the simplest way [unintelligible 00:27:35], say with free trials, click some buttons, have the database up in minutes, but also then being able to have that capability in the underlying database to take it to the architect. That's what our core focus is every day. Corey: I agree with everything that you're saying. You're making an awful lot of great points, but for me, the proof in the pudding is the second thing that I tend to look at on your website after the pricing page, and that is your list of customers. Because it's always interesting when someone talks about how they're revolutionizing everything, and this is the way to go, and everyone who's anyone is doing these things. And then you look at their customer page and either they don't have one, which is telling, or the customers on that page are terrifying in that, "Wow, that sounds like a whole bunch of fly-by-night startups whose primary industry is scamming people." You have a bunch of household blue-chip names as well as a bunch of newer companies that are very clearly not what people think of as legacy—you know, that condescending engineering term that means it makes money.
It's across the board, it is broad-spectrum, and it is companies that absolutely know exactly what it is that they're doing when it comes to these things. That to me is far more convincing than almost anything else that can be said because it's—look, you can come on and talk to me about anything you want about your product, and I can dismiss it and, "Yeah, whatever. Great." But when I start talking to customers, as I did prior to recording this episode, and seeing how they talk about you folks, that to me is what reaffirms that, okay, this is actually something that has legs and is solving real customer problems. Because early stage, it's, "We have this idea for this company we're going to build and it's going to be great." "Awesome. Go talk to more customers." That is a default, safe piece of advice generically you can give to anyone. And it's easy to give and hard to take. I've been saying this for years, and I still screwed it up when we started trying to launch a SaaS product here called DuckTools. Yeah, it turns out that we didn't talk to enough customers first about what they're actually trying to achieve, and we assumed we knew the answers. It's an easy mistake to make. What I really appreciate about Couchbase in particular is not just the fact that you have all of these customer references, but the fact that each one talks about what the value to the business is, not just in terms of, "Oh yes, now we can query data and there was no way for us to do that before." Of course, people have found ways to do that since business started. Instead, it's much more about this is how it made it more efficient, more optimal, how it unlocked possibilities and capabilities for us. That alone tells me that there definitely is significant value that you're delivering to customers. In my own business, whenever I think I've seen it all, all I have to do is talk to one more customer and learn something new. What have you seen in recent memory, from a customer, that surprised you about how they're using Couchbase? Chris: You look at that, and you can see—I could probably talk for hours on different types of customers, but it's the ones that you can literally see in your life and you can relate to, right? So, if you take one of the biggest airlines that are out there today, they're completely changing, kind of, the whole experience, and our whole experience of, how do I get feedback? Because Couchbase's customers, [unintelligible 00:31:01] customer, right, is what they're thinking about, right? They're an airline. So, these passengers; fine. But how many times have you got on a plane, and you see all these people, literally, there's obviously the passengers, and then there's the cabin crew, and then there's the people on the ground, and then there's the pilot, and for the sake of the discussion, the staff that are there are literally passing paper back and forth to each other. And surely there's a better way to do this.
And for someone who likes to solve complex technical problems, you go, "Wow, this is going to be a bit of a challenge." Because if you want to collect feedback from an aeroplane in the air, [laugh] right, and you want to connect that to the ground data that people are having in terms of maintenance data, and you want to do that across the world, in multiple different time zones, that's a pretty tricky problem to try to go solve, right? So therefore, how do you get a database that is able to work remotely and on what people would call the edge; let's just call it, in this case, a device that a cabin crew member is literally carrying around with them, that's not connected because there is no connection, because I'm in the middle of the air. But I want to pair it with the other cabin crew members that are around, right, in flight, and then when I land, I want to sync that data back up to the maintenance people. So, you need a database that's able to operate on a device with no connection, and then be able to synchronize back up to a cloud database that is then collecting data from all the other flights around the world. Corey: Synchronization sounds super easy until you actually try and do it, and then, "Oh, wow." It's like, you get cut to pieces by the edge cases. Chris: And then people go, "Well, there's no problem. There's internet everywhere these days." Yeah, sure there is. [laugh]. You get disconnected all of the time. Corey: Not to name names. This is very evocative of an earlier episode of this show I had with Tyler Slove, who's a senior manager over at United Airlines, about specifically how they're approaching a lot of their own reimagining and the rest. It's a fascinating use case, and as someone who's a bit of a travel geek myself—you know, in the before times—that's always an area of intense interest because it's… I'm sorry, I'm still a little boy at heart; it's magic to me. You get on a plane, you go somewhere else, they close the doors, they open them up, and you're on the other side of the world. And now there's internet on it? Oh, my God, who would have imagined such a thing? Chris: Uh-huh. But that's changing the experience for people. It's just really fascinating. Corey: Completely. And it's empowering and unlocking that experience you're talking about of being able to sync between the crews, about handling all this stuff behind the scenes. Everyone loves to complain about airlines because no one really knows how to run the massive logistical part of an airline. But the WiFi was a little bit slow or the food was cold; well, that's something I know how to complain about on Twitter. Chris: [laugh]. Corey: It becomes this idea of almost a bikeshed problem expression, where it's, "Oh, yeah. I'm just going to complain about things I can wrap my head around." Yeah. Chris: I was talking to somebody recently, and they were—swapping topics a little bit—and they were like, so—they were talking about innovation on some new web application that they built. And I literally had to explain to them, and I said, "Well, if you think of it, the underlying whole technology stack that's behind this for high-scale e-commerce is sophisticated, right, because people will literally walk away from a page, an application, a mobile app, if they don't get an instant response time.
And that request has to literally travel, physically, quite a fair amount of distance, talk to multiple different types of technology, answer that question, then come back to you instantly." The sheer amount of technology that's involved here in moving that data around is a complicated architectural problem to fix. A database only plays a small part of that. You can't be the slowest player in the party. Corey: No. And that is always the challenge: when you're looking at different use cases, there's always a constraint, and how that constraint winds up manifesting in different ways. If it's not the thing that's slowing things down, it's also not where the attention goes. If you have a single thing, like the database for example, slowing things down, everyone cares about improving the database; people focusing on, "Well, we're going to improve the JavaScript load time on the website," that's not the problem. Find the bottleneck and focus on it. And although I'm generally a fan of picking a database and using that as a general-purpose thing until it makes sense not to—much like I am with cloud providers—[audio break 00:35:54] Corey: —journey personally, where's the best place to find you? Chris: Clearly, if you want to find out more about Couchbase, you can obviously go to couchbase.com. You kindly pointed out you can go and look at the trial for Capella and try out the tech. You're more than welcome to do that as a free trial. If you want to contact me particularly, you can find me on LinkedIn; I'm Chris Harris at Couchbase. You'll find me [unintelligible 00:36:26] with Chris Harris in general and probably find lots of them. In the UK, Chris Harris is a famous racing driver. That's not me; it's someone else. So, find me on LinkedIn, I'm sure it won't be that difficult to find who you're looking for. Or you can find me on Twitter. Corey: And we will, of course, put links to all of that in the [show notes 00:36:43]. I really want to thank you for being so generous with your time today. It's always appreciated to talk to people who actually know what they're doing. Chris: You're more than welcome. It's been great to be on the show. Thanks, Corey. Corey: Chris Harris, Vice President of Global Field Engineering at Couchbase. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment. I'm going to wind up using all of those angry comments, at one point, as a database. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
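The airline scenario Chris walks through earlier, with cabin-crew devices writing locally while disconnected and reconciling with a cloud database after landing, is essentially an offline-first replication pattern. The sketch below is purely conceptual: it is not the Couchbase Lite or Sync Gateway API, and the LocalStore and push_pending names are invented for illustration. It only shows the core idea of queuing writes in a device-local store and pushing them, with naive last-write-wins conflict handling, once a connection is available.

# Conceptual sketch of an offline-first write queue with last-write-wins sync.
# This is NOT the Couchbase Lite / Sync Gateway API; all names here are hypothetical.
import json
import sqlite3
import time


class LocalStore:
    """Device-local store that keeps accepting writes with no network connection."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS docs ("
            "key TEXT PRIMARY KEY, body TEXT, updated_at REAL, synced INTEGER DEFAULT 0)"
        )

    def upsert(self, key, body):
        # Every local write is timestamped and marked as pending sync.
        self.db.execute(
            "REPLACE INTO docs (key, body, updated_at, synced) VALUES (?, ?, ?, 0)",
            (key, json.dumps(body), time.time()),
        )
        self.db.commit()

    def pending(self):
        rows = self.db.execute(
            "SELECT key, body, updated_at FROM docs WHERE synced = 0"
        )
        return [(key, json.loads(body), ts) for key, body, ts in rows]

    def mark_synced(self, key):
        self.db.execute("UPDATE docs SET synced = 1 WHERE key = ?", (key,))
        self.db.commit()


def push_pending(local, remote):
    """Push unsynced local writes to the cloud copy; the newer timestamp wins."""
    for key, body, updated_at in local.pending():
        current = remote.get(key)
        if current is None or current["updated_at"] <= updated_at:
            remote[key] = {"body": body, "updated_at": updated_at}
        local.mark_synced(key)


# In flight: writes land locally even though there is no connection.
store = LocalStore()
store.upsert("report::flight-123", {"seat": "14C", "issue": "tray table latch"})

# After landing: a plain dict stands in for the central cloud database.
cloud = {}
push_pending(store, cloud)
print(cloud["report::flight-123"]["body"])

Real sync engines track document revisions and resolve conflicts far more carefully than a timestamp comparison, but the shape of the problem is the same one described in the episode: durability on the device first, reconciliation with the cloud later.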
Chris got a bike. Specifically, he bought a bike to use in a triathlon he signed up to participate in. Now he needs to name the bike, and speaking of naming things, a more technical topic that he talks about is the Crispy Brussels Snack Hour. Steph talks about Rescue Rails projects and increasing developer acceleration. They answer a listener question asking, "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec?" This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Translate frustrations into professional corporate (https://twitter.com/MeanestTA/status/1509936432625897474) Learn Hotwire by Building a Forum (https://twitter.com/afomera/status/1512287468078264322) parallel_tests (https://github.com/grosser/parallel_tests) parallelsplittest (https://github.com/grosser/parallel_split_test) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Oh, but I recently learned that Robert Downey Jr. in the Marvel movies he's snacking a lot, maybe not Iron Man, but something...oh no, he's stacking a lot. And I'd read that he was snacking a lot on set, and so they just built it in to where like, sure, you can snack as your character while you're doing stuff. CHRIS: [laughs] STEPH: And I think that's so cool because I find that I am eating every time I show up to record with you. So I would like the same special star treatment as Robert Downey Jr., [laughs] and I just get to eat during each Bike Shed. [laughs] CHRIS: All right. [chuckles] My understanding is also that he was wildly the highest paid of all the actors, so I think that should also come along with this. STEPH: [laughs] CHRIS: Yeah, there's a lot that we can sort of layer on here, but it makes sense to me, and I'm fully on board. STEPH: You're an excellent agent. Thank you for fighting for my higher pay. [laughter] CHRIS: You are welcome. STEPH: What a good co-host you are. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. One of these days, I'm going to say, "I'm Chris Toomey," and then I'm just going to see how you roll with it, although now I'm ruining it, I should have just gone for it. [laughs] CHRIS: Nothing can prepare me for this despite the fact that you're telling me in this moment. In that future moment when you do it, I will still be completely knocked out of whack. Just like for anyone out there listening, the thing that Steph would normally have said instead of what [laughs] she just said was, "What's new in your world?" STEPH: [laughs] CHRIS: And I contractually require that that is the only way she starts this question to me because I get completely lost. She's like, "How are you doing?" I just overthink it, and I get lost, and then we end up in a place like this where I'm just rambling. STEPH: Every podcast contract you have from here on out must begin with hey, Chris, what's new in your world? [laughs] I will still get to that question. I just also had to tell you my future joke. I'm going to play that. 
Hopefully, you'll forget, and one day I will resurface. CHRIS: I can pretty much promise you that I'm going to forget it. [laughter] STEPH: Excellent. Well, to make sure I stick within the Chris Toomey contract guidelines, hey, Chris, what's new in your world? CHRIS: What's new in my world? Now I just want to spend a lot of time putting together my rider. There can be no brown M&M's in the bowl. No eye contact, please. And I can only be addressed with this one question which is, to be clear, very not true, Steph. And I always record with a video because we actually like to have human faces attached to things. Anyway, I'm going to tighten this all up. When we get to the technical segment of my world, I'm going to tell you about Crispy Brussels Snack Hour, so just throwing that out there as an idea. But before we do that, I'm going to share a fun little thing which is I bought a bike, which is exciting. It's not that exciting. People have bikes. This is exciting for me. But the associated thing that is more exciting/a little terrifying is I'm going to try and run a triathlon. I'm going to try and run, swim, and bike a triathlon as they go, specifically a sprint triathlon for anyone out there that's listening and thinking, oh wow, that sounds like a thing. The sprint is the shortest of the distances, so that's what I'm going to go for. But yeah, that's a thing that I'm thinking about in my world now. STEPH: I know next to nothing about triathlons. So what is a sprint in terms of like, what is the shortest? What does that mean? CHRIS: I think there are actually maybe even shorter distances, but of the common ones, there's sprint, Olympic, I want to say half Ironman, and then Ironman; that's the sequence. And an Ironman, as far as I understand it, I think it's a full marathon. It's like a century bike ride or something like that. It's an astronomical amount of everything. Whereas the sprint triathlon being the shortest, I think it's a 3.6-mile run, so a little over a 5K run, a 10-mile bike ride, and a quarter-mile swim, I want to say, something like that. But they're each scaled down to the rough equivalent of a 5K but in each of the different events. So you swim, and then you bike, and then you run. And so I'm going to try that, or at least I'm going to try to try. It's in September, and now is not September. So I have a lot of time between now and then to do some swimming, which I haven't done...like, I've swum but not in a serious way, not in an intentional way. So I got to figure out if I still know how to swim, probably get better at biking, and do a little bit of running, and it's going to be great. It's going to be a lot of fun. I'm super excited about it. Only a little terrified. STEPH: I think this is where as your co-host coach, which you have not asked me to be, where I would say something about there is no try, to mimic Yoda. [laughs] CHRIS: Yep, yep. Yep. Do or do not. Sprint or sprint not. There is no trying. Oh, were you making a try pun there? STEPH: I didn't go that far, but you just brought it home. I see where you're going. [laughs] CHRIS: This is pretty much what I do professionally is I just take words, and I roll them around until I find something else to do with them. So glad that we got there together. STEPH: Well, I'm really excited to hear about this. I don't know anyone that's trained for a triathlon. I think that's true. Yeah, I don't think I know anyone that's trained for a triathlon. So I'm curious to hear about how that goes because that sounds intense, friend.
CHRIS: I think so. None of the individual segments sound that bad but stitching them all together, and I think the transitions are some of the tricky parts there. So yeah, it'll be fun. It's something I find...I used to never run; that was the thing. Like, deeply true in my head was that I'm not a runner. This is just a true fact about me. And then I ran a 5K one year for...it was like a holiday 5K fun run with friends. And every bit of the training leading up to it was awful. I did Couch to 5K. I hated it. My story in my head of I'm not a runner was proven with every single training run. Man, did I hate it. And then something magical happened on the day that I actually ran the race, and it was fun. And I was out there, and there was the energy of being in this group of people. But it was competitive and not competitive in this really interesting way. And then it ended, and we were just hanging out in a parking lot, and they gave us beer. And I was like, well, this is actually delightful. Maybe I actually like this thing. And so I've run a bunch of different races. And I've found that having a race to train for, and by train, I just mean some structured attempt at running, has been really enjoyable and useful for me. So yeah, this is just ratcheting that up a tiny bit. I've done a couple of half marathons is the high watermark so far. It's a good distance. But I don't know that a full marathon makes sense; that's a real commitment. And I'm looking to move laterally rather than just keep getting more complex in my running. So we're trying the shortest possible triathlon that I know of. STEPH: I am such a believer that exercise should be fun, so I love that. Like, I'm not a runner, but then you get around people, and it's exciting. And then there's that motivation, and then there's a fun ending with beers that totally jives with me. Because sure, I can go to the gym; I can lift weights, I can make myself exercise. There's some fun to it. But I strongly prefer anything that's more of like a sport or group exercise; that's just so much more fun. Well, super cool. Well, I'm excited. I would ask you all the details about your bike, but I know nothing. Do you want to share details about your bike? There may be other people that are interested. CHRIS: Oh yeah, my bike. I went to the bike store, and I said, "Could I have a bike, please?" And then they toured me around and showed me all the fancy...they were like, "This is our most modest entry-level bike." And then they kept walking around and showing me fancier bikes. And I was like, "Can we go back to that first one? That one seemed great." STEPH: [laughs] CHRIS: Because it got all of the checkboxes I was looking for, which is basically it's a bike. So actually, the specifics on it are it's a hybrid bike, so like a mix between road, and I don't even know the other road bikes I know of, and maybe it's trail. But I don't think it's meant for going on the trail. But for me, it'll be fine for what I'm trying to do as far as I understand it. It's technically a fitness hybrid, which I was like, oh, fancy. It's a fitness bike; look at me go. But it was basically just like, I would like a bike. General-purpose hybrid seems like the thing that makes sense. So I got a hybrid bike. And that's where I'm at. Oh, and I got a helmet because that seems like a smart move. STEPH: Nice. 
Yeah, the bike I own is also one of those hybrids where it's like…because when I moved to Boston...and lots of people have the road bikes, but their tires are just so skinny; it made me nervous. And so I saw one of the hybrid bikes, and I was like, that one. That looks a little more steady and secure, so I went with that one even though it's heavier. Do you have a name for your bike? Are you going to think of a name for your bike? CHRIS: I didn't, and I wasn't planning on it. But now that you've incepted me with this idea that I have to name my bike, of course, I have to name my bike. I'm going to need a couple of weeks to figure it out, though. We're going to have to get to know each other. And you know, something will become true in the universe for me to answer that question. But as of so far, no, I do not have a name for the bike. STEPH: Cool. I'll check back in. Yeah, it takes time to find that name. I feel you. CHRIS: [laughs] Yeah, don't make up a name. I have to find what's already true and then just say it out loud. Speaking of naming things and perhaps doing so in a frivolous way, as I mentioned earlier, the more technical topic that I want to talk about, oddly, is called Crispy Brussels Snack Hour. [laughs] So, within our dev team, we have started to collect together different things that don't quite belong on the product board, or at least they're a little more confusing. They're much more technical. In a lot of cases, they are...our form handling is a little rough. And it's the sort of thing that comes up a lot in pull requests where we'll say, "I feel like this could be improved." And we're like, "Yeah, but not in this pull request." And so then it's what do you do with that? Do you put a tech debt card in the product board? You and I have talked about tech debt cards plenty of times, and it's a murky topic. But we're trying within the team to make space and a way and a little bit of process around how do we think about these sorts of things? What are the pain points as a developer is working on the system? So to be clear, this isn't there is a bug because bugs we should just fix; that's my strong feeling, or we should prioritize them relative to the rest of the work. But this is a lower level. This is as a developer; I'm specifically feeling this sort of pain. And so we decided we should have a Trello board for it. And they were like, "Oh, what should we name the Trello board?" [laughs] And I decided in this moment I was like, "You know, if we're being honest, I've named everything very boring, very straight up the middle. We don't even have that many things to name. So we have zero frivolous names within our team. I think this is our opportunity. We should go with a frivolous name. Anybody have any ideas?" And someone had worked on a team previously where maybe it was a microservice or something like that was called crispy Brussels, like, crispy Brussels sprouts but just crispy Brussels. And so I was like, "Sure, something like that. That sounds great." And then they ended up naming it that which was funny, and fun, and playful in and of itself. But then we were like, "Oh, we should have a time to get together and discuss this." So we're now exploring how regularly we're going to do it. But we were like, let's have a meeting that is the dev team getting together to review that board. And we were like, "What do we call the meeting?" And so we went around a little bit, but we ended with the Crispy Brussels Snack Hour. STEPH: That's delightful. 
I love the idea of onboarding new people, and they just see on their calendar it's Crispy Brussels Snack Hour, come on down. [laughs] CHRIS: It's also got an emoji Brussels sprout and an emoji TV on either side of the words Crispy Brussels Snack Hour. So it's really just a fantastic little bit of frivolity in our calendars. STEPH: [laughs] That's delightful. How's that going? I don't think we've tried something like that explicitly in terms of, like you said, there are discussions we want to have, but they're not in the sprint. They're not tech debt cards that we want to create because, like you said, we've had conversations. So yeah, I'm curious how that's working for you. CHRIS: Well, so we've only had the one so far; it went quite well. We had a handful of different discussions. We were able to relatively prioritize this type of work within that. But one of the other things that we did was we had a conversation about this process, about this meeting, and the board, and whatnot. So we identified a couple of rules of the road or how we want to approach this that I think will hopefully be useful in trying to constrain this work because it's very easy to just, like...nothing's ever perfect. And so this could very easily be a dumping ground for half-formed ideas that sound good but aren't necessarily worth the continued effort, that sort of thing. So the agenda for the meeting as described right now is: async, between meetings, any of us can add new cards, ideally stated as problems and not solutions. So, "our form handling could use improvement." And then in the card, you can maybe make a suggestion of I think we could use this library or something like that. But rather than saying use this library or move to this library, we frame in terms of the problem, not necessarily the solution. And then, at the start of the meeting, any individual can champion a card so they can say, "Here's the thing that I really want everyone to know about that I've been feeling a lot of pain on." So it's a way for individuals who have added things to this to add a little bit more detail. Then, using Trello's voting functionality, we each get a couple of votes, and we get to sprinkle them across different cards, and that then allows us collectively to prioritize based on those votes. And so the things that get voted up to the top we talk about; we prioritize some amount of work coming into the sprint. If it's actually going to turn into work, then it'll go onto the product board because ideally, it's moved from problem space to more of solution space even if the solution, the work to be done, is do a spike on XYZ library or approach to form handling or whatever it is. But so ideally, it then moves on to the other board. The other thing that I felt was important is it's very easy for this to be a dumping ground for ideas. So my suggestion is at the end of the meeting, we sort by date, and we prune the oldest things. So it's like, if it's still hanging around and we haven't done it yet, and it's not getting voted up, then yes, we might feel some pain but not enough. It's not earning its place on this board. So that's my hope: we're weeding the Brussels sprouts garden that we have at the end of the meeting. That's roughly what we have now. We really only had the one, so that idea of pruning will probably come in later on. And it may be that this doesn't work out at all, and this ends up being tech debt cards that get stale and don't capture the truth.
But I'm hopeful because there's definitely...there's a conversation to be had here. It's just whether or not we can make sure that conversation is useful and capturing the right amount of context and at the right points in time and all of that. STEPH: Yeah, I like it. I like the whole process you outlined. You know what it made me think of? It sounds like a technical retro, not that retros can't be technical; we bring up technical stuff all the time. But this one sounds like there was more technical discussion that was still looking for space to bring up. So the way that you mentioned that people add their thoughts, that it can be done async, and then you vote up, and then as things get stale, you remove them and focus on the things that the team voted for, that's really cool. I've never thought of having just a technical-specific retro. CHRIS: Yeah, definitely informed by retro. But again, just that slight honing the specific focus of this is just the dev team chatting about deeply dev-y things and making a little bit of space for that. I think the difficulty will be does this encourage us to work on this stuff too much? And that's the counterbalance that we have to have because this work can be critically important. But it can also be a distraction from features that we got to ship or bugs that are in the platform or other things like that. So that balancing act is something that I'm keeping in mind, but thus far, the way we structured it, I'm hopeful. And I'm interested in exploring it more, so we'll see where we get to. And I'll certainly report back as we refine the Crispy Brussels Snack Hour over time. STEPH: I feel like the opposite is true as well, where you have these types of concerns and things that you want to bring up. And even if they're on the board, once you get to sprint planning, there's a lot of context and conversation there that maybe the whole team doesn't have. It doesn't feel like the right moment to dive into this because you're trying to plan a new sprint. So then that stuff gets bumped down to the bottom or just never really discussed, or it gets archived. So I feel like the opposite is totally true, too, where you have this stuff, but then it never gets talked about because sprint planning is not the right place. So yeah, I'm really intrigued to see how that balance works out for y'all as well. CHRIS: Yeah, I think it's an exciting time, and we'll see where it goes. But like I said, I'm hopeful on it. But yeah, bikes, triathlons, and crispy Brussels, that's my world. Mid-roll Ad: Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: I have a couple of fun things that I want to share and then something that's a little more in the techie space. 
The first one is there's a delightful Twitter thread that caught my attention recently that I just want to share; totally not tech-related. But this person shared a thread talking about how they help everyone on their team who's older than they are, making sure that the slang that they're using is correct in its context. And so they provided some funny examples. And then, in return, they also will translate this person's frustrations into professional corporate-speak, and it's such a good thread. So if you need a good laugh, I will make sure to include a link in the show notes. The slang is really funny, but it's actually the translation of frustrations into professional corporate-speak that that's the part that resonated with me. That was really good. [laughs] CHRIS: You shared this with me outside of this conversation, and I've read through them. Listeners out there, do not sleep on this. I highly suggest reading through this thread because it is fantastic. STEPH: The other thing that I saw is Andrea Fomera, who is a Rails developer and creates a fair amount of content...I haven't been through some of that content, but I know there's content around Rails. And specifically, there is a newer course called Learn Hotwire by Building a Forum. And she has made this totally free, and I just think that is so cool. And she shared that on Twitter, so I'll be sure to include a link in that to the show notes because Hotwire is something I haven't used yet. And so I saw this free course, and I think it would be fun to dabble and go through the course. And I know there are some other people at thoughtbot that have used it and seem really happy with it or interested in using it as well. Is that something that you've used? CHRIS: I have not. I skipped over Hotwire in my adventures. I'd found Inertia and was quite happy with that. And then, in that way that, I sometimes limit the amount of things that I'm allowed to explore on the internet in hopes of actually getting some work done; I have not spent much time. But enough folks that I deeply respect are very excited about Hotwire that it remains in the like; I would love to have an afternoon just to poke around with that. So I may take a look at this, although I don't know, I'm probably still in my moratorium. I'm not allowed to look at new frameworks for a little while time period. But I hear great things. STEPH: That's fair. That's also what I've heard. I've heard great things. So yeah, I just figured I would share that in case anybody else is interested in looking for a course that they could take and also dabble at Hotwire. The other thing that's on my mind is more the type of projects that I'm really getting a lot of joy from. Because I've known about myself for a while that greenfield projects are nifty, but they're not my thing. They're not the thing that brings me a lot of joy. It's just kind of nice. You got your own space, and you're building from the ground up, cool, cool, cool. But this one, I found that the projects that I'm really starting to gravitate towards are what I've heard someone else call Rails Rescue projects. So those are the projects where they have been around for a while, or they've just been built in a way that the data modeling structure makes it really hard to implement new features. Maybe there's a lack of test coverage that makes it really risky to ship new work or to make changes. There are lots of bug reports and errors that the team is fighting with. 
So then that type of work comes down to where you're trying to either increase stability for the application and for users and/or you're looking to increase developer acceleration. And I really, really liked those projects. That's the type of project that I've been a part of for...I think my last couple of clients have been in that way. I don't know that they would describe it that way, that it's a Rails Rescue project. But if I can see that opportunity where I see there's a stability issue or developers are feeling a lot of pain in one area, then that's the portion of the application, the portion of the team that I'm going to gravitate towards. Or like the current work that I'm doing where we're really focused on testing and making some improvements there or reducing that pain that the team is feeling around how long CI takes to run or the flakiness because then you're having to re-verify your CI runs. I like that work. It's a bit slow and frustrating, so it does seem to require a patient person. You also have to have lots of metrics that are guiding you because you can have a lot of assumptions around I'm going to make this improvement, but it's going to take effort to get there. And it'd be great if I can validate that effort upfront. So I feel like a lot of my time is spent more around metrics, and data, and excel sheets than necessarily coding. I don't know if that's great, but it's part of the work. There's a balance there. So I just found that interesting. I don't think I would have thought this is something I was interested in until now that I've been on these projects for a while. And I've started noticing a theme where I really enjoy them. Although I realize looking back at former Stephanie days when I was going through Launch Academy and learning to code, I really thought I wanted to be in DevOps. DevOps seemed like the cool kids' corner. They knew how the internet worked. They knew what was happening. They were making it live. And I just thought it seemed really cool. For the record, it is still a cool kids' corner. But I have also learned that the work-life balance isn't great with DevOps because you just never know when you're going to be on call. And that really stood out to me as something that I didn't want to do. And I do like building some features. But essentially, it's that developer acceleration that I really liked because they were the ones that were coming and often building tools and making it easier for then people to then ship their code and get it out into the world and triage. And so I liked the fact that their users were developers versus the people using the application as much, although, I guess, technically both. But the people they were often striving to help the most was the internal team, and that resonated with me. So I guess I have eventually found my way into that space. It wasn't through DevOps, but it is now through this idea of projects that need some rescuing. CHRIS: I love that you've spent enough time now to figure out what it is that draws you in the work and the shape of projects that is meaningful to you. Interestingly, I find myself not on the opposite side of things...you know, we're always looking for a disagreement, and this isn't a disagreement, but this is a thing on which we differ a surprising amount because I do like the early-stage stuff, the new, the breaking ground, all of that exciting whatnot. But how do I not make this a more complicated statement? I appreciate that you have the point of view that you do. 
I think the world needs more of what you're doing than the inclination that I have, like; I want to start something bright, and fresh, and new, and I can see so much progress immediately in front of me. And this is amazing. But the hard, meaningful work like maintenance, and support, and legacy, and rescue where necessary is such a critical aspect of the work. I see this in open source so often where there are people who are like; I made an open-source project; this is great. I hacked for a bunch of weekends, and look; I made a thing. And then the support burden builds up. And open source can be this wildly undervalued thing overall. And the maintenance of open source is even more so, and you have this asymmetry between the people that are using it and don't think that their voice is one of the thousands that are out there requesting a new feature or anything like that. The handful of people that I see out there in the world that come along later in the lifespan of an open-source project and just step in to do maintenance, my goodness, is that heroic work, just quiet, necessary heroic work. And what you're describing feels sort of similar but at the project level. And I don't know; I'm sort of like silent. I'm out loud on a podcast, not silently at all judging myself because I'm like, I feel like you're doing the thing over there. That seems like a good thing. But I also like my early projects... [laughs] STEPH: I think they're...I mean, we need each other. I need you to start the code, and the applications for them to then need some help down the road [laughs] to [crosstalk 24:30]. CHRIS: But I need to do a bad enough job that we have to be rescued by you. STEPH: [laughs] CHRIS: Hey, don't you worry, friend, I'm doing a terrible...no, I think I'm doing an okay [laughter] job. Hopefully, I'm avoiding those traps, but it's hard to know when you're writing legacy code, you know. STEPH: It is hard for the reasons we were talking about earlier. Like, those technical discussions build-up, and then if you don't really have a space to then address it, then it just keeps getting sidelined until you suddenly get to this point of it's either we come to a grinding halt because we can't ship work, or we find ways to start bringing this into our process. And so that's the other part of the Rails Rescue projects is often looking at the team's process and figuring out, okay, instead of hiring consultants to come in and then try to help with this, how else can they also integrate this into their own project? So then, once thoughtbot lives, they now have ownership of this, and they can carry it forward as well. There is an aspect of this work that I'm still working on, and it comes around to the definition of work because if you go into a team or a project that's like, hey, we really need help with X. We really need help with addressing all these errors. Or we really need help improving developer happiness or getting test coverage in place. Finding out exactly how you're going to tackle that, are you going to join a team of the other developers? Like, are you looking for more of a mentorship? Like, hey, we're going to work alongside your team to then mentor them to then bring this into their own process and their own habits, so then they feel empowered to address this in the future. Are we doing this more as a triage where then we have a specific goal or two that then we're going to meet? And then once we get stuff out of this on fire state, then maybe we start pairing with other people. 
Or are we going to work closely with the people who are fighting fires with the bug reports and the errors? There are a bunch of different ways that you can tackle that. And I think it really helps define the success of that engagement and then your outcomes because otherwise, I feel like you can get distracted by so much. Because there's so much that's going to try to get your attention that you want to work on and fix. So you have to be very upfront about there are different areas that we can work on. Let's figure out some metrics together that we're really going after to then help define what does success look like for this first iteration of our work? And then what's the long-term plan for this work? Then how do we keep it going forward? How do we empower the team to keep this work going forward? And that's an area that I've learned just from trial and error from being part of these projects. And I'm very interested in still cultivating that skill and figuring out what's the area that we're focused on? CHRIS: There's something that you said in there that I want to hone in on, which is the idea of you've learned from going on so many of these different projects, and you're carrying forward ideas that you have. But I think more generally, there's something interesting in what you were just saying there around you've worked on a bunch of different projects at different organizations with certain things that they were great at, with certain things that they struggled with at different sizes. And you're able to bring all that experience to bear on each project. But I think also taking a step back, as you were describing, you're like, I think I've figured out what it is that I like and the type of projects that I want to do. I cannot say enough good things about working in a consultancy for a while because, my goodness, you get to try out a bunch of different stuff. And A, you get to learn a ton about how to do the work, and how to communicate, and different technologies and all of that. But you also get to figure out what it is that you might want to double down on and lean into in terms of the work. That's definitely a big part of my story. Seven years at thoughtbot, I tried a lot of different stuff, worked at a lot of different companies. And I would describe it as I found a lot of things that I didn't want. And then there's that handful of things that I really did want, and I was able to then more intentionally pursue that. So for anyone out there that's considering it, working at a consultancy is fantastic, or at least it has fantastic elements to it. It also can be complicated as you talk about finding organizations and having to, you know, if you're brought in for a certain job, but when you get there, you're like, "Ooh, I know you want me to fix bugs, but actually, I think I just need to work with your team because they're the ones writing the bugs. And why are they writing the bugs" "Well, because the salespeople are selling things, and then we have timelines." Like, we got to start at the very top of this whole pyramid and fix it. And so it can be very complicated. But there's so much that you can learn about yourself in the process, in the work, and I adored that portion of my career. STEPH: Yeah, I totally agree. Anytime someone mentions, they're like, "Oh, consultancy work. What's that like?" And I remember it was a couple of years ago I mentioned I was working for a consultancy, and they were like, "Oh, you must travel a lot." I was like, "No, [laughter] I stay put. 
I just work from an office in Boston." But I remember that caught me off guard because I hadn't considered that I was supposed to travel, but that makes sense that you think of consultants that travel. But when I meet people or talk to people, and they're like, "Oh, you've been at thoughtbot for five-plus years, and how's that going? And what's it like to be at a consultancy?" And exactly what you just said, it's the variety that I really like and getting to try on so many different hats and see how different teams and processes work and then identify like, oh, that worked really well for that team, or this isn't working well for that team. I have really enjoyed that. And it can be a roller coaster because you have to get really good at onboarding. You have to go through that initial phase of like; I swear I'm smart. I will get up to speed quickly, and I will learn things. But it's a period that you just have to go through with each team that you join, but you do it twice a year, maybe three times a year. And so you get comfortable with that over time. So there are definitely some challenges that then have to fit your personality and things that work for you and bring you joy. And I completely understand that it's not for everybody, just kind of I really enjoy product work, but I also really enjoy being able to move around to different teams and help folks. CHRIS: I love the idea that as a consultant, your job is to just walk through airports and high-five every Accenture billboard in it and just go up to the wall and pay your respects. But no, no, that is not our version of consulting. [laughs] STEPH: That's why I have so much time for The Bike Shed. It's because I'm just, you know, I'm in different airports high-fiving signs. And then this is my real job; Bike Shed is my real job. CHRIS: Oh, that would be fun. STEPH: [laughs] You know, I have such a fondness of Bike Shed that now something interesting has happened where someone was like, "Oh, you're bike-shedding." And they're not being mean, but they're just like, "Oh, we're totally bike-shedding," or "This is dissolving into bike-shedding." And I'm like, oh, bike-shedding, hooray. And I'm like, oh, wait, bad. [laughter] And I have to catch myself each time. CHRIS: Yeah, we've taken away a lot of the meaning. Well, I mean, have we or do we live up to it every single week? Who can say? But I, too, have a fondness for this phrase, perhaps not aligned with what it is actually meant to signify. STEPH: On a slightly different tech-related note, there is a gem that I'm really excited to check out. I saw it mentioned on the parallel_tests gem, which is what helps you run your tests in parallel, and it's what we're currently using. But you can group your tests in different ways. And right now, we're using the runtime strategy where essentially then we use the output from RSpec where we know how long each file took to run. And then parallel_tests will then use that data to then figure out, okay, how should I split up your test file? So then try to balance them as evenly as possible. We're at that point, though, where we've talked about tentpoles, so we have certain files that, say, take 10 minutes; other files will only take two minutes. And that balance is really throwing off our ability to then bring down the CI build time. So on parallel_tests, there's reference to another gem called parallel_split_test, where then you can run multiple test scenarios that are in one file but then split them out across different processes or different machines.
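For anyone who wants to see roughly what that looks like, here is a minimal sketch of one way this setup might be wired up, based on the two gems' documented usage; the file paths, process count, and spec file name below are placeholders rather than anything from the actual project.

# spec/spec_helper.rb (sketch)
# Record how long each spec file takes so parallel_tests can group files
# by runtime rather than by file count. TEST_ENV_NUMBER is only set when
# the suite runs under parallel_tests.
if ENV["TEST_ENV_NUMBER"]
  require "parallel_tests/rspec/runtime_logger"

  RSpec.configure do |config|
    # Appends per-file runtimes to tmp/parallel_runtime_rspec.log.
    config.add_formatter ParallelTests::RSpec::RuntimeLogger,
                         "tmp/parallel_runtime_rspec.log"
  end
end

# On CI, a hypothetical invocation that balances the groups using that log:
#   bundle exec parallel_rspec spec/ -n 4 \
#     --group-by runtime \
#     --runtime-log tmp/parallel_runtime_rspec.log
#
# And for a single long-running "tentpole" file, parallel_split_test runs the
# examples inside that one file across multiple processes:
#   bundle exec parallel_split_test spec/features/slow_tentpole_spec.rb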
And that is exactly what I want in my life right now. I haven't checked it out yet, so I feel like I'm giving a daily sync update of like, I'm going to go off and explore this thing. I will report back and see how it goes. [laughs] In the past, I usually try to say, "I've tried this thing, and this is how it went," nope, opposite today. I am sharing the thing I'm going to try, and then hopefully, it goes well. CHRIS: Well, either way, we should definitely report back. That's the truth. I like that you're leading us into this and giving us a preview. But then yeah, we'll see where we get to. That does sound like the thing you want, though. So I hope it goes well. STEPH: Yeah, we've learned at this point where we are splitting work across different machines that until we address some of those tentpole concerns, adding more machines won't help us because then a machine's going to run as long as the longest file. So we've been doing some manual work to split up those files. That's not the best, but it does help you see some results. So then, at least you know you're making progress. So now we really need to find a way to automate that because we don't want someone to have to manually figure out where are the tentpoles, split those files up, commit that, and then keep track of, like, do we have another tentpole on the horizon? We really need a gem or something to help us automate that process. So yeah, I will be happy to report back. MIDROLL AD: And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free. STEPH: Pivoting just a bit, we have a listener question. This question comes from Steve Polito. And Steve wrote in, "Longtime listener, first-time thoughboter." Yay. Yay is my addition. Anything that goes up in voice is probably my addition, [laughs] just so people know. All right, back to what Steve said. "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec? Not only does Rails provide a fully functional test suite by default," looking at you Minitest, "but it's also well-documented and even provides the ability to run system tests. Rails is built on the principle of convention over configuration. And it seems odd to me that so many developers want to override such a fundamental piece of framework." Thanks in advance, [singing] Steve Polito. Steve, I hope it's okay I sang your name [laughs] because we're here now. That is an awesome question. I'm going to give what may be less of an awesome answer which is, well, one; Steve highlights that people will then replace Minitest with RSpec. 
I haven't done that. I haven't actually gone into a project and said, "Okay, we need to replace your test suite and bring in RSpec instead." But if I'm starting out a project, I do have a heavy preference for RSpec, and frankly, that's just from experience. Like, that's what I was raised on, to say it in that way. [laughs] RSpec is what I know; it's what I'm used to. It's what, even when I joined thoughtbot, was just the framework that we used for all of our testing and what we focused on so heavily. So frankly, for me, it's just a really strong bias. I know it's something that I'm really good at. I know it's something that works really well. I know it's well-documented. I know it's also very accessible for other people to use. But actually replacing it on a different project, I don't think I would do that. I'd have to have a really strong reason, or maybe if we haven't actually started testing anything yet, to then replace it because that feels a bit aggressive to me. But then it just depends on the situation, I suppose. But yeah, overall, I just default to RSpec because that's what I'm accustomed to, and it's the testing framework that I know. CHRIS: Yeah, I think my answer is largely the same. It's the thing that I've worked with by far the most. Similarly, I've been on projects that were using Minitest, and therefore I used Minitest because it's definitely not worth the effort to switch. But in a lot of...well, I will say this, I've much less experience, and this may be less true over time. But there were many things that drew me to RSpec, and that continues to be interesting to me in the RSpec world. Even things as small as the assertion syntax, assert_equal is the method that's, you know, this is how you do an assertion in Minitest, and it's assert_equal expected, actual. That's the order of the arguments. It's expected first and then actual. That makes sense, probably with the expected, but I would get that wrong constantly. I do get that sort of thing wrong. They're just positional arguments that there's nothing about this that tells me which way to go. And so it's very easy to get failure messages that are inverted, and so it's just this tiny little thing. But with RSpec, we end up with expect and then in parentheses, the thing that we are expecting to equal the other thing, and it just reads a little more honestly. It fits within the Ruby mindset in my world. I want my code to be as expressive as possible, and Minitest feels much lower level to me. It feels more, you know, assert as a word is just...I'm not asserting. That just feels so formal. And so these are, again, to be clear, very, very small things, but they all add up. And there's a reason that we're using Ruby overall. And there's a reason that we're using Rails is this expressiveness is a big part of it for me, so I'll cling to that. I'll hold on to that as something that's true. Also, RSpec's mocking support, rspec-mocks as the library, I found to be really fantastic, and I've grown very comfortable working with it. And I know how and where to use that. I also have so much built-up knowledge, like the idea of when to use let and not use let in RSpec. It's just this deep thing that I know about. I'm sure there's an equivalent in the Minitest world, but I would have to have a different understanding in argument, and that conversation would just feel different. I think the other thing that's worth saying is this is a default for us at this point that I personally have not felt the need to reconsider.
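As a concrete illustration of the argument-order point, here is a minimal, hypothetical example of the same assertion written both ways; the Price class, its values, and the file names are made up purely for this comparison.

# test/price_test.rb (Minitest)
require "minitest/autorun"

# A tiny made-up class, just to have something to assert against.
class Price
  def initialize(dollars:)
    @dollars = dollars
  end

  def total_in_cents
    @dollars * 100
  end
end

class PriceTest < Minitest::Test
  def test_total_in_cents
    # Positional arguments: expected comes first, then actual.
    # Swap them and the failure message reports the values backwards.
    assert_equal 1500, Price.new(dollars: 15).total_in_cents
  end
end

# spec/price_spec.rb (RSpec, assuming the same Price class is loaded)
require "rspec/autorun"

RSpec.describe Price do
  it "converts dollars to cents" do
    # The value under test is wrapped in expect(...), so the line reads as a
    # sentence: expect <actual> to eq <expected>.
    expect(Price.new(dollars: 15).total_in_cents).to eq(1500)
  end
end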
When I've worked on projects that have used Minitest, I certainly wasn't called to it. I wasn't like, oh, this seems really interesting; I'm going to lean into this more. I was like, I miss RSpec. And some of that is, again, just familiarity. But at the end of the day, we only have so much time to do things. And so, I firmly stand by my not reconsidering my testing option at this point. Like, RSpec does the things that I want. It does it really well. Critically, I'm able to build a system and write a test suite and maintain that test suite over time and have it tell me the truth as to whether or not my application should be deployed to production. That is the measure. That's the thing that I care about. I think it's maybe a little bit slower than Minitest, but I'm fine with that. I have solutions to that problem. And the thing that I care about is when the test suite is green, do I feel confident deploying? RSpec has helped me for years on that journey. And I've never questioned whether or not I should go back to the drawing board and revisit that consideration. So initially, it was probably because it was the thing that we were all using, and then that is for me why it has stuck around. And I love RSpec. I think how many episodes have we just said, "Thanks, RSpec," as a little aside? So we do love it in a deep way. STEPH: Probably not enough episodes have we said that. [laughs] Yeah, I like what you said where you haven't felt the need to switch over or to move away from RSpec. And I wonder, looking back at some of the earlier projects that I joined that were using RSpec, I don't know if maybe they chose RSpec at that time because RSpec had more of those features built-in, and Minitest was still working on those. Maybe they were parallel at the time; I'm not sure. But I like what you said about you just haven't had a need to go back and change. At this point, if I switched over to Minitest, it would definitely be a learning curve for me, which is totally fine. But yeah, I'm just happy with it, so I stick with it. And I also appreciate that idea that, yeah, unless you're new in a project, I wouldn't encourage someone to then switch over to something else unless I feel like there's just a lot of pain for some reason with the current testing setup. There has to be a reason. There has to be a drive. It can't be just a personal bias of like, I know this thing, so I want to use it. There's got to be a better reason that benefits the whole team versus just a personal preference. But overall, I think it comes down to for us; it's just a choice because it's the familiar choice. It's the one that we know. But I think Minitest and RSpec are both so widely supported. I was thinking about that convention over configuration. And yes, Rails ships with Minitest, but RSpec is so common that I don't feel like I'm breaking convention at that point. They're both so widely supported and used that I feel very comfortable going with either option. And then it's just my personal preference for RSpec. So thanks, Steve, for sending in that question. And for anyone else that has a question that you would love to share with Chris and I, you can reach us in a couple of different ways. You can reach us on Twitter via @_bikeshed. You can also go to the website, bikeshed.fm/content. We will drop some links in the show notes. But if you go there, then you can send a question or also email us directly at hosts@bikeshed.fm. 
And we're running a little low on listener questions, so we would love to have a listener question from you. And we would love to talk about anything that y'all want to talk about, okay, within reason, you know, triathlons, Brussel sprouts, things like that. All of that falls within the wheelhouse. CHRIS: Normal stuff. STEPH: Normal stuff, yeah. CHRIS: And to be clear, despite the fact that Steve did recently become a thoughtboter, you don't have to be a thoughtboter to send in a listener question. [laughs] In fact, it's much more common to not be a thoughtboter when sending in a listener question. But we'll take them from anybody. We're happy to chat with you. STEPH: On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Being pregnant is hard, but this tapas episode is good! Steph discovered and used a #yelling Slack channel and attended a remote magic show. Chris touches on TypeScript design decisions and edge cases. Then they answer a question captured from a client Slack channel regarding a debate about whether I18n should be used in tests and whether tests should break when localized text changes. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Emma Bostian (https://twitter.com/EmmaBostian) Ladybug Podcast (https://www.ladybug.dev/) Gerrit (https://www.gerritcodereview.com/) Gregg Tobo the Magician (https://astonishingproductions.com/) Sean Wang - swyx - better twitter search (https://twitter.com/swyx/status/1328086859356913664) Twemex (https://twemex.app/) GitHub Pull Request File Tree Beta (https://github.blog/changelog/2022-03-16-pull-request-file-tree-beta/) Sam Zimmerman - CEO of Sagewell Financial on Giant Robots (https://www.giantrobots.fm/414) TypeScript 4.1 feature (https://devblogs.microsoft.com/typescript/announcing-typescript-4-1/) The Bike Shed: 269: Things are Knowable (Gary Bernhardt) (https://www.bikeshed.fm/269) TSConfig Reference - Docs on every TSConfig option (https://www.typescriptlang.org/tsconfig#noUncheckedIndexedAccess) Rails I18n (https://guides.rubyonrails.org/i18n.html) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. There are a couple of new things in my world, so one of them that I wanted to talk about is the fact that being pregnant is hard. I feel like this is probably a known thing, but I feel like I don't hear it talked about as much as I'd really like, especially in sort of like a professional context. And so I just wanted to share for anyone else that may be listening, if you're also pregnant, this is hard. And I also really appreciate my team. Going through the first trimester is typically where you experience a lot of morning sickness and fatigue, and I had all of that. And so I was at the point that most of my days, I didn't even start till about noon and even some days, starting at noon was a struggle. And thankfully, the thoughtbot client that I'm working with most of the teams are on West Coast hours, so that worked out pretty well. But I even shared a post internally and was like, "Hey, I'm not doing great in the mornings. And so I really can't facilitate any morning meetings. I can't be part of some of the hiring intros that we do," because we like to have a team lead provide a welcoming and then closing for anyone that's coming for interview day. I couldn't do those, and those normally happen around 9:00 a.m. for Eastern Time. And everybody was super supportive of it. So I really appreciate all of thoughtbot and my managers and team being so great about this. Also, the client team they're wonderful. It turns out growing a little human; I'm learning how hard it is and working full time. 
It's an interesting challenge. Oh, and as part of that appreciation because…so there's just not a lot of women that I've worked with. This may be one of those symptoms of being in tech where one, I haven't worked with tons of women, and then two, working with a woman who is also pregnant and going through that as well. So it's been a little bit isolating in that experience. But there is someone that I follow on Twitter, @EmmaBostian. She's also one of the co-hosts for the Ladybug Podcast. And she has been just sharing some of her, like, I am two months sleep deprived. She's had her baby now, and she is sharing some of that journey. And I really appreciate people who just share that journey and what they're going through because then it helps normalize it for me in terms of what I'm feeling. I hope this helps normalize it for anybody else that might be listening too. CHRIS: I certainly can't speak to the specifics of being pregnant. But I do think it's wonderful for you to use this space that we have here to try and forward that along and say what your experience is like and share that with folks and hopefully make it a little bit better for everyone else out there. Also, you snuck in a sneaky pro-tip there, which is work on the East Coast and have a West Coast team. That just sounds like the obvious correct way to go about this. STEPH: That has worked out really well and been very helpful for me. I'm already not a great morning person; I've tried. I've really strived at times to be a morning person because I just have this idea in my head morning people get more stuff done. I don't think that's true, but I just have that idea. And I'm not the world's best morning person, so it has worked out for many reasons but yeah, especially in helping me get through that first trimester and also just supporting family and other things that are going on. Oh, I also learned a pro-tip about Twitter. This is going to seem totally random, but it was relevant when I was searching for stuff on Twitter [laughs] that was related to tech and pregnancy. But I learned...because I wanted to be able to search for something that someone I follow had said, but I couldn't remember who said it. And so I found that in the search bar, I can add filter:follows. So you can have your search term like if you're looking for cake or pregnancy, or sleep-deprived and then look for filter:follows, and then that will filter the search results to everybody that you follow. I imagine that that probably works for followers too, but I haven't tried it. CHRIS: I like the left turn you took us on there but still keeping it connected. On the topic of Twitter search, they apparently have a very powerful search, but it's also hidden, and you got to know the specific syntax and whatnot. But there is a wonderful project by Shawn Wang, AKA Swyx, on the internet, bettertwitter.netlify.com is the URL for it. I will share a link to his tweet introducing it. But it's a really wonderful tool that just provides a UI for all of these different filters and configurations. And both make discoverability that much better and then also make it easy to just compose one of these searches and use that. The other thing that I'll recommend is, I think it's a Chrome plugin. I'm guessing that's what I'm working with here, like a browser extension, but it's called Twemex, T-W-E-M-E-X. And there's a sidebar in Twitter now, which just seems wonderful and useful.
So as I'm looking at a Swyx post here, or a tweet as they're called on Twitter because I know that vernacular, there's a sidebar which is specific to Shawn Wang. And there's a search at the top so I can search within it. But it's just finding their most popular tweets and putting that on a sidebar. It's a very useful contextual addition to Twitter that I found just awesome. So that combination of things has made my Twitter experience much better. So yeah, we'll have show notes for both of those as well. STEPH: Nice. I did not know about those. This may cause someone to laugh at me because maybe it's easier than I think. But I can never remember that advanced search that Twitter does offer; I have to search it every time. I just go to Google, and I'm like, advanced Twitter search, and then it brings up a site for me, and then I use that as the one that Twitter does provide. But yeah, from the normal UI, I don't know how to get there. Maybe I haven't tried hard enough. Maybe it's hidden. CHRIS: It's like they're hiding it. STEPH: Yeah, one of those. [laughs] CHRIS: It's very costly. They have to like MapReduce the entire internet in order to make that search work. So they're like, well, what if we hide it because it's like 50 cents per query? And so maybe we shouldn't promote this too much. STEPH: [laughs] CHRIS: And let's just live in the moment, everybody. Let's just swim in the Twitter stream rather than look back at the history. I make guesses about the universe now. STEPH: [laughs] On a different note, I also discovered at thoughtbot in our variety of Slack channels that we have a yelling channel, and I had not used it before. I had not hung out there before. It's a delightful channel. It's a place that you just go, and you type in all caps. You can yell about anything that you would like to. And I specifically needed to yell about Gerrit, which is the replacement or the alternative that we're using for GitHub or GitLab, or Bitbucket, or any of those services. So we're using Gerrit, and I've been working to feel comfortable with the UI and then be able to review CRs and things like that. My vernacular is also changing because my team refers to them as change requests instead of pull requests. So I'm floating back and forth between CRs and PRs. And because I'm in Gerrit world, I missed some of the updates that GitHub made to their pull request review screen. And so then I happened to hop in GitHub one day, and I saw it, and I was like, what is this? So that was novel. But going back to yelling, I needed to yell about Gerrit because I have not found a way to collaborate with someone who has already pushed up changes. I have found ways that I can pull their changes which then took a little while. I found it in a sneaky little tab called download. I didn't expect it to be there. But then the actual snippet it's like, run this in your terminal, and this is then how you pull down the changes. And I'm like, okay, so I did that. But I can't push to their existing changes because then I get like, well, you're not the owner, so we're going block you, which is like, cool, cool, cool. Okay, I kind of get that because you don't want me messing up somebody else's content or something that they've done. But I really, really, really want to collaborate with this person, and we're trying to do something together, and you're blocking me. And so I had to go to the yelling channel, and I felt better. And I'm yelling again. 
[laughs] Maybe I don't feel that great because I'm getting angry again talking about it. CHRIS: You vented a little into the yelling channel; maybe not everything, though. STEPH: Yeah, I still have more to vent because it's made life hard. Every time I wanted to push up a change or pull down someone else's changes, there are now all these CRs that then I just have to go and abandon, which is then the terminology for then essentially closing it and ignoring it, so I'm constantly going through. And if I do want to pull in changes or collaborate, then there's a flow of either where I abandon mine, or I pull in their changes, but then I have to squash everything because if you push up multiple commits to Gerrit, it's going to split those commits into different CRs, don't like that. So there are a couple of things that have been pain points. And yeah, so plus-one for yelling channels, let people get it out. CHRIS: Okay, so definitely some feelings that you are working through here. I'm happy to work together as a team to get through some of them. One thing that I want to touch on is you very quickly hinted at GitHub has got a bunch of new things that are cool. I want to talk about those. But I want to touch [laughs] on an anecdote. You talked about pushing something up to someone else's branch. You're like, oh, you know, I made some changes locally, and I'm going to push them up. I had an interesting experience once where I was interacting with another developer. I had done some code review. They weren't quite understanding where I was. They had a lot of questions. And finally, I said, you know what? This will just be easier. Here, I pushed up a commit to your branch, so now you can see what I'm talking about. And I thought of this as a very innocuous act, but it was not interpreted that way. That individual interpreted it in a very aggressive sort of; it was not taken well. And I think part of that was related to I think of Git commits as just these little ephemeral things where you're like, throw it out, feel free. This is just the easiest way for me to communicate this change in the context of the work that you're doing. I thought I was doing a nice favor thing here. That was not how it went. We had a good conversation after I got to the heart of where we both were emotionally on this thing. It was interesting. The interaction of emotion and tech is always interesting. But as a result, I'm very, very careful with that now. I do think it's a great way as long as I've gotten buy-in from the person beforehand. But I will always spot check and be like, "Hey, just to confirm, I can just push up a commit to your branch, but are you okay with that? Is that fine with you?" So I've become very cautious with that. STEPH: Yeah, that feels like one of those painful moments where it highlights that the people that you work with that you are accustomed to having a certain level of trust or default trust with those individuals, and then working with someone else that they don't have that where the cup is half-full in terms of that trust, or that this person means well kind of feelings towards a colleague or towards someone that they're working with. So it totally makes sense that it's always good to check and just to be like, "Hey, I'd love to push up some changes to your branch. Is that cool?" And then once you've established that, then that just makes it easier. But I do remember that happening, and yeah, that was a bit painful and shocking because we didn't see that coming and then learned from it. 
CHRIS: I do think it's an important thing to learn, though, because for me, in that moment, this was this throwaway operation that I thought almost nothing of, but then another individual interpreted it in a very different way. And that can happen, that can happen across tons of different things. And I don't even want to live in the idealized world where it's just tech; we're just pushing around zeros and ones; there's no human to this. But no, I actually believe it's a deeply human thing that we're doing here. It's our job to teach the computers to be a little closer to us humans or something like that. And so it was a really pointed clarification of that for me where it was this thing that I didn't even think once about, no less twice, and yet someone else interpreted it in such a different way. So it was a useful learning situation for me. STEPH: Yeah, I totally agree. I think that's a really wise default to have to check in with people before assuming that they'll be comfortable with something that we're comfortable with. CHRIS: Indeed. But shifting back to what you mentioned of GitHub, a bunch of new stuff came in GitHub, and you were super excited about it. And then you went on to say other things about another system. [laughs] But let's talk about the great things in GitHub. What are the particular ones that have caught your eye? I've seen some, but I'm intrigued. Let's compare notes. STEPH: So this is one of those where I hadn't seen GitHub in quite a while, and then I hopped in, and I was like, this is different. But some of the things that did stand out to me right away is that on the left-hand side, I can see all of the files that have been changed, and so that's a really nice tree where I just then immediately know. Because that was one of the things that I often did going to a PR is that I would see what files are involved in this change because it was just a nice overview of what part of the applications am I walking through? Are there tests for this? Have they altered or added tests? And so I really like that about it. I'm sure there's other stuff. But that is the main thing that stood out to me. How about you? CHRIS: Yeah, that sidebar file tree is very, very nice, which I find surprising because I don't use a file tree in my editor. I only do fuzzy finding to jump to files. But I think there's something about whenever GitHub had the file list; these are all the files that are changed. I'm like, this is just noise. I can't look at this and get anything out of it. But the file tree is so much more...there's a shape to it that my brain can sort of pattern match on. And it's just a much more discoverable way to observe that information. So I've really loved that. That was a wonderful one. The other one that I was surprised by is GitHub semantic code analysis; stuff has gotten much, much better over time subtly. I didn't even notice this happening. But I was discussing something with someone today, and we were looking at it on GitHub, and I just happened to click on an identifier, and it popped up a little thing that says, "Oh, do you want to hop to the references or the definition of this?" I was like, that is what I want to do. And so I hopped to the definition, hopped to the definition of another thing, and was just jumping around in the code in a way that I didn't know was available. So that was really neat. 
But then also, I was in a pull request at one point, and someone was writing a spec, and they had introduced a helper just like stub something at the bottom of a given spec file. And it's like, I feel like we have this one already. And I just clicked on the identifier. I think it might have actually been a matcher in RSpec, so it was like, have alert. And I was like, oh, I feel like we have this one, a matcher specific to flash message alerts on the page. And I clicked on it, and GitHub provided me a nice little inline dialog that showed me all of the definitions of have alert, which I think we were up to like four of them at that point. So it had been copied and pasted across a couple of different files, which I think is totally fine and a great way to start, but they were very similar implementations. I was like, oh, looks like we actually already have this in a couple of places, maybe we clean it up and extract it to a common spec support thing, and ta-da, I was able to do all of that from the GitHub pull requests UI. And I was like, this is awesome. So kudos to the GitHub team for doing some nifty stuff. Also, can I get into the merge queue? Thank you. ... STEPH: [laughs] There it is. That is very cool. I didn't know I could do that from the pull request screen. I've seen it where if I'm browsing code that, then I can see a snippet of where everything's defined and then go there, but I hadn't seen that from the pull request. I did find the changelogs for GitHub that talk about the introduction of having the tree, so we'll be sure to include a link in the show notes for that too. But yes, thank you for letting me use our podcast as a yelling channel. It's been delightful. [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Well, speaking of podcasts, actually, there was an interesting thing that happened where the CEO of Sagewell Financial, the company of which I am the CTO of, Sam Zimmerman is his name, and he went on the Giant Robots Podcast with Chad a couple of weeks ago. So that is now available. We'll link to that in the show notes. I'll be honest; it was a very interesting experience for me. I listened to portions of it. If we're being honest, I searched for my name in the transcript, and it showed up, and I was like, okay, that's cool. And it was interesting to hear two different individuals that I've worked with either in the past or currently talking about it. 
But then also, for anyone that's been interested in what I'm building over at Sagewell Financial and wants to hear it from someone who can probably do a much better job of pitching and describing the problem space that we're working in, and all of the fun challenges that we have, and that we're hopefully living up to and building something very interesting, I think Sam does a really fantastic job of that. That's the reason I'm at the company, frankly. So yeah, if anyone wants to hear a little bit more about that, that is a very interesting episode. It was a little weird for me to listen to personally, but I think everybody else will probably have a normal experience listening to it because they're not the CTO of the company. So that's one thing. But moving on, I feel like today's going to be a grab bag episode or tapas episode, lots of small plates, as we were discussing as we were prepping for this episode. But to share one little thing that happened, I've been a little more removed from the code of late, something that we've talked about on and off in previous episodes. Thankfully, I have a wonderful team that's doing an absolutely fantastic job moving very rapidly through features and bug fixes and all those sorts of things. But also, I'm just not as involved even in code review at this point. And so I saw one that snuck through today that, I'm going to be honest, I had an emotional reaction to. I've talked myself down; we're fine now. But the team collectively made the decision to move from a line length of 80 characters to a line length of 120 characters, and I had some feelings. STEPH: Did you fire everybody? [laughs] CHRIS: No. I immediately said, doesn't really matter. This is the whole conversation around auto-formatting tools is like we're just taking the decision away. I personally am a fan of the smaller line length because I like to have multiple files open left to right. That is my reason for it, but that's my reason. A collective of the developers that are frankly working more in the code than I am at this point decided this was meaningful. It was a thing that we could automate. I think that we can, you know, it's not a thing that we have to manage. So I was like, cool. There we go. The one thing that I did follow up on I was like, okay; y'all snuck this one in, it's fine, I'm fine with it. I feel fine; everything's fine. But let's add that to the git-blame-ignore-revs file, which is a useful thing to know about. Because otherwise, we have a handful of different changes like this where we upgrade Prettier, and suddenly, the manner in which it formats the files changes, so we have to reformat everything at once. And this magical file that exists in Git to say, "Hey, ignore this revision because it is not relevant to the semantic history of the app," and so it also takes that decision out of the consideration like yeah, should we reformat or not? Because then it'll be noisy. That magical file takes that decision away, and so I love that. STEPH: I so love the idea because you took vacation recently twice. So I love the idea of there was a little coup and people are applauding, and they're like, while Chris is on vacation, we're going to merge this change [laughs] that changes the character line. And yeah, that brings me joy. Well, I'm glad you're working through it. Sounds like we're both working through some hard emotional stuff. [laughs] CHRIS: Life's tricky, is all I'm going to say. STEPH: I am curious, what prompted the 80 characters versus 120? 
This is one of those areas that's like, yeah, I have my default preference like you said. But I'm more intrigued just when people are interested in changing it and what goes with it. So do you remember one of the reasons that 120 just suited their preferences better? CHRIS: Frankly, again, I was not super involved in the discussion or what led them to it. STEPH: [laughs] CHRIS: My guess is 120 is used...I think 80 is a pretty common one. I think 120 is another of the common ones. So I think it's just a thing that exists out there in the mindshare. But also, my guess is they made the switch to 120 and then reformatted a few files that had like, ah, this is like 85 characters, and that's annoying. What does it look like if we bump it up? And so 120 provided a meaningful change of like, this is a thing that splits to four lines if we have an 80 character thing, or it's one line if it's 120 characters, which is a surprising thing to say, but that's actually the way it plays out in certain cases because the way Prettier will break lines isn't just put stuff on the next line always. It's got to break across multiple lines, actually. All right, now that we're back in the opinion space, I have a strong one. STEPH: This is The Bikeshed. We can live up to that name. [laughs] CHRIS: So I do want an additional configuration in Prettier Ruby. This is the thing I'll say. Maybe I can chase down Kevin Newton and see if he's open to this. But when Prettier does break method call with arguments going into it but no parens on that method call, and it breaks out to multiple lines, it does the dangling indent thing, which I do not like. I find it distasteful; I find it noisy, the shape of the code. I'm a big fan of the squint test. I know that from Sandi Metz, I believe, or maybe it's Avdi Grimm. I associate it with both of them in my mind. But it's just a way to look at the code and kind of squint, and you see the shape of it, and it tells you something. And when the lines break in that weird way, and you have these arbitrary dangling indents, the shape of the code is broken up. And I don't feel so strongly. I actually regularly stop myself from commenting on pull requests on this because it's very easy. All you need to do is add explicit parens, and then Prettier will wrap the line in what I believe is a much more aesthetically pleasing, concise, consistent, lots of other good adjectives here that are definitely just my preferences and not facts about the world. But so what I want is, Prettier, hey, if you're going to break this line across multiple lines, insert the parens. Parens are no longer optional for breaking across multiple lines; parens are only optional within a given line. So if we're not breaking across lines, I want that configuration because this is now one of those things where I could comment on this. And if they added the optional parens, then Prettier would reform it in a different way. And I want my auto formatter don't give me ways to do stuff. Like, constrain me more but also within the constraints of the preferences that I have, please, thank you. STEPH: I love all the varying levels there [laughter] of you want a thing, but you know it's also very personal to you and how you're walking that line and hopping back and forth on each side. I also love the idea. We have the idea of clean code. I really want something that's called distasteful code now [laughs] where you just give examples of distasteful code, yes. 
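For reference, the two pieces Chris describes here are Prettier's printWidth option and Git's ignore-revs support. A minimal sketch, assuming a project that already runs Prettier; the commit SHA is a placeholder, not a real revision:

```yaml
# .prettierrc (YAML form) -- printWidth is the knob behind the 80 vs. 120 decision
printWidth: 120
```

```sh
# .git-blame-ignore-revs is just a list of SHAs, one per line, for commits that only
# reformatted code, e.g.:
#   0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b   (placeholder: "reformat at printWidth 120")

# Point git blame at that file (Git 2.23+), either per invocation or via config:
git blame --ignore-revs-file .git-blame-ignore-revs app/models/user.rb
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

GitHub's blame view also appears to pick up a file with that exact name at the repository root, which keeps the web UI's blame useful after a mass reformat.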
Well, I wish you good luck in your journey [laughs] and how this goes and how you continue to battle. I also appreciate that you mentioned when you're reviewing code how you know it's something that you really want, but you will refrain from commenting on that. I just appreciate when people have that filter to recognize, like, is this valuable? Is it important? Or, like you said, how can we just make this more of the default so then we don't even have to talk about it? And then lean into whatever the default the team goes with. CHRIS: Well, thank you. I very much appreciate that because, frankly, it's been very difficult. STEPH: I do have something I want to yell about but in a very positive way or pranting as we determined or, you know, raving, the actual real term that wonderful listeners pointed out to us. CHRIS: Prant for life. That's my stance. STEPH: We had a magic show at thoughtbot. It was all remote, but the wonderful Gregg Tobo, the magician, performed a magic show for us where we all showed up on Zoom. And it was interactive, and it was delightful, and it was so much fun. And so if you need something fun for your team that you just want to bring folks together, highly recommend. I had no idea I was going to enjoy a magic show this much, but it was a lot of fun. So I'll be sure to include some links in the show notes in case that interests anyone. But yeah, magic. I'm doing jazz hands. People can't see it, but magic. I like how you referred earlier, saying that today is more of like a tapas episode. And I'm realizing that all of my tapas are related to being pregnant, yelling, and magic shows, and I'm okay with that. [laughs] But on that note, what else is on your tapas plate? CHRIS: Actually, a nice positive one that came into the world...I always like when we get those. So this is interesting because I was actually looking back at the history, and I had Gary Bernhardt on The Bike Shed back in Episode 269. We'll include a link in the show notes. But we talked a bunch about various things, including TypeScript. And I was lamenting what I saw as a pretty big edge case in TypeScript. So the goal of TypeScript is like, all right, JavaScript exists, this is true. What can we do on top of that? Let's not fundamentally change it, but let's build a type system on top of it and try and make it so that we can enforce correctness but understand that JavaScript is a highly dynamic language and that we don't want to overconstrain and that we've got to meet it where it is. And so one of the design decisions early on with TypeScript is if you have an array and you say like it's an array of integers, so you have typed that array to be this is an array of int, or it will be an array of number in JavaScript because JavaScript doesn't have integers; they only have numbers. Cool. [laughs] Setting aside other JavaScript variables here, you have an array of numbers. And so if you use element access to say, like, say the name of array is array of nums and then use brackets and you say zero, so get me the first element of that array. TypeScript will infer the type of that to be a number. Of course, it's a number, right? You got an array of numbers, you take a number out of it, of course, you're going to have a number, except you know what's also an array of numbers? An empty array. Well, of course. So there's no way for TypeScript because that's a runtime thing, whether or not the array is full of things or not. Or imagine you get the third element from the array. 
Well, JavaScript will either return you the third element, which indeed is a number, or undefined because there's no third element in this array. So that is an unfortunate but very understandable edge case that TypeScript was like, listen, this is how JavaScript works. So we're not going to…frankly, we don't think the people embracing TypeScript and bringing it into their world would accept this amount of noise because this is everywhere. Anytime you interact with an array, you are going to run into this, this sort of uncertainty of did I actually get the thing? And it's like, yeah, no, I know how many things are in the array that I'm working with. Spoiler, you maybe don't is the answer. And so, we ran into this edge case in our codebase. We were accessing an element, but TypeScript was telling us, "Yes, definitively, you have an object of that type because you just got it out of an array, which is an array of that type." But we did not; we had undefined. And so we had, you know blah is not a method on undefined or whatever that classic JavaScript runtime error is. And I was like, well, that's very sad. But now we get to the fun part of the story, TypeScript, as of version 4.1, which came out like the week that I recorded with Gary Bernhardt, which was interesting to look at the timeline here. TypeScript has added a new configuration. So a new strictness dial that you can configure in your tsconfig called noUncheckedIndexedAccess. So if you have an array and you are getting an element out of it by index, TypeScript will say, "Hey, you got to check if that's undefined," because to be clear, very much could be undefined. And I was so happy to find this. We turned it on in our codebase. It found the error in the place that we actually had an error and then found a few others that I think probably had errored at some times. But it was just one of those for me very nice things to be able to dial up the strictness and enforce correctness within our codebase, and so I was very happy about it. Other folks may say that seems like too much work. And, you know, I get that, I get that take. I'm definitely on the side of I'm willing to go through the effort to have enforced correctness, but you know, that's a choice. STEPH: Yeah, that's thoughtful. I like that, how you said you can dial up the strictness so then as you are introducing TypeScript, then people have that option. There is an argument there in the back of my head that's like, well, if you're introducing types, then you want to start more strict because then you're just creating problems for yourself down the road. But I also understand that that can make things very difficult to then introduce it to teams in existing codebases. So that seems like a really nice addition where then people can say, "Yeah, no, I really want the strictness. This is why I'm here," and then they can turn that on. CHRIS: So TypeScript in the configuration has strict mode, so you say strict true. And that is a moving target with each new version of TypeScript. But it's their sort of [inaudible 28:14] set of things that are part of strict, but apparently, this one's not in it. So now I'm like, wait, can I have a stricter? Can I have a strictest option? Can I have dial it to 11, please? [laughs] Really rough me up and make sure my code is correct. But it is the sort of thing like when we turn any of these on; it will find things in our codebase. Some of them, we have to appease the compiler even though we know the code to be correct. 
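To make that edge case concrete, here's roughly how the flag changes things. This is a sketch with an invented array; as noted above, strict mode alone does not include this check, which is why it has to be opted into separately in tsconfig.json:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true
  }
}
```

```ts
const nums: number[] = [];

const first = nums[0];
// Without noUncheckedIndexedAccess: `first` is typed as number, even though it is
// undefined at runtime here, so `first.toFixed(2)` compiles and then blows up.
// With the flag on: `first` is typed as number | undefined, so you have to narrow first:
if (first !== undefined) {
  first.toFixed(2); // safe: narrowed to number
}
```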
But the code is not provably correct as it sits in our file. So I am, again, happy to make that exchange. And I like that TypeScript as a project gives us configurability. But again, I am on team where's the strictest button? I would like to push that as hard as I can and live that life. STEPH: Yeah, I like that phrasing that you just said about provably correct. That's nice. CHRIS: That's the world I want to live in, everything you own in the box to the left, which is probably correct. STEPH: [laughs] That's how that song goes. CHRIS: Yeah. This is a reference to move errors to the left, which I think I've referenced before. But now that I'm just referencing Beyoncé and not the actual article, it's probably worth referencing the article, but the idea of, like, if a user hits an error, that's not great. So let's move it back to QA, that's a little further to the left in sort of the timeline. But what if we could move it to an automated test in CI? But what if we could move it into your editor? What if we could move it even further to the left? And so, a type system tends to be sort of very far ratcheted up to the left. It's as early as possible that you can catch these. So again, to reference Beyoncé, everything you own in a box to the left. STEPH: [singing] Everything you own in the box to the left. CHRIS: Thank you for doing the needful work there. STEPH: [laughs] Mid-roll Ad And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free that's studio3t.com/free. STEPH: I have a question for you that I'd really love to get your opinion on because I myself I'm waffling back and forth where someone brought up some really great points about a concern or just a question they had brought up around testing and i18n specifically. And I agree with the things that they're saying, but yet, there's also a part of me that doesn't, and so I'm Stephanie divided. And so, I'm trying to figure out where I stand on this. So let me dive in and give you some context; I'm going to share the statement/question that they had asked. So here we go. "One of my priorities has been I should be able to review a test without having to reference any other code. References to i18n means that I have to go over to YAML and make sure the right keys have the right values, and that seems error-prone. In some cases, a lack of a hit in the YAML defers to defaults. If the intent is to override the name of model attribute and error messages and it is coded incorrectly, the code fails silently without translating and uses the humanized attribute name, and that would go undetected. 
If libraries change structure, it might also fail silently as well, so to me, the only failsafe way is to be fully explicit in test." So this goes with the idea that if you're writing tests and then you're testing text, but it's on the screen or perhaps an email, that you're actually going to assert against that string that is shown to the user instead of referencing the i18n keys. And then that also backs up this person's idea that you really want to not have to jump around. If you're reading a test, everything you really need to know about that test should live very close by. And I really agree with that initial statement; I want everything that's very close to the test, especially if it's anywhere in that expectation line, I really want it close, so I can understand what's the expectation, what's under test, what are the inputs, what's the expected outcome. So I wholeheartedly support that idea. But yet, I am in the camp that I then will use YAML keys instead of providing that exact string because I do look at i18n as a helpful abstraction, and I want to trust that i18n is doing its job. And so that way, I don't have to provide that string that's there because then we're also choosing, okay, well, which language are we going to always use for our test? So this is the part where I feel divided. So I'm going to walk you through some of the reasons that I really support this idea and other reasons that I still use the i18n keys and then get your take on it. So there is a part of me that when I'm using the i18n YAML keys, it does make me sad because it reduces the readability in tests. Sometimes the keys are really well named where maybe it's a mailer.welcomemessage. And I'm like, okay, I understand the gist. I don't need to go see the actual string. I also think they highlighted a really good use case where if you're overriding behavior and it could default to something else, your test is still going to pass, and you don't actually know. So I could see the use case there where if you are overriding, then you want to be explicit about the string that you expect back. I also think there are some i18n messages that are fairly complex, and where then I really would like to see the string. So if you are formatting a date or a time or you're passing in just a lot of variables, then there's a chance that I do want to see how did that actually get generated for the person who's going to be reading it versus just maybe it's garbage text that came out? And I want to validate that the message that we think we're crafting is actually the one that the user is going to see. The case against actually being explicit, my biggest one is because then I do see i18n as a helpful abstraction. And I want to trust this abstraction that it's doing its job and it's doing it well. Because then if I do use explicit strings, it makes me sad if I change text from like hello to welcome, and now I have a failing test. I don't like that idea either. So I'm torn between these two worlds of it is very nice to have everything that you need in a test to be able to understand what is the expectation, but then I also lean into this abstraction and reference the i18n keys. So, Chris, with all of that, that was a bit of a whirlwind, [laughs] what are your thoughts? How do you test this stuff? CHRIS: Honestly, I'm surprised that you've got that much division in your own answer because for me, this is very obvious there's one...no, I'm kidding. This is obviously complicated. 
Similar to you, I think I'm going to have to give a grab bag of answers because I don't have a singular thought of like it is concretely this or that. I tend to go for explicit strings and tests all the way to...so like the readability of a test, and the conciseness of a test is interesting. I will often see developers extract. Say they're creating a user with a specific email, and then they log in with that email later, and then they expect something else. And so the email is referenced a few times, and they'll extract that into a variable called email. And I personally will tend to not do that. I will inline the literal string like user@example.com, and I'll do it in a few places. And I'm fine with that duplication because I like the readability of any given line that you're reading. So I will make that trade-off within tests. This is the thing I think we've talked about before, but the idea of DRY in tests is like I want to be careful applying that idea, Don't Repeat Yourself, to break apart the acronym. Those abstractions I will use them less than tests. And so I want the explicitness, I want the readability, I want to tell a little story, all of that feels true. That said, to flip it around, one of the things that I'm hearing...so I think I'm hearing a part of this that is around well, we can fail silently because we fail symmetrically in both the implementation and our test. Then an assertion may actually match even though it's matching on a fallback. I think that's a configurable thing. I would actually want my test to raise if I'm referencing an i18n key that is not defined. Now, granted, that's different for languages. And maybe this becomes a more complex story of like in production; in a different locale, it will fail because we don't have 100% parity across all our locale files. But fundamentally, I want to make sure that at least exists in our base, which I think typically would be en-US as the locale. I want to make sure all keys are looked up and found, and it's an error otherwise in our test. So that's a feeling. But am I misunderstanding that part of the story or how that configuration typically works? STEPH: No, I think you've got it. But just to make sure we're on the same page, so if you reference a key that doesn't exist, then it is going to fail. So at least you have your test failure is going to let you know that you've referenced something that doesn't exist. But if you are referencing, like if you want to override the defaults that Rails or i18n has provided for a model and say for an error message, if you reference that, but you want to override it, but then you've forgotten, that does exist. So you're not going to get the failure; you're going to get a different message. So it's probably not a terrible experience for the user. It's not going to crash. They're going to see something, but they're not going to see the custom message that you intended them to see. CHRIS: Gotcha. Okay, well, just to name it, the thing that I was describing, I don't know that that would be the configuration for every system. So I would strongly encourage any system where i18n just has a singular behavior which is we fall back to the key. I want my test to absolutely tell me if that's happening. And that should be a failure of the test. But to the discoverability documentation bit, I do wonder if tooling can actually help answer the question. 
And as I was describing the wonderful experience I had on GitHub the other day, viewing code as just static characters in a file is both true and also, I think increasingly, a limited view of it. We have editors, and we have code hosting tools that can understand semantically our code a little bit better. There's got to be like 20 Different VS Code plugins that, when you hover on an i18n reference, it will do the lookup for you. That feels like a thing that exists, and if it doesn't, well, now I've nerd-sniped myself, and I got a weekend project. JK, I'm definitely not building that this weekend. But that feels like can we use that to solve this? Maybe not. But that's just another thought of where we have these limitations where it's static, like those abstractions can be useful. But if we can very quickly dereference them, then the cost of the abstraction or that separation becomes smaller, and so the pain is reduced. And I wonder if that's a way to sort of offset it. STEPH: If I can poke at that a little bit more, because I think you're touching on something that I haven't expressed or thought through explicitly, but it's the idea of, like, why do I like the abstraction? What is it that's drawing me towards using these keys? And I think it's because most of the cases, I don't care. I don't care what the string is, and so that feels nice. Like, I understand that, yes, we're referencing something. If that key didn't exist, I'm going to see a failure. So I know that there's text there, and that's why I do lean into referencing the keys instead of the text because it feels good to not have to care about that stuff. And if we do make changes to the text, then it suddenly doesn't fail, and then I have to go update a test because we added a period or added a comma. I think that's the path of more sadness for me. And my goal is always a path of least sadness. So I think that's why I lean into it [laughs], I'm guessing. Is that why you lean into it as well? Or what do you like about referencing the keys over the explicit text? CHRIS: No, I think I share your inclination there, and the reason that you're in favor of it, and I think the consistency like if we're going to use i18n, then we should lean in because it's a non-trivial thing to do like porting to i18n projects, and they're tricky. Getting it right from the first step is also tricky. If you're going to do it, then let's lean in, and thus let's use that abstraction overall. But yeah, same ideas as you. STEPH: Cool. I think that helps validate where I'm at in terms of how I rationalize about this where ultimately, I do like leaning into that abstraction. And as you'd mentioned, some of those porting projects, I haven't been on one specifically, but I've seen that they are a lot of work. And so, if we have that in our system, then we want to continue to use it. It does reduce some of the readability. Like you said, maybe there's a VS Code plugin or some way that then we can help people be able to see if they want that full context in the test and not have to jump over to YAML. But yeah, otherwise, unless it's overriding default behavior or complex, then that's what I'm going to go with is with the keys. But I really appreciate this person's very thoughtful question and approach to testing because, normally or typically, I fully agree with I want full context in the test. And this one was one of those outliers that came up for me, and I had to really think through all the feelings and the reasons that I have for those feelings. 
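For anyone who wants the two styles side by side, here is a rough sketch. The key, the copy, and the page content are invented for illustration, and the Rails config names vary by version, so treat them as pointers to look up rather than exact settings:

```ruby
# Style 1: assert the literal copy the user sees
expect(page).to have_content("Welcome back, Jane!")

# Style 2: lean on the abstraction and assert via the key
expect(page).to have_content(I18n.t("dashboard.welcome", name: "Jane"))

# To get the failure Chris describes -- a missing key should blow up in test rather
# than silently fall back -- Rails exposes a setting along these lines in
# config/environments/test.rb (the name differs by Rails version):
#   config.i18n.raise_on_missing_translations = true          # newer Rails
#   config.action_view.raise_on_missing_translations = true   # older Rails
```

Worth noting that those settings hook into the Rails translate helpers; a bare I18n.t call can also be made to raise on a missing key by passing raise: true, if that's the behavior a given test wants.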
On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris is back from vacation and gives hiring and onboarding updates. Steph has an update about the CI slowdown and scaling CI. They tackle a listener question regarding having some fear around potential merge conflicts. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Deckset (https://www.deckset.com/) parallel_tests (https://github.com/grosser/parallel_tests) paralleltests - important line that may alter the `groupby` strategy (https://github.com/grosser/parallel_tests/blob/9bc92338e2668ca4c2df81ba79a38759fcee2300/lib/parallel_tests/cli.rb#L305) KnapsackPro (https://knapsackpro.com/) rspec-queue (https://github.com/conversation/rspec-queue) Vim Conflicted Overview (https://github.com/christoomey/vim-conflicted#overview) Mastering Git Course on Upcase (https://thoughtbot.com/upcase/mastering-git) Git Object Model (https://thoughtbot.com/upcase/videos/git-object-model) Git Object Model Operations (https://thoughtbot.com/upcase/videos/git-object-model-operations) The Opportunity Will Find You (https://thoughtbot.com/blog/the-opportunity-will-find-you) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Golden roads are golden. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Oh, I also have a new intro that I want to try out. This is thanks to Irmela from Twitter, where it's good morning and hooray; today is Bike Shed day. They technically said Tuesday, but we don't record on Tuesdays. So today is Bike Shed day, so happy Bike Shed day. And hey, Chris, what's new in your world? CHRIS: What is new in my world? Yeah, I loved when I saw that tweet come out. It really warmed my heart. So Tuesday, in theory, is Bike Shed day, but for you and I, Friday is Bike Shed day. It's confusing breaking the fourth wall, as I so often do. But yeah, what's new in my world? I'm back from vacation, which is the thing that I did. For listeners, well, I have been absent the previous week related to vacation and all those sorts of things. But I did what we're going to describe as a not smart thing. It wasn't intentional. The world just kind of conspired in this way. But I had two separate vacation islands that existed in my mind, and then they both kind of congealed, but as they did that, they moved towards each other, but they didn't connect. And so what I ended up with was two weeks back to back where I was out on Thursday and Friday of one week, and then I was back for Monday and Tuesday. And then I was out for Wednesday, Thursday, Friday of the following week. Protip: that's a terrible idea. It's just not enough time to sort of catch up. The whole of it was like the ramp-up to vacation and then the noise of vacation, then getting back and being like, oh, there are so many emails. Let me try and catch up on them. But also, on the very positive side, we had a new hire join the team, and so most of my focus on the days that I was in the office was around getting that new person comfortable on the team, onboarding, spending as much time as possible with them. 
And so, all total, it was an adventure. And again, I would strongly recommend against this. The world just kind of conspired, and suddenly these three different forces in my life came together. And this was just the shape of things. But yeah, I went on vacation, and it was great. The vacation part was great. STEPH: I will take your advice. So next time I have like two segments of PTO, I'm just going to stitch them together and just go ahead and take that whole intermittent time off. CHRIS: That probably would have been better. Again, someone new joining the team, it was very important to me to get some time with them early on, and so I opted not to do that. But yeah, the attempt to catch up in between was a completely lost effort, I would say. But I think I'm mostly caught up now, having been back in the office for about a week, so yeah. But let's see, what else has been up in my world? It's actually been a while since you and I have chatted based on the various timing and schedules and the nonsense vacation schedule that I had that you so kindly accommodated across a couple of weeks. Let's see, hiring and onboarding; the hiring went really well. We talked about that a bunch of weeks back. But now we're in the onboarding phase. And so next week will be the first week that all four of us on the engineering team are in the office together for the full week. I'm super excited to experience that. We've had different portions of it, with me being on vacation and other folks being on vacation. But now, for the first time, we're really going to feel what it's like as this team. And we're going to have our first retro as a group and all those sorts of things, so I'm very excited to do that. And thus far, all of the interactions that we've had have been really wonderful as a team. And so now it'll be the first time we're just bringing all of those various pieces together. STEPH: I just have to clarify; you said all of y'all in the office together. Do you still mean remotely? CHRIS: Oh, yes, yes, I just mean not on vacation, all present and accounted for on the internet. Remote is another interesting facet of what we're doing here and trying to figure out how to navigate that, particularly where there are some folks that are closer and can potentially get together in the city, that sort of thing, and then folks that are truly remote and making sure that we're...I'm very much of the opinion if we have anyone that's remote, we are remote team, and we must embrace async communication and really lean into that. And I think the benefits of async communication as its own consideration are so worth it. And it's one of those things that's hard to do. It requires careful, intentional thought. It requires more purposeful communication. But I think there are a lot of good things that fall out of that. It's similar to TDD in that way in my mind, like, it's not easy. It's actually quite difficult. But all the effort that I put into trying to learn how to do that has made me a better developer, I think, on all the various fronts. And I think similarly, async communication I believe in as a tool to force just better communication. And so I'm a big believer in it, and I've found a ton of benefit in remote that I'm also a big believer in that now. I, like everyone else, was forced into it as the world was, but I've really come to enjoy it a lot. And so yeah, so, no, not physically in the office, to answer your very short question with a long rambling aside. STEPH: [laughs] I like that comparison. 
I hadn't thought about it in that way but comparing that thoughtfulness and helpfulness of async communication and then also to TDD, where it's not easy, but the payoff is so worth it, the upfront cost of it. That is something that at thoughtbot, we've had conversations around where there are folks that really value...they want to be around people. They get energy from people, and so they want that option to be able to rent a WeWork space and maybe get together with a colleague once or twice a week, and that was supported by thoughtbot. But we also wanted to express well, if you are together, do treat everything still as a remote work environment. So let's say if you and your colleagues are on a project, but then there's a third person on that project that's remote, you still need to act like everything's remote to make sure that everyone else is still getting to participate and hear everything and be part of the conversation. So just keeping that in mind that yes, we want to support you doing your best work, and if that's around people, that's wonderful. But we are still remote-first, and communication needs to be in that fashion. Well, that's super exciting that you'll have all of the team together. That sounds like it will be wonderful to hear about and then also retros and meetings, and yeah, it sounds like you've got a fun week ahead. CHRIS: Indeed. I'm super excited to see what sort of new things come out of the new voices on the team and practices that each of the individuals have experienced at other companies that we can now fold together. The work that we've done so far has been very much inspired by thoughtbot ideas, and approaches, and workflows, and processes because that's what I brought to the table. But I'm super excited to bring in more voices and see what of that 100% stays on versus does anything change? Do we get entirely new things? So yeah, very excited about all of that. But to revisit a topic that we've talked about in the past, this week is catching up from vacation, so there's a certain amount that will constrain my work. But this was definitely another week of I did not do much coding. I'm trying to think if I did any coding this week. It's possible that the answer is no. The fact that I don't even know the answer to that is an interesting one. I still have in my mind the desire to get back to it, and I think I will. But there's so much other stuff to do. Recently, this week, there's been a lot of vendor selection and contract negotiation, which is an interesting facet of the work, but just trying to figure out, oh, we need platforms to do X, Y, and Z. And it turns out they're wildly costly and have long sales cycles. And how do you go through that, and how do you make sure that we're getting the right thing? And so that's been a big part of my work. Hiring and onboarding, again, has been a big part of it. There's also some amount of communicating back to the broader team - what are we doing? What is the product organization or the engineering team delivering? And so I'm okay at presentations, I think. I'm comfortable with giving presentations. The thing that I struggle with is finding the optimization point in preparation. I will, of my own accord, over-prepare. And that may sound a little bit like, oh, what's my greatest weakness? That I care too much. But I mean it sincerely as like, I would love to find that right amount of like, it's like an hour of preparation for a 15-minute presentation to the team. That's the right ratio. 
And I just hit that on the head, and it's great. But whenever I know that I need to give a larger presentation, it will distract me. And it's work that can expand to fill whatever time you give it, and so trying to thread that needle is a tricky one for me. STEPH: Yeah, I'm with you. Presentations, for me, they're one of those things that it's very stressful, anxiety-inducing; all the prep feels distracting from some of the other work that I want to do. Or maybe I'm excited about the presentation, and that is the work that I want to do. But it's not until it's done that then I'm like, oh, that was fun. That went well. This was great. It's not until after that then I feel good about it. So the lead-up to it is very stressful. And so if you can optimize that to say, well, I know exactly what this group needs, where I can cut corners, where I have to go into details, that sounds incredibly valuable. I'm curious, so this is probably a bad idea, but it's the only way I really know how to find those boundaries is you got to experiment and tweak a little bit and let yourself fail a little bit or just be very explicit with folks about this is what the presentation is, if you expected something else, let me know. Or here's what I've got, have someone to bounce ideas off of. But there's such a nicety if you can find that I'm going to try failing just a little bit and get some feedback. Or maybe it's not failing at all, but you are testing that boundary to find out did this work, or should I put more effort into this? I'm curious, do you have thoughts on that? How you're going to find that right optimization level? CHRIS: Not as specific to truly honing in on whatever the correct number is. The thing that I've been doing is I...this will sound complicated, but I wait until the last minute but a specific version of the last minute. So at most, I start working on it an hour and a half before the meeting. And these are, again, not particularly large presentations, and it's a recurring sort of thing. So it's sort of engineering talking about the work that we've done recently and trying to find the right level of detail and whatnot, so giving myself a smaller time window. I think that's enough time to tell the story and to find a meaningful way to tell the story and grab the screenshots and all of that, but it's constrained so that I don't over-optimize, over-edit, overthink. I'm using Deckset, which is a presentation tool that starts from a Markdown file. So it's just a Markdown file that I'm editing. That's great; that works really well. I do not twiddle with fonts. There's one theme that I use. It is white background with black text. That's it. And I think I've given myself deep permission to be the CTO that has a white background with black text and no transitions. I don't even go into presentation mode for it. I'm literally showing the UI of Deckset, and then just hitting the arrow to move between them. But the Chrome and the drop-down menu at the top is still visible because I want to see people's faces as I'm presenting. And I haven't figured out how to do that correctly on my computer. So I'm just presenting the window of Deckset. And I'm like, I have given myself permission to do all of those things, and that has been super helpful, actually. So that's a version of me negotiating what this means. 
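For the curious, a Deckset deck really is just a Markdown file with slides separated by horizontal rules; something in this spirit (content invented) is the entire artifact Chris is describing:

```markdown
# Engineering update

What shipped, what's next

---

## The invisible work

- Platform groundwork users never see
- Why it matters, told from the user's point of view

---

## Questions?
```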
Where I do invest the effort is trying to enumerate all of the things and then understand what is the story that I'm telling around the things and how do I get the message right for the collective audience? So, for a developer team, I would say much more nuanced technical things, for marketing folks, it would be at this end of the spectrum. I do lean on the old idea of, like, let us talk about it in the mindset of the user, so it's very much user-centric, but then some of the things that we're doing are important but invisible to the users. They're part of how we broadly build the platform that we need to, but they're completely invisible to users. And so, how do I then tell that story still with ideally a user-centric point of view? So that's where I do invest the time, and I give myself complete freedom to just grab screenshots, put black text on a white background, and then talk over it. STEPH: I love it. Because you made this comparison earlier, so now I'm thinking of a comparison of like TDD-driven presentations where it's like, what's the end goal? What's the assertion? What's the outcome that I want? And then backfilling from there. Or, in your case, you're talking about what's the story that I need to tell? What's the takeaway that I want people to have? So then you start there, and then you figure out what's the supplemental information that you need to provide to then get there. And the fact that you don't twiddle with fonts and all that stuff, I think you're already really on your way [chuckles] in terms of finding that right optimization of I need to present a clear and helpful message but not sink too much time into this CHRIS: Black text on a white background is very clear. So... STEPH: [laughs] If there are any designers listening to this, they might just be cringing to this conversation right now. [laughs] CHRIS: I actually wonder about what the...I know that dark mode is a thing that lots of folks care about. I'm thinking about the accessibility affordance of it now. I'm actually thinking through it now that I said it somewhat flippantly. I actually don't know what I'm talking about, but it was easy, and it wasn't a choice that I allowed myself to think about. So there we are. Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: In my personal world, so Tim and I are moving. We're on the move. We are transitioning from South Carolina to North Carolina. So I think I may have shared a bit of this news, but Tim has acquired his first software developer job, which is just phenomenal. It is in North Carolina. He does need to be there in person for it. So we are currently selling our South Carolina house and then moving. It's not too far. 
It's like three and a half hours away to where we're moving in North Carolina because we're already pretty far north in South Carolina. So yeah, there's always another box that needs to be packed. And there's always just something else that you forget, another thing that you want to take to Goodwill or try to give to a neighbor. It's a good way to purge. I will definitely say that every time you move, it's a good time to get rid of things. CHRIS: That is a very cup half-full point of view on it, but yeah, it feels true. STEPH: [laughs] It's true. I'm a very cup half-full person. For more technical news, for more client stuff that I've been working on, so I think the last time we chatted, I was sharing that we had this mysterious CI slowdown where we were going from CI builds taking around 25 minutes to spiking to 35, sometimes 45 minutes, and I have an update there. So we found out some really great things, and we have gotten it back down to about 23 minutes, which is where CI is running currently. As for the actual whodunit, like what caused this specific slowdown, we got to a point where we were like, we're doing so much investigative work to understand exactly what caused this that it felt less helpful because at the end of the day, we really just wanted to address the issue. And so solving the mystery of exactly what caused this started to feel less and less meaningful because we're like, well, we want to improve this anyways. So even if we found that one line or something that happened that caused this, we want a bigger solution to this type of problem because then this could happen again; like, someone else maybe adds one line or something happens, and things get thrown off balance, and then suddenly, we have a slowdown, and that just takes too long to investigate. So I don't have a concrete whodunit answer for the slowdown. But we've learned a couple of things; one of the things that we learned is we're using parallel_tests to then split our tests across all of the CPUs that are then running the RSpec tests. And we realized that we weren't actually splitting tests based on runtime data. So there are a couple of ways that parallel_tests will let you divvy up your tests, and one of those ways is file size. So you can split the files based on the size of the file, or you can use info from the runtime log. So then parallel_tests can be a little bit more intelligent about like, well, I know how long these files take, so I'm going to split it based on that versus just the size of the file. And we realized that we were defaulting to the file size instead of the runtime even though we all thought we were using runtime. And the reason for this took a bit of source code diving because looking at the README for parallel_tests, it looked like as long as we're passing in a file to the runtime log path, then parallel_tests is going to use that runtime data. But then there's some sneaky-sneaky in there that I'll actually link to in the show notes in case anybody's interested. But if you are setting a particular flag and don't pass in another flag, then parallel_tests is going to be like, cool, I'm going to portion out your tests based on file size instead of the runtime. So we fixed that, or we updated that, and that has had a significant improvement for the tests being split out more evenly. So we didn't have a CPU that was taking 25 minutes while the next CPU was only taking like 17 minutes.
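For reference, the split strategy they're describing is parallel_tests' group-by option. A rough sketch of the runtime-based setup, going from memory of the parallel_tests README, so the flag and formatter names are worth double-checking against the version you're on:

```sh
# 1. Record per-file runtimes while the suite runs, e.g. via .rspec_parallel:
#      --format progress
#      --format ParallelTests::RSpec::RuntimeLogger --out tmp/parallel_runtime_rspec.log
#
# 2. Then ask parallel_tests to split by that recorded runtime instead of by file size:
bundle exec parallel_rspec --group-by runtime --runtime-log tmp/parallel_runtime_rspec.log spec/
```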
And parallel_tests also provides some really helpful data: because we have that runtime log file, we could tell how long each CPU is running and how the tests are getting split. So the past couple of weeks, it's been heavy measure, measure, measure, take all the data, create lots of graphs, understand what's happening, and then look for ways to then fix it. So for figuring out how these files, or how the tests, were being distributed, we had a number of graphs that were just showing us what's actually happening. So then we could track the improvement, so that was really nice. It was the measure twice and change something once [laughs], and then we got to see the benefit from it. For scaling the CI, we are looking at adding more machines to then process tests. That has been really interesting because we're at the point where we are adding more machines, but if we add more machines, we're not going to speed up how quickly our CI processes everything. Because we are splitting tests by file and not by example, we're always going to have this effect of a tentpole. So if we have a file that takes 10 minutes, that's the fastest we're ever going to get. So Joël and I are in discussions right now where we still really want to understand what's the fastest we can achieve just by adding another machine or two versus are we at the point that, okay, scaling horizontally and adding more machines has been helpful, but we have reached the breaking point where we actually need to divvy out the tests at a smaller scale and have a queue approach? So then that way, we can really harness the power of not having one file that takes 10 minutes, and we don't have to care either. So if somebody adds a test to a file and suddenly a file goes from 12 minutes to like...well, hopefully, they added more than one test. [laughs] But let's say it goes from 10 minutes to 15 minutes; we don't want to have to manage that and understand that there's a tentpole. We just want to be able to divvy out all the examples and then have a queue approach. That's probably going to be MVP two of this, but we're still weighing that out. But it's just been really interesting to realize that scaling horizontally really only takes you so far. Like, we've added one machine, maybe one more, so then we'll have three total. And then it's like, okay, that's great, but now we need to actually address this other bigger problem. CHRIS: I know we've talked about this in previous episodes, but I'm super interested to hear as you progress into the queue approach because that's something that's been top of mind for me for a while. I don't know if we've talked about it before specifically, but Knapsack Pro is the one that I'm aware of that's available as a service that does this. Do you have other tools that you're looking at for that, or is this still in the exploratory phase? STEPH: Knapsack is still a top contender. There's also RSpec Queue; that's another one that we have in mind. Unfortunately, I really wish parallel_tests let us do this, but parallel_tests just doesn't quite offer that feature. And someone on the team, I think, even reached out to the maintainer of parallel_tests, and they were like, "Yep, you're totally right. We're actually more focused on making sure that this works for everybody versus specific features." And they gave a really nice thoughtful response, which we appreciated, so at least we could confirm that parallel_tests won't do exactly the thing that we need.
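As a rough back-of-the-envelope version of that tentpole effect, with made-up numbers: say the whole suite is 60 minutes of serial work and the largest single spec file takes 10 minutes. With file-level splitting, the wall-clock time is at best max(longest file, total time / number of processes). Four processes gives max(10, 15) = 15 minutes; six gives max(10, 10) = 10 minutes; twelve still gives max(10, 5) = 10 minutes. Past that point, extra machines buy nothing, which is exactly why the next step is divvying out individual examples from a queue rather than whole files.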
So yeah, RSpec Queue, Knapsack, I think those are the top two that I'm familiar with. CHRIS: Gotcha. I don't know if I've seen RSpec Queue before. I'm intrigued. So actually, an interesting thing happened. While I was away on vacation, one of the folks who just joined the team, as one of their first steps, noticed that our CircleCI config wasn't actually taking advantage of the parallelism that we had configured; that's on me. I turned on parallelism and then never did anything with it, which is a complete waste. And so I was super happy to come back and saw that CI, which had been creeping up to six or seven minutes, had suddenly dropped back down to two to three minutes sort of thing. I was like, this is amazing. But now I'm at the point where our RSpec suite is spreading across the different, I think, it's like four different cores that we have available, but it's not doing it as efficiently as we would like. So I'm like, oh, okay, can we dial it up to 11? But I'm intrigued; I've only looked very much in passing at RSpec Queue, literally now that you've mentioned it. But Knapsack Pro exists as a different service. And so, as far as I understand, the agent that's running is going to communicate and say, "Give me another test. Give me another test." But there needs to be some external process running and managing that queue. Does RSpec Queue do that? Somebody owns the queue, right? Who owns it? Do you understand how that works? STEPH: So I was definitely familiar with this. If you'd asked me a couple of weeks ago, when I was diving heavily into the queue work, before we transitioned to focusing on adding new machines, I was very up to speed on this. So I may get a couple of things wrong, but my understanding is that with RSpec Queue, you're going to manage your own queue. So you bring in the gem and then use something like Redis, so then you are in charge of that. And with Knapsack, then you are using their service to manage that queue. And then they have found ways to optimize around what if you can't reach their API or something; their service is down? And making sure that that doesn't impact your CI, so you can still run your tests even if you can't reach their queue somewhere. So that's my current understanding: RSpec Queue, you own it; Knapsack, they're going to own it. CHRIS: Gotcha. That makes sense. That about maps to what I was expecting, and so I wonder if I could use RSpec Queue. Now I'm going to have to go research that. But it's always nice to have new things to look at on this to go at ludicrous speed. That's what I'm going for. I want to get to ludicrous speed for our CI. STEPH: I like that name. I haven't heard of that speed. I feel like I have. I feel like you've dropped that before, [laughs] like you've used that. CHRIS: I don't know; quite possibly, I have. It's a Spaceballs reference. It's a throwback to days of old. STEPH: Well, then we may be investigating RSpec Queue together. Because yeah, Joël's and my goal for this week has been very much to figure out, what are our boundaries with TeamCity? What are our boundaries with horizontal scaling? And I think we're both getting to that conclusion of like, okay, this has been good, it's helpful, but we really need to look into the queue stuff if we really want to see significant progress. Also, some of the stuff we're doing because we're pushing on it: we are manually splitting files.
So if there's a file that has created this tentpole that's taking 10 minutes, but we know ideally most of the other files only take six minutes, then we are splitting that file, so then we have two spec files that are associated with the same class. And then using that as a way to say, okay, what would this look like? Let's say if this were better balanced. And that's also been pushing us in the direction of like, okay, this is fun, this is informative, but it's not sustainable. We don't want to have to keep worrying about splitting these files and doing this manually and pushing us towards that queue-based approach. MIDROLL AD: And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with Mongo DB, SQL, Oracle and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30 day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial head to studio3t dot com forward slash free. That's studio dot com forward slash free. But shifting gears just a bit, we have a listener question. So this person wrote in, "I have listened and loved your podcast for many years dreaming of getting a job with people half as thoughtful and intentional as you, and finally it happened. I have my first junior dev job, and my co-workers and bosses are all super awesome. Up until now, I've been flying solo. And in my new job, I've been finding it very unsettling to resolve merge conflicts. As careful as I am to comb through the conflict and contact the other developer if needed, I feel like I am covering my eyes and crossing my fingers whenever I select the resolve conflict button. Is there some type of process or checklist I could rely on? Is it normal to have such a high fear factor with a merge conflict? Any advice or maybe just a bit of been there felt that way...?" All right. So one, that's fabulous, congratulations on the new job. That's very exciting. I think I've voiced this many times, getting your first junior dev job is so hard, and so I'm so excited when it works out for people, and they get there. And then, for the merge conflict, I have thoughts. Chris, do you want to start? Shall I start? How are you feeling? CHRIS: Why don't you start? Well, actually, I'm going to add some pre-commentary, and then I think you should lead into our actual answer. But first, I just want to say a deep thank you to this listener for sending in the question. Again, we really love getting these questions. And also, thank you for the very kind words. To be clear, listener, if you're going to send in a question, you don't have to say very kind words, but they are really wonderful to hear and especially to hear if we had any part in helping this person feel more comfortable getting into that first dev role and having an idea of what maybe a good version of that could look like. 
Additionally, I really love the shape of this question because it gets into the people stuff and the tech stuff, so I'm super excited about this question. Actually, both Steph you and I responded very quickly to this one. And so it really did catch our attention because I think it crosses that boundary in an interesting way that I think is sort of The Bike Shed space in the world. But to that end, you did reply first in our email chain. So I think you should start, and then I'll follow on after that. STEPH: I should also check with you. Wait, so you don't have a filter on your email that's like kind words only to The Bike Shed, and then you filter out anything that's negative? CHRIS: I have a sentiment analysis, and if it's even neutral, it gets sent straight to the trash, only purely positive. No, constructive feedback is welcome too. We would love to hear that. Well, love is a strong word. We would accept it into our inboxes and then deal with it, but yeah. STEPH: [laughs] It will be tolerated. Must require at least three hearts in all emails; just kidding. [laughs] CHRIS: Are you kidding? I'm counting them now, and I see a lot of hearts in our emails. [laughter] STEPH: Merge conflicts. So is it normal to have such a high fear factor with a merge conflict? I'm going to say absolutely. Resolving a merge conflict can be really tricky and confusing. And I think; frankly, it's something that comes with just time and practice where then you start to feel more confident. As you're resolving these, you're going to feel more comfortable with understanding what's in the branch and the code changes that you're pulling in versus something that you need to keep on your side. So I think over time, that fear will subside. But I do think it's totally normal for that to be a very scary thing that then takes practice to become accustomed to it. As for if there is some type of process or checklist, I don't know of a particular checklist, but I do have a couple of ideas. So one of the things that I do is I will often push my code to whatever management system I'm using. So if I'm using GitHub, then I'm going to push up my branch because then, at least that way, someone has a copy of my work. So if I do something and I completely botch it locally, I know I can always reset to whatever it is that I pushed up to GitHub, so then that way, I have more freedom to make mistakes and then reset from there. So that is one idea is just put it somewhere that you know is safe, so that way you now have this comfortable sandbox to then make mistakes. The other one is run the test. So hopefully, the application that you're working with has tests that you can trust; if not, that could be another conversation. But if they do have tests, then you can run those, and then hopefully, that would let you know that if you have left something in, like maybe you left a syntax error, or maybe you removed some code that you shouldn't have because you weren't sure, then those tests are going to fail, and they'll let you know that something went wrong. And you can run those while you're still in the middle of that merge conflict as long as you've addressed like...well, no, if you haven't addressed syntax errors, that's still a time that you can run it, and it's going to let you know that you haven't caught all of the issues yet. So you don't have to wait till you're done to then go ahead and run that. A couple of other ideas, practice. So go ahead and create your own merge conflicts on purpose. 
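If it helps to see what that practice can look like, here's a minimal recipe for manufacturing a conflict in a throwaway repo; the file and branch names are just placeholders, and whether your default branch is main or master depends on your Git configuration:

    # scratch repo with one committed file
    git init conflict-practice && cd conflict-practice
    echo "original line" > notes.txt
    git add notes.txt && git commit -m "Add notes"

    # change the same line on a feature branch...
    git checkout -b feature
    echo "feature version" > notes.txt
    git commit -am "Edit notes on feature"

    # ...then change it differently on the default branch and merge
    git checkout -
    echo "default branch version" > notes.txt
    git commit -am "Edit notes on the default branch"
    git merge feature    # CONFLICT (content): Merge conflict in notes.txt

    # poke at the conflict markers, then back out cleanly and try again
    git merge --abort

Because the repo is disposable, there's nothing to break, which makes it a low-stakes place to get used to reading the <<<<<<< / ======= / >>>>>>> markers.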
So this is something that I think is really helpful because it will teach you, one, what causes a merge conflict? Because now you have to figure out how to create one, and then it will help you become comfortable because you're in a completely safe place where you have made up the issue, and now you're having to resolve that, so it'll help you become more confident in reading that merge conflict message. And then last but certainly not least, grab a buddy so if you are just feeling super nervous. Anytime I'm doing something that I just feel a little nervous about, then I just ask someone like, "Hey, would you look over my shoulder? Would you pair with me while I do this?" And I have found that's incredibly helpful because it eases some of my fear. I've got someone else that is also looking through this with me. But I also find it really helpful because then it encourages that person to be like, hey, if they're ever in a spot that they need to pair, I want them to know that they can also reach out to me and have that same buddy system. I guess that's my checklist. That's the one I would create. How about you, Chris? What do you think? CHRIS: Well, first, I just want to say that basically everything you said I 100% agree with, and purposely I think was great that you actually replied to the email first and that you're saying those things first because I think everything that you said is true and is foundational. And it's sort of the approach that I would definitely recommend taking as well. My answer, then adding on to that, has to do with how I've approached learning about this space in my own career. To name it, to answer the core question, is it reasonable to be scared of this? Yes, Git is confusing. Git is deeply confusing. I absolutely love Git. I have spent a lot of time trying to understand it, and in understanding it, I've come to love it. But it's only through deep effort that I've gotten to that place. And actually, the interface, the way that we work with Git on a day-to-day basis, particularly the command line is rough. I'm going to say, what does Git checkout do? Well, it does just about everything, it turns out. That command just does all of the stuff, and that's too much. It's, frankly, the UI for Git, specifically the command-line user interface; the commands that we run to manipulate the Git history are not super intuitive. But it turns out if you pop open the hood, the object model underneath the core way that Git stores your code is actually very simple. I find it's very easy to understand, but I, unfortunately, have found that I can't understand it without dropping down to that level. And so, in my own adventures, I kind of went deep on this topic a couple of years ago, and I created a Vim plugin because obviously, that's the best way to encapsulate your knowledge about Git, and so I created a plugin called Vim Conflicted. I don't necessarily recommend the plugin. It's fine if you want to use it. I don't do a great job of maintaining my plugins at this point, to be honest. But there was a weekend where I was trying to understand the world of Git and merge conflicts in particular, and it was really sort of fighting me. And as I started to understand it better, there's a little diagram that I drew on the README that I think is probably the most interesting artifact from it. But it's this idea that there are actually four files, four versions of a given file involved in any merge conflict. And that realization shifted my thinking a good amount. 
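For the curious, Git exposes three of those four versions directly while a conflict is in progress, as numbered "stages" in the index; the fourth is the conflict-marker-riddled copy sitting in your working tree. These are stock Git commands, and app/models/user.rb is just a placeholder path:

    # list every conflicted path and its stages
    git ls-files -u

    # the three index versions of a conflicted file
    git show :1:app/models/user.rb   # stage 1: the common ancestor (merge base)
    git show :2:app/models/user.rb   # stage 2: "ours" -- the branch being merged into
    git show :3:app/models/user.rb   # stage 3: "theirs" -- the branch being merged in

    # take one side wholesale when that's the right resolution
    git checkout --ours app/models/user.rb
    git checkout --theirs app/models/user.rb

Seeing the base version alongside ours and theirs is often what makes a confusing conflict click, because it shows what both sides were changing from.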
And then as I started to think about that, I was like, oh, okay, and then I want to see this version of it, and this version of it, and this combination, and the diff between these two, and that was super helpful for me. More generally, I also made a course on Upcase about Git as I tried to understand it better. And there are two particular videos from the middle of the course named the Git Object Model and Object Model Operations. And again, those two videos deal with popping the hood on Git, looking inside it, and what actually is happening to your code as you perform different Git operations. One of the wonderful things about Git is it is immutable. So you're never going to destroy your Git history if you've committed. So one of the rules that I have is just always be committing, never worry about committing. If you've committed, you can always get back to that version. You would have to try very hard to destroy committed code in Git. It's the things that you do when you haven't yet committed the code that are dangerous. So commit the code, like you said, Steph, push that up to GitHub, so you have a backup of it. You will have a backup locally as well, and that's a thing that you can come to be more comfortable with. But then, from there, there's actually a lot of room to experiment and play around because there's a ton of safety in the way that Git stores the code. You do have to know how to get at it, and that's the unfortunate and tricky part. But I think, again, to sort of summarize, yes, this is confusing. Your feelings are absolutely valid and totally grounded, but it is also knowable, is what I would say. And so, hopefully, there are a couple of breadcrumbs that we've laid there in how you might go about learning about it. But yeah, find a buddy, watch a video or two, and give it a try. This is definitely a thing that you can get there but totally reasonable that your first approximation is this is confusing because it sure is. STEPH: I often forget that Git has that local copy of my code, so I'm so glad you mentioned that. And then yeah, I saw when you linked to Vim Conflicted. The diagrams are great. I had not seen these before. So yeah, I highly recommend folks take a look at those because I found those very valuable. CHRIS: In that case, it's a white background, but I allowed myself to use some colors in the little images to help differentiate the different pieces. And it's an animated diagram, so it's really a high bar for me. [laughs] STEPH: So now the question is, did you go too far? Have you over-optimized? [laughs] CHRIS: I'm going to be honest; it was a weird weekend. STEPH: [laughs] Well, I don't think you've over-optimized. I do think it's wonderful. And I think this is definitely a reference that I'll keep in mind for folks whenever they're learning about merge conflicts or just want to get more knowledgeable about them. I think these diagrams are fabulous. CHRIS: Well, thanks. Yeah, I hope...they frankly were a labor of love, and the course is three and a half hours of me rambling about Git, so hopefully, it's useful to folks. If anything, it was super useful to me because my understanding of Git was deeply crystallized in making that course. But I do hope that it's useful to other folks. And particularly those two videos that I highlighted, I think are the ones that have been most impactful for me in terms of how I think about working with Git and getting comfortable with it. 
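To put a couple of commands behind that "committed code is very hard to lose" point, here's a sketch of the safety nets being described; the branch name and the <sha> are placeholders:

    # before a scary merge, make sure the work is committed and backed up remotely
    git status
    git push origin my-feature-branch

    # if the resolution goes sideways, jump back to exactly what was pushed
    git reset --hard origin/my-feature-branch

    # even commits that seem lost are still reachable for a while via the reflog
    git reflog                 # find the commit you were on
    git reset --hard <sha>     # move the branch back to it

The dangerous window is uncommitted work, so the habit of committing early and often is what makes the rest of this recoverable.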
STEPH: Do you still receive emails every now and then from people, or maybe they are tweets from people that are like, "Hey, I watched one of your videos and found it really helpful." I feel like I still see that every now and then where people are just commenting on like, they watched some of the content that you created for Upcase a while back, and I think that's really cool. I'm curious if you still see that. CHRIS: I do, yeah, from time to time. It is absolutely wonderful whenever I hear that. Again, listener, do not feel the need to send me anything, but it is nice when I get them. STEPH: It does seem like I'm fishing for compliments now. [laughs] CHRIS: It does seem like that. So I want to be clear that's not what's going on here. But it is nice because I do actually forget that they're out there. But a lot of the stuff that I produced for Upcase, in particular, I tried to do more timeless stuff, so like the Vim content was really about how Vim works in a deep way. And the tmux course and the Upcase course...or the tmux course, the Git course. I look back at them, and a couple of little syntactic things have changed. But I'm still like, yeah, I agree with me from six years ago or whatever it was. Oh, that's a weird number to say, and I think is honest. It's fine. I'll just be over here. [laughs] STEPH: [laughs] That's helpful to hear, though, because that's always one of my fears in creating content. It's like, I don't know, it's okay if it's more opinionated and I change my mind and disagree with my past self. But it's more like, yeah, keeping up with is this still accurate? Is this still reflective of the times? And then having to keep that stuff updated. Anywho, that's a whole big thing, content creation. CHRIS: Content creation, but there's a parallel to it that many folks will not be creating content, and I think that's a very fine and good way to go about progressing on the internet. But there's a parallel to it in learning that I think is useful. I, at this point, will typically lean in if there is something in the SQL layer that is fighting me. I have never found effort spent trying to better understand the structured query language to be wasted time. Similarly, Git is one of those tools that is just so core to the workflow that it felt very worth it to me to spend a little bit of extra time to get to a deeper level of comfort with it, and I have not regretted one minute of that. Vim and tmux are pretty similar because they're such core tools for me. But React, I would not call myself a deep expert of React. I follow some of the changes that are happening but not as deeply, and I'm not as worried about it. And if I'm like, I don't know how to do this thing, should I spend two hours learning about it or not? With frameworks and tools that have not been part of my toolset for as long, I will spend less time on them. And I think that the courses that I produced on Upcase mirror that. They're the things that I'm like; I feel very true about these things versus other stuff. Maybe it was in a weekly iteration episode or something like that. But that very much mirrors how I think about learning as well. What are the things that I'm going to continually invest in versus what are the things that I'll sort of keep an eye on from a distance but not necessarily invest as much time in? STEPH: There's a particular article that you're making me think of as we're talking about content creation and, as you mentioned, finding the things that you always find value in investing in. 
There's a wonderful blog post that was recently posted on the thoughtbot blog by Matheus Richard, and it's called The Opportunity Will Find You. And it made me think a lot about what you're talking about, find the things that you're excited about, find the things that you think are a good investment and just go ahead and lean into it. And it's okay if maybe that's not the thing that you're using currently at your work, but if it's something that gets you excited, then go ahead and pursue that. So in this article, for example, Matheus uses the example of learning Rust, and that's something that he's very excited about and wants to learn more about. And then there's another one where he started looking into crafting interpreters. And then that has actually led to then some fruitful work around creating custom RuboCops because then he had more knowledge around how the code is being interpreted so then he could write custom RuboCops. So yeah, plus-one to finding the things that give you energy and joy and leaning into that and investing in it. And if you share it with the world, that's fabulous, and if you don't, then keep it for yourself and enjoy it, whatever makes you happy. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
In this episode, we cover: Introduction (00:00) How Chris got into the world of chaos and teaching middle school science (02:11) The Cengage seasonal model and preparing for the (5:56) How Cengage schedules the chaos and the “day of darkness” (11:10) Scaling and migration and “the inches we need” (15:28) Communicating with different teams and the customers (18:18) Chris's biggest lesson from practicing chaos engineering (24:30) Chris and working at Cengage/Outro (27:40) Links Referenced: Cengage: https://www.cengagegroup.com/ Chris Martello on LinkedIn: https://www.linkedin.com/in/christophermartello/ TranscriptJulie: Wait, I got it. You probably don't know this one, Chris. It's not from you. How does the Dalai Lama order a hot dog?Chris: He orders one with everything.Julie: [laugh]. So far, I have not been able to stump Chris on—[laugh].Chris: [laugh]. Then the follow-up to that one for a QA is how many engineers does it take to change a light bulb? The answer is, none; that's a hardware problem.Julie: Welcome to Break Things on Purpose, a podcast about reliability, quality, and ways to focus on the user experience. In this episode, we talk with Chris Martello, manager of application performance at Cengage, about the importance of Chaos Engineering in service of quality.Julie: Welcome to Break Things on Purpose. We are joined today by Chris Martello from Cengage. Chris, do you want to tell us a little bit about yourself?Chris: Hey, thanks for having me today, Julie, Jason. It's nice to be here and chat with you folks about Chaos Engineering, Chaos Testing, Gremlin. As Julie mentioned I'm a performance manager at Cengage Learning Group, and we do a fair amount of performance testing, both individual platforms, and coordinated load testing. I've been a software manager at Cengage for about five years, total of nine altogether there at Cengage, and worn quite a few of the testing hats, as you can imagine, from automation engineer, performance engineer, and now QA manager. So, with that, yeah, my team is about—we have ten people that coordinate and test our [unintelligible 00:01:52] platforms. I'm on the higher-ed side. We have Gale Research Library, as well as soft skills with our WebAssign and ed2go offerings. So, I'm just one of a few, but my claim to fame—or at least one of my passions—is definitely chaos testing and breaking things on purpose.Julie: I love that, Chris. And before we hear why that's your passion, when you and I chatted last week, you mentioned how you got into the world of QA, and I think you started with a little bit of different type of chaos. You want to tell us what you did before?Chris: Sure, even before a 20-year career, now, in software testing, I managed chaos every day. If you know anything about teaching middle school, seventh and eighth-grade science, those folks have lots of energy and combine that with their curiosity for life and, you know, their propensity to expend energy and play basketball and run track and do things, I had a good time for a number of years corralling that energy and focusing that energy into certain directions. And you know back, kind of, with the jokes, it was a way to engage with kids in the classroom was humor. And so there was a lot of science jokes and things like that. But generally speaking, that evolved into I had a passion for computers, being self-taught with programming skills, project management, and things like that. 
It just evolved into a different career that has been very rewarding.And that's what brings me to Cengage and why I come to work every day with those folks is because instead of now teaching seventh and eighth-grade science to young, impressionable minds, nowadays I teach adults how to test websites and how to test platforms and services. And the coaching is still the same; the mentoring is still the same. The aptitude of my students is a lot different, you know? We have adults, they're people, they require things. And you know, the subject matter is also different. But the skills in the coaching and teaching is still the same.Jason: If you were, like, anything like my seventh-grade science teacher, then another common thing that you would have with Chaos Engineering and teaching science is blowing a lot of things up.Chris: Indeed. Playing with phosphorus and raw metal sodium was always a fun time in the chemistry class. [laugh].Julie: Well, one of the things that I love, there are so many parallels between being a science teacher and Chaos Engineering. I mean, we talk about this all the time with following the scientific process, right? You're creating a hypothesis; you're testing that. And so have you seen those parallels now with what you're doing with Chaos Engineering over there at Cengage?Chris: Oh, absolutely. It is definitely the basis for almost any testing we do. You have to have your controlled variables, your environment, your settings, your test scripts, and things that you're working on, setting up that experiment, the design of course, and then your uncontrolled variables, the manipulated ones that you're looking for to give you information to tell you something new about the system that you didn't know, after you conducted your experiment. So, working with teams, almost half of the learning occurs in just the design phase in terms of, “Hey, I think this system is supposed to do X, it's designed in a certain way.” And if we run a test to demonstrate that, either it's going to work or it's not. Or it's going to give us some new information that we didn't know about it before we ran our experiment.Julie: But you also have a very, like, cyclical reliabilities schedule that's important to you, right? You have your very important peak traffic windows. And what is that? Is that around the summertime? What does that look like for you?Chris: That's right, Julie. So, our business model, or at least our seasonal model, runs off of typical college semesters. So, you can imagine that August and September are really big traffic months for us, as well as January and part of February. It does take a little extra planning in order to mimic that traffic. Traffic and transactions at the beginning of the semester are a lot different than they are at the middle and even at the end of the semester.So, we see our secondary higher education platforms as courseware. We have our instructors doing course building. They're taking a textbook, a digitized textbook, they're building a course on it, they're adding their activities to it, and they're setting it up. At the same time that's going along, the students are registering, they are signing up to use the course, they're signing up to their course key for Cengage products, and they're logging into the course. 
The middle section looks a lot like taking activities and tests and quizzes, reading the textbook, flipping pages, and maybe even making some notes off to the side.And then at the end of the semester, when the time is up, quite literally on the course—you know, my course semester starts from this day to this day, in 15th of December. Computers being as precise as they are, when 15th of December at 11:59 p.m. rolls off the clock, that triggers a whole bunch of cron jobs that say, “Hey, it's done. Start calculating grades.”And it has to go through thousands of courses and say, “Which courses expired today? How many grades are there submitted? How many grades are unsubmitted and now I have to calculate the zeros?” And there's a lot of math that goes in with that analytics. And some of those jobs, when those midnight triggers kick off those jobs, it will take eight to ten hours in order to process that semester's courses that expire on that day.Julie: Well, and then if you experience an outage, I can only assume that it would be a high-stress situation for both teachers and students, and so we've talked about why you focus so heavily on reliability, I'd love to hear maybe if you can share with us how you prepare for those peak traffic events.Chris: So yeah, it's challenging to design a full load test that encompasses an entire semester's worth of traffic and even the peaks that are there. So, what we do is, we utilize our analytics that give us information on where our peak traffic days lie. And it's typically the second or third Monday in September, and it's at one or two o'clock in the afternoon. And those are when it's just what we've seen over the past couple of years is those days are our typical traffic peaks. And so we take the type of transactions that occur during those days, and we calibrate our load tests to use those as a peak, a one-time, our performance capacity.And then that becomes our x-factor in testing. Our 1x factor is what do we see in a semester at those peaks? And we go gather the rest of them during the course of the semester, and kind of tally those up in a load test. So, if our platforms can sustain a three to six-hour load test using peak estimate values that come from our production analysis, then we think we're pretty stable.And then we will turn the dial up to two times that number. And that number gives us an assessment of our headroom. How much more headroom past our peak usage periods do we have in order to service our customers reliably? And then some days, when you're rolling the dice, for extra bonus points, we go for 3x. And the 3x is not a realistic number.I have this conversation with engineering managers and directors all the time. It's like, “Well, you overblow that load test and it demonstrated five times the load on our systems. That's not realistic.” I says, “Well, today it's not realistic. But next week, it might be depending on what's happening.”You know, there are things that sometimes are not predictable with our semesters and our traffic but generally speaking it is. So, let's say some other system goes down. Single-sign-on. Happens to the best of us. If you integrate with a partner and your partner is uncontrolled in your environment, you're at their mercy.So, when that goes down, people stop entering your application. When the floodgates open, that traffic might peak for a while in terms of, hey, it's back up again; everybody can log in. It's the equivalent of, like, emptying a stadium and then letting everybody in through one set of doors. 
You can't do it. So, those types of scenarios become experimental design conversations with engineering managers to say, “At what level of performance do you think your platform needs to sustain?”And as long as our platforms can sustain within two to three, you know, we're pretty stable in terms of what we have now. But if we end up testing at three times the expected load and things break catastrophically, that might be an indication to an architect or an engineering director, that, hey, if our capacity outlives us in a year, it might be time to start planning for that re-architecture. Start planning for that capacity because it's not just adding on additional servers; planning for that capacity might include a re-architecture of some kind.Julie: You know, Chris, I just want to say to anybody from Coinbase that's out there that's listening, I think they can find you on [LinkedIn](https://www.linkedin.com/in/christophermartello/) to talk about load testing and preparing for peak traffic events.Chris: Yeah, I think the Superbowl saw one. They had a little QR code di—Julie: Yeah.Chris: —displayed on the screen for about 15 seconds or so, and boy, I sure hope they planned for that load because if you're only giving people 15 seconds and everybody's trying to get their phone up there, man I bet those servers got real hot real fast. [laugh].Julie: Yeah, they did. And there was a blip. There was a blip.Chris: Yeah. [laugh].Julie: But you're on LinkedIn, so that's great, and they can find you there to talk to you. You know, I recently had the opportunity to speak to some of the Cengage folks and it was really amazing. And it was amazing to hear what you were doing and how you have scheduled your Chaos Engineering experiments to be something that's repeatable. Do you want to talk about that a little bit for folks?Chris: Sure. I mean, you titled our podcast today, “A Day of Darkness,” and that's kind of where it all started. So, if I could just back up to where we started there with how did chaos become a regular event? How did chaos become a regular part of our engineering teams' DNA, something that they do regularly every month and it's just no sweat to pull off?Well, that Day of Darkness was 18 hours of our educational platforms being down. Now, arguably, the students and instructors had paid for their subscriptions already, so we weren't losing money. But in the education space and in our course creations, our currency is in grades and activities and submissions. So, we were losing currency that day and losing reputation. And so we did a postmortem that involved engineering managers, quality assurance, performance folks, and we looked at all the different downtimes that we've had, and what are the root causes.And after conferring with our colleagues in the different areas—we've never really been brought together in a setting like that—we designed a testing plan that was going to validate a good amount of load on a regular basis. And the secondary reason for coordinating testing like that was that we were migrating from data center to cloud. So, this is, you know, about five, six years ago. So, in order to validate that all that plumbing and connections and integrations worked, you know, I proposed I says, “Hey, let's load test it all the same time. Let's see what happens. Let's make sure that we can run water through the pipes all day long and that things work.”And we plan this for a week; we planned five days. 
But I traveled to Boston, gathered my engineers kind of in a war room situation, and we worked on it for a week. And in that week, we came up with a list of 90 issues—nine-zero—that we needed to fix and correct and address for our cloud-based offerings before it could go live. And you know, a number of them were low priority, easy to fix, low-hanging fruit, things like that. But there were nine of them that if we hadn't found, we were sure to go down.And so those nine things got addressed, we went live, and our system survived, you know, and things went up. After that, it became a regular thing before the semesters to make sure, “Hey, Chris, we need to coordinate that again. Can you do it?” Sure enough, let's coordinate some of the same old teams, grab my run sheet. And we learned that we needed to give a day of preparation because sometimes there were folks that their scripts were old, their environment wasn't a current version, and sometimes the integrations weren't working for various reasons of other platform releases and functionality implementation.So, we had a day of preparation and then we would run. We'd check in the morning and say, “Everybody ready to go? Any problems? Any surprises that we don't know about, yet?” So, we'd all confer in the morning and give it a thumbs up.We started our tests, we do a three-hour ramp, and we learned that the three-hour ramp was pretty optimal because sometimes elastic load balancers can't, like, spin up fast enough in order to pick up the load, so there were some that we had to pre-allocate and there were others that we had to give enough time. So, three hours became that magic window, and then three hours of steady-state at our peak generation. And now, after five years, we are doing that every month.Jason: That's amazing. One of the things you mentioned in there was about this migration, and I think that might tie back to something you said earlier about scaling and how when you're thinking of scaling, especially as I'm thinking about your migration to the cloud, you said, “Scaling isn't just adding servers. Sometimes that requires re-architecting an application or the way things work.” I'm curious, are those two connected? Or some of those nine critical fixes a part of that discovery?Chris: I think those nine fixes were part of the discovery. It was, you can't just add servers for a particular platform. It was, how big is the network pipe? Where is the DNS server? Is it on this side or that side? Database connections were a big thing: How many are there? Is there enough?So, there was some scaling things that hadn't been considered at that level. You know, nowadays, fixing performance problems can be as easy as more memory and more CPU. It can be. Some days it's not. Some days, it can be more servers; some days, it can be bigger servers.Other times, it's—just, like, quality is everybody's job, performance fixing is not always a silver bullet. There are things like page optimization by the designers. There's code optimization by your front-end engineers. And your back-end engineers, there are database optimizations that can be made: Indexing, reindexing on a regular basis—whatever that schedule is—for optimizing your database queries. If your front-end goes to an API for five things on the first page, does it make five extra calls, or does it make one call, and all five things come across at the same time?So, those are considerations that load performance testing, can tell you where to begin looking. 
But as quality assurance and that performance lead engineer, I might find five things, but the fixes weren't just more testing and a little bit of extra functionality. It might have involved DevOps to tweak the server connections, it might have involved network to slim down the hops from four different load balancers to two, or something like that. I mean, it was always just something else that you never considered that you utilized your full team and all of their expertise and skills in order to come up with those inches.And that's one of my favorite quotes from Every Given Sunday. It's an older football movie starring Al Pacino. He gives this really awesome speech in a halftime type of setting, and the punch line for this whole thing is, “The inches we need are everywhere around us.” And I tell people that story in the terms of performance is because performance, at the software level, is a game of inches. And those inches are in all of our systems and it's up to us as engineers to find them and add them up.Julie: I absolutely love everything about that. And that would have made a great title for this episode. “The Inches we Need are Everywhere Around Us.” We've already settled on, “A Day of Darkness with Chris Martello,” though. On that note, Chris, some of the things that you mentioned involve a lot of communication with different teams. How did you navigate some of those struggles? Or even at the beginning of this, was it easy to get everybody on board with this mindset of a new way of doing things? Did you have some challenges?Chris: There were challenges for sure. It's kind of hard to picture, I guess, Cengage's platform architecture and stuff. It's not just one thing. It's kind of like Amazon. Amazon is probably the example is that a lot of their services and things work in little, little areas.So, in planning this, I looked at an architecture diagram, and there's all these things around it, and we have this landscape. And I just looked down here in the corner. I said, “What's this?” They said, “Well, that's single-sign-on.” I says, “Well, everything that touches that needs to be load tested.”And they're like, “Why? We can't do that. We don't have a performance environment for that.” I said, “You can't afford not to.” And the day of darkness was kind of that, you know, example that kind of gave us the [sigh] momentum to get over that obstacle that said, “Yeah, we really do need a dedicated performance environment in order to prove this out.”So, then whittling down that giant list of applications and teams into the ones that were meaningful to our single-sign-on. And when we whittled that down, we now have 16 different teams that regularly participate in chaos. Those are kind of the ones that all play together on the same playing field at the same time and when we find that one system has more throughput than another system or an unexpected transaction load, sometimes that system can carry that or project that load onto another system inadvertently. And if there's timeouts at one that are set higher than another, then those events start queuing up on the second set of servers. It's something that we continually balance on.And we use these bits of information for each test and start, you know, logging and tracking these issues, and deciding whether it's important, how long is it going to take to fix, and is it necessary. 
And, you know, you're balancing risk and reward with everything you're doing, of course, in the business world, but sometimes the, you know—“Chris, bring us more quality. You can do better this month. Can you give us 20 more units of quality?” It's like, “I can't really package that up and hand it to you. That's not a deliverable.”And in the same way that reputation that we lose when our systems go down isn't as quantifiable, either. Sure, you can watch the tweets come across the interwebs, and see how upset our students are at those kinds of things, but our customer support and our service really takes that to heart, and they listen to those tweets and they fix them, and they coordinate and reach out, you know, directly to these folks. And I think that's why our organization supports this type of performance testing, as well as our coordinated chaos: The service experience that goes out to our customers has to be second to none. And that's second to none is the table stakes is your platform must be on, must be stable, and must be performing. That's just to enter the space, kids. You've got to be there. [laugh].You can't have your platform going down at 9 p.m. on a Sunday night when all these college students are doing their homework because they freak out. And they react to it. It's important. That's the currency. That is the human experience that says this platform, this product is very important to these students' lives and their well-being in their academic career. And so we take that very seriously.Jason: I love that you mentioned that your customer support works with the engineering team. Because makes me think of how many calls have you been on where something went wrong, you contacted customer support, and you end up reaching this thing of, they don't talk to engineering, and they're just like, “I don't know, it's broken. Try again some other time.” Or whatever that is, and you end up lost. And so this idea of we often think of DevOps is developers and operations engineers working together and everybody on the engineering side, but I love that idea of extending that.And so I'm curious, in that vein, does your Chaos Engineering, does your performance testing also interact with some of what customer support is actually doing?Chris: In a support kind of way, absolutely. Our customer call support is very well educated on our products and they have a lot of different tools at their disposal in order to correct problems. And you know, many of those problems are access and permissions and all that kind of stuff that's usual, but what we've seen is even though that our customer base is increasing and our call volume increases accordingly, the percentage decreases over time because our customer support people have gotten so good at answering those questions. And to that extent, when we do log issues that are not as easily fixed with a tweak or knob toggle at the customer support side, those get grouped up into a group of tickets that we call escalation tickets, and those go directly to engineering.And when we see groups of them that look and smell kind of the same or have similar symptoms, so we start looking at how to design that into chaos, and is it a real performance issue? Especially when it's related to slowness or errors that continuously come at a particular point in that workflow. 
So, I hope I answered that question there for you, Jason.Jason: Yeah, that's perfect.Julie: Now, I'd like to kind of bring it back a little bit to some of the learnings we've had over this time of practicing Chaos Engineering and focusing on that quality testing. Is there something big that stands out in your mind that you learned from an experiment? Some big, unknown-unknown that you don't know that you ever could have caught without practicing?Chris: Julie, that's a really good question, and there isn't, you know, big bang or any epiphanies here. When I talk about what is the purpose of chaos and what do we get out of it, there's the human factor of chaos in terms of what does this do for us. It gets us prepared, it gets us a fire drill without the sense of urgency of production, and it gets people focused on solving a problem together. So, by practicing in a performance, in a chaos sort of way, when performance does affect the production, those communication channels are already greased. When there's a problem with some system, I know exactly who the engineer is to go to and ask him a question.And that has also enabled us to reduce our meantime to resolution. That meantime to resolution factor is predicated on our teams knowing what to do, and how to resolve those. And because we've practiced it, now that goes down. So, I think the synergy of being able to work together and triangulate our teams on existing issues in a faster sort of way, definitely helps our team dynamic in terms of solving those problems faster.Julie: I like that a lot because there is so much more than just the technical systems. And that's something that we like to talk about, too. It is your people's systems. And you're not trying to surprise anybody, you've got these scheduled on a calendar, they run regularly, so it's important to note that when you're looking at making your people's systems more resilient, you're not trying to catch Chris off guard to see if he answered the page—Chris: That's right.Julie: —what we're working on is making sure that we're building that muscle memory with practice, right, and iron out the kinks in those communication channels.Chris: Absolutely. It's definitely been a journey of learning both for, you know, myself and my team, as well as the engineers that work on these things. You know, again, everybody chips in and gets to learn that routine and be comfortable with fighting fires. Another way I've looked at it with Chaos Engineering, and our testing adventures is that when we find something that it looks a little off—it's a burp, or a sneeze, or some hiccup over here in this system—that can turn into a full-blown fever or cold in production. And we've had a couple of examples where we didn't pay attention to that stuff fast enough, and it did occur in production.And kudos to our engineering team who went and picked it up because we had the information. We had the tracking that says we did find this. We have a solution or recommended fix in place, and it's already in process. That speaks volumes to our sense of urgency on the engineering teams.Julie: Chris, thank you for that. And before we end our time with you today, is there anything you'd like to let our listeners know about Cengage or anything you'd like to plug?Chris: Well, Cengage Learning has been a great place for me to work and I know that a lot of people enjoy working there. And anytime I ask my teams, like, “What's the best part of working there?” It's like, “The people. 
We work with are supportive and helpful.” You know, we have a product that we'd like to help change people's lives with, in terms of furthering their education and their career choices, so if you're interested, we have over 200 open positions at the current moment within our engineering and staffing choices.And if you're somebody interested in helping out folks and making a difference in people's educational and career paths, this is a place for you. Thanks for the offer, Julie. Really appreciate that.Julie: Thank you, Chris.Jason: Thanks, Chris. It's been fantastic to have you on the show.Chris: It's been a pleasure to be here and great to talk to you. I enjoy talking about my passions with testing as well as many of my other ones. [laugh].Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.
Steph celebrates Utah's adoption day and Daylight Savings Time and troubleshoots a CI build time that had suddenly spiked for a client project using TeamCity. She also shares a minor update regarding the work that thoughtbot is doing to scale horizontally and add more machines quickly and efficiently to process more RSpec tests. Chris was alarmed by logs and unknown-unknowns and had some fun using Git down. Git bless his heart! This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. TeamCity (https://www.jetbrains.com/teamcity/) lograge (https://github.com/roidrage/lograge) Cleaning up local git branches deleted on a remote (https://www.erikschierboom.com/2020/02/17/cleaning-up-local-git-branches-deleted-on-a-remote/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Today is Utah's adoption day. So officially, one year ago, we adopted Utah. He's about a year and a half years old now because we got him when he was around the six-month mark. So Utah, aka Raptor, which is the nickname that you gave him, and aka UD [spelling] the cutie is his other nickname...which I've forgotten, why do you call him Raptor? Why is that a name? CHRIS: Because there's a Utah Raptor. STEPH: A person? [laughs] CHRIS: No, I think it was like the fossils were found in Utah. But the Utah Raptor is a type of dinosaur. And so when I heard Utah, my brain went to Raptor, and then I dropped the Utah sort of a Cockney rhyming slang sort of thing. Shout out to Matt Sumner real quick. But yeah, Raptor. STEPH: Cool. Cool. Cool. I'm so glad I asked. Now I know. I just accepted it when you called him Raptor. I was like, sure, he can be a Raptor. [laughs] CHRIS: I feel like that says a lot about me that you were just like, okay, why not? STEPH: [laughs] CHRIS: That's different and has no apparent connection to the actual name of the creature, but that's fine. I might be a nonsense person. STEPH: Or me for accepting it. You share a lot of nonsense, and I accept a lot of nonsense. That might be our dynamic. [laughs] So it works out. CHRIS: That just may be our dynamic. STEPH: That's why I'm always so nice with the good idea, bad idea, or even terrible. [laughs] CHRIS: You're like, it's all nonsense 100% of the time, but yeah. So Utah is one year into living with you folks. So that's lovely. STEPH: Yeah, and he's growing up so well. Oh, and I've been training him for one of his latest tricks. I'm very excited because it seems to be really sinking in. So every night, we take him out for his final bathroom potty but then before we go to bed. And one night, for some reason, I started singing The Final Countdown. [singing] It's the final countdown. But I started singing it's the final potty instead. So now, when it's time to go out for the bathroom late at night, I look at him, and I start singing. And I start singing [vocalization], and it's working. He's starting to recognize that when I started singing that tune, he's like, okay, and he gets up from his comfy spot, and we go outside. And it brings me a lot of joy. 
CHRIS: That is perhaps the best use of Pavlovian conditioning that I've ever heard of. Also, I really appreciate that you both mentioned the final countdown but then said just in case anyone is unfamiliar with the tune, let me hum a few bars. Thank you for doing the service there. STEPH: I have been singing so much this week. I don't know if Joël Quenneville, who I've been pairing with a lot, appreciates that. Sorry, Joël. But I have been singing so much. And I think that's post-vacation vibes. That's what vacation does for you. And it helps you get back into, you know, lots of singing, or at least it does for me. Let's see, what else is going on this week? So this is the week that we have DST in the USA, so Daylight Savings Time, aka summertime, where we advance our clocks so everybody...although this is going to be late. So at this point, by the time people are hearing this, you're going to have already dealt with all those bugs that have crept up. But those are creeping up this week, where people are starting to notice a lot of those flaky specs that aren't technically flaky. They're actually breaking for real reasons because they were tested in a way that shows they're not considering that daytime boundary. CHRIS: It's as if you spend so much of your time fixing flaky specs that that's where your mind goes with DST. Because I'm going to be honest, part of what you're doing right now is telling me that this is coming up, and I didn't know. I had forgotten about that, which is very exciting, except you lose an hour of sleep for this one, right? Or is it that you gain? STEPH: We're going forward. Yeah, it's fall back and then spring forward. That's how I remember it. CHRIS: Worth it. I'll take the sunshine at night. STEPH: Yeah, it's supposed to be so we have more sunshine during the daylight hours. That's the reasoning for the nonsense, the headaches. On some more technical news, when I came back from vacation, we noticed that the CI build time has suddenly spiked for the client project, where previously we were averaging, I'd say, around 25-26 minutes. There's definitely a range there, but that seems to be pretty consistent. And right now, builds are taking more like 35, sometimes upwards of 45 minutes. And so it's been a bit of a whodunit, or what-caused-it, adventure of figuring out why, what's causing the spike. And so Joël and I have been pairing heavily on that to investigate what's going on, learning a lot about the features that TeamCity offers, and just diving into this particular issue. One thing that brought me joy is that, looking through all the builds that are taking place on TeamCity, I noticed there are a number of builds that are using the RSpec selective testing that I added, where if you only change a test, then we're only going to run those tests instead of the whole suite. And it was one of those changes where I thought, okay, maybe someone's going to get use out of this. Joël and I will probably get use out of this. But I'm actually seeing it in about one in every ten builds, something like that. And I'm just like, oh, this is awesome. One, people are improving tests. That's amazing. And then two, they're benefiting, especially while we have this spike going on. So that was a suggestion from you that I appreciate because that is paying dividends. And so that brought me a lot of joy while looking into this other issue, which we haven't resolved yet. We think it has something to do with how the tests are being balanced across all the different parallelized processes.
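For readers who want to picture the selective-testing step Steph mentions, a minimal sketch might look like the following. The diff base, the spec path convention, and the idea of this living in a standalone script are assumptions for illustration, not the client project's actual TeamCity configuration.

```ruby
#!/usr/bin/env ruby
# Selective spec run: if everything changed on the branch is itself a spec
# file, run only those specs; otherwise fall back to the full suite.
changed = `git diff --name-only origin/main...HEAD`.split("\n").reject(&:empty?)
specs_only = changed.any? && changed.all? { |path| path.match?(%r{\Aspec/.+_spec\.rb\z}) }

if specs_only
  exec("bundle", "exec", "rspec", *changed)
else
  exec("bundle", "exec", "rspec")
end
```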
And we think that there is an imbalance that has happened. And then that's what's really throwing things off. So we can see that one particular process is taking around 26-27 minutes, but then the next process that's highest in time is only taking 17 minutes. So it's like, why is there suddenly ten more minutes that's being attributed to one process? And why is that not getting spread out? So still looking into that. That's the mystery for this week. But that's mostly what's going on in my world. What's up in your world? CHRIS: What is up in my world? I'm going to say a quite alarming thing happened this week, which was we were investigating some changes, or we were investigating some behavior where the particular portion of the system ended up in the logs, just sort of combing through. And I happened to notice this one log line that...our logs tend to be somewhat verbose. They're JSON-structured log format. I've talked about the lograge setup that we use in the past, but there's a bunch. These are long lines of JSON-structured data. But this line that caught my eye was not. It was just some text, and it said, "Unreported event: and then some other texts." And I was like, ah, what? Who didn't report which to when? I did some digging, eventually figured out that this was Sentry. Sentry was logging that it had not reported an event to us. But had we not randomly happened upon this in the logs, which is sort of a random thing to see, we would have missed this, which is scary. I mean, it was missed for a little while. And so Sentry was not reporting certain events. We had made a change, particularly to Sentry's before_send configuration. So there's a way that you can do some amount of filtering client-side or client being, in this case, our Ruby app. So that's like the client-side of Sentry, and then there's their server backend. So that would, weirdly, that's the way the client-server work in this case. But the idea is you can do some proactive filtering of being like, you know what? Rather than sending a ton of noise...because we know there's this one error that we can't stop for reasons. It's a JavaScript Chrome extension that's getting embedded in the app. That doesn't mean anything; that's just noise. Rather than even sending those over to Sentry, let's proactively filter them out. before_send is a function within the Sentry SDK that allows you to do this. But it turns out if you raise an error in there, if you happen to have introduced something that doesn't cover all the possible edge cases, then Sentry will just not let you know and will log, interestingly, that they did not report the event. I'm going to throw it out there that I would love if Sentry were to say Sentry me...that's where I put something very bad happened, and you should look at it. And they're just like, well, something pretty darn bad happened. We'll log it. Supposedly, my understanding is before_send can be used to filter out like PII or other things like that. And so their failure mode is quiet intentionally. That's my understanding as to maybe why this is true. I wish there were configuration that said, no, please fail as loudly as humanly possible. But that was terrifying. STEPH: Yeah, absolutely. I'm going to piggyback on what you just said for a minute because I was also thinking earlier and related to the sudden spike in our CI builds where I was like, it would be really nice if there's...because I suspect there's one particular change that has caused this to happen. 
I don't know what it is yet, but that's just my suspicion. And it would be great if when that build ran, let's say that build went from an average of 25 minutes and suddenly we have a build that took 35 minutes if TeamCity had alerted us or if something more aggressive had to happen to say like, "Hey, your team..." or maybe it's just in the logs somewhere. Okay, not in the logs somewhere more visible on the build where it's like, "Hey, your build took an extra 10 minutes compared to the average, just letting you know. I don't have a diagnosis for you, but we're just letting you know." So yeah, plus-one to getting those types of alerts out to people and notifying us when there's an average that's not being met or when things aren't getting logged like you'd expect them to. CHRIS: As part of what we were doing in the logs...like how to get to that anomaly detection place is a really interesting question in my mind. And this is a case where we were in the logs, and we wanted to instrument more things. So we have a bunch of stuff right now that goes in. It's either a warn or error log level. And the error should be pretty rare because, ideally, those are going to Sentry instead, but we still want to keep an eye on them. But we introduced a new search within log entries, which is what we're using for logging aggregation and searching. And the idea was to group all warn-level messages and to group it on the message string. So ideally, what this allows us to do is say, "Oh, we've seen 200 instances in the past two days of this new warning that we didn't see before." The difficulty is, as a human, I would see unhandled error blah as one bucket of warning, or I might want to see it that way. I might want to group it on part of the message. So it becomes really hard to find the signal in the noise on these, but at least it was a start. We now have this little graph for both warning and error-level log messages that we can see are there any new anomalies that are occurring pretty regularly? But this, again, was just this weird edge case where we were lucky to catch it. But it was very scary that it was just throwing stuff away. So the universe might have been true that our error log did get a little quiet for a little while, which was nice, but it wasn't 100%. It wasn't like we were at 10 hours, an hour, and then we went to zero. It was like some, and then we went to a lower number because we were still getting some. We were only filtering out certain ones. But yeah, it's how do you know at runtime that the system is doing the thing? This is increasingly the question that I have in my mind. But yeah, so that was the thing. We fixed it. It's fixed now. I also set up an alert in log entries to say, "If you ever see this particular phrase again unhandled or unreported," then please tell me about that post-haste. So we've got that now. STEPH: That's perfect. That's what I was about to ask us if there's a way that you could add a filter or add a warning for that anomaly detection. So that sounds great. CHRIS: I've got that now because this became a known-unknown, but there are still the unknown-unknowns, and there are so many of them. And I can't know them is my understanding of how they work. I would love to know them. I would love to pin them down and be like, "Hey, what are you doing here?" Someday maybe. But anyway, that was the thing in my world. [laughs] It was fun. It was a great little time. What else is up in your world? 
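For context, the before_send hook Chris refers to is configured on the Ruby Sentry SDK roughly like this. It's only a sketch: the chrome-extension check, the environment variable, and the defensive rescue are illustrative guesses, not the app's actual filter.

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  # Filter known-noisy errors client-side so they are never sent to Sentry.
  config.before_send = lambda do |event, hint|
    begin
      exception = hint[:exception]
      # Returning nil drops the event; returning the event reports it as usual.
      return nil if exception && exception.message.to_s.include?("chrome-extension")

      event
    rescue StandardError
      # If the filter itself raises, the SDK quietly skips reporting (the
      # "Unreported event" log line), so fall back to sending the event instead.
      event
    end
  end
end
```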
STEPH: I feel like you can always judge the level of fun based on how high someone's voice goes. No, it was fun. It was great. It was fun. [laughter] CHRIS: I believe that is an accurate assessment, yes. STEPH: I've caught myself doing that. I'm like, my voice is extra high, so I don't think I really mean that when I'm using the word fun. [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. I do have a small update that I can share regarding the work that we're doing to be able to scale horizontally. So we want to be able to add more machines quickly and easily so we can then process more RSpec tests. And we have discovered with TeamCity that we're pushing forward on that particular path because they have something called a composite build. And with a composite build, it's essentially your parent or your supervisor build. And then, from there, you can create other subsequent builds. So we can then say, all right, let's have multiple builds that then run the RSpec test, and then we can separate in that way. And right now, we're going about it in the hacky way because we just want a proof of concept. So we are saying specifically in this particular step, we want you to run spec models. And in this other process, we want you to run these particular tests just because we want to see how this works. And so far, the aggregation seemed great. So when you look at that composite parent build, it's showing you how each of those builds are doing. It's also reporting back the failures. It's even de-duping them. Because initially, we set it up where we were running the full test suite in parallel on both of these builds, [laughs] not what we wanted, fixed that. But it did highlight that it was de-duping the test failures. So that part was nice. So the UI seems great and seems quite very capable of doing this. Composite build seems to be the way that we can do this with TeamCity. But we're still diving into actually getting the metrics like, okay, how much is this actually going to speed us up? And what does this look like if we want to be able to scale up to say from 5 to 10 where we went from 5 machines to 10 machines? And that part doesn't feel graceful because then you have to go in and change the configuration and copy the configuration to then add a new build that then is going to process RSpec test. So other services like Buildkite make it very easy. I can't remember if it's like literally a slider or if it's a number that you enter. But you can say, "This is how many processes that I want to run," in which it would be a lot nicer for that actual scaling. 
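As a point of comparison, the Buildkite model Steph alludes to really is roughly a one-line setting. A sketch, with placeholder label and command:

```yaml
# .buildkite/pipeline.yml
steps:
  - label: "rspec"
    command: "bundle exec rspec"
    parallelism: 10
    # Buildkite starts ten identical jobs and sets BUILDKITE_PARALLEL_JOB and
    # BUILDKITE_PARALLEL_JOB_COUNT so each job can pick its slice of the suite.
```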
Versus TeamCity, it feels far more manual and intentional where you then have to duplicate and add those settings. But it's a really good first step because, as we'd highlighted before, there's a lot of risk in moving over from an existing infrastructure to something totally new. So if we can have some wins with this approach and help out the team and reduce build time, then that gives us more grace period. So then we can assess, okay, do we really want to move over to Buildkite? What do we want to do next? What does this look like? And have further discussions. So that's a small update there. Next time I should have some more updates around actual data on how things are looking. CHRIS: Oh, cool. Yeah, I appreciate the update and definitely interested to hear how this continues to play out. This is a large project that you're undertaking and all the facets and whatnot, so yeah, super interested to hear the continued journey of the test build time reduction. Let's see, other news in my world. I've been exploring something that I'm intrigued by the idea. Let's go with that. [chuckles] That's going to be my start. I always start with these lead-ins that build things up too much. But I am finding a small tension in trying to just keep up with what the team is doing, which is a wonderful place to be. Our team is growing. We actually have someone new joining tomorrow, very exciting. But I'm trying to find the right version of I don't want to block things. I don't want all code review to have to go through me. But I do want to keep an eye on everything. I want to kind of know what we're doing collectively. And ideally, mostly, that's me being like, yep, that makes sense. We're doing that. I remember that, cool. Wait, what's this? And rarely, occasionally, there'll be a point where I'm like, oh, I want to intervene here. I want to have a conversation. I want to rethink how we're building this. And so it's moving from a place of any sort of blocking synchronous review or the necessity for that to ad hoc post-review sort of thing. And so the way that I'm trying to poke around with this, of course, I'm writing some code to do it because of me. So the two systems that we're using that seem most of interest are GitHub and Trello. And so it turns out GitHub has a wonderful search, and I can create a search that is parameterized like create a URL that jumps into a parameterized search saying, "Show me everything that was merged in the past X amount of time, " so I can say the past two days because I haven't checked it in two days. So I'll see all of the PRs that were merged, and some of them I'll have already reviewed. So I maybe could even filter further there. But for anything that I haven't seen, I'm like, oh, what was this? What was that? What was this other change? Similarly, on Trello, there's a way via the API to get all of the card update actions. And then I can filter down to say whenever a card was moved, which in our system that means...we're doing Kanban-style, so a card being moved from this column to that column that tells me that someone is progressing forward with some work. And then I can further filter down because, again, I don't really want to be blocking on this. I'm most interested in what have we done or completed in the most recent timeframe. And thus far, it's an interesting data set. 
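For the curious, the two lookups Chris describes can be approximated with a short script. The org name, board id, and credentials below are placeholders, and hitting the GitHub search API unauthenticated is only for brevity; this is a sketch of the idea, not his actual tooling.

```ruby
require "net/http"
require "json"
require "date"

since = (Date.today - 2).iso8601

# GitHub: pull requests merged in the last two days for an organization.
github_query = "is:pr is:merged org:example-org merged:>=#{since}"
github_uri = URI("https://api.github.com/search/issues?q=#{URI.encode_www_form_component(github_query)}")
merged = JSON.parse(Net::HTTP.get(github_uri)).fetch("items", [])
merged.each { |pr| puts "merged: #{pr["title"]} #{pr["html_url"]}" }

# Trello: "card moved between lists" actions on a board.
trello_uri = URI("https://api.trello.com/1/boards/BOARD_ID/actions")
trello_uri.query = URI.encode_www_form(
  filter: "updateCard:idList",
  since: since,
  key: ENV["TRELLO_KEY"],
  token: ENV["TRELLO_TOKEN"]
)
moves = JSON.parse(Net::HTTP.get(trello_uri))
moves.each { |a| puts "moved: #{a.dig("data", "card", "name")} -> #{a.dig("data", "listAfter", "name")}" }
```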
And it's an interesting way to switch the problem around such that I'm not feeling...there was FOMO or organizational FOMO is perhaps how I would describe it of like, I want to try and keep an eye on stuff and make sure I'm responsive. But I'm now blocking, so I have to step away. But now I'm worried that I'm missing things. And so I'm trying to find that good middle spot. And this feels like an interesting exploration of that. STEPH: I'm intrigued when you mentioned the card moving over, so then you can tell things are progressing. And then you're answering the question of what did we do in this particular chunk of time? When you move stuff over, is there a clear sweep of we have finished this sprint, and then you have the date of that sprint at the top, and so then you essentially have a column that represents all the work that was done in that sprint? Is that an approach that you're using? Because that's the one that immediately came to mind for me when you're wondering what was accomplished during this week or two-week period? CHRIS: Interesting question. So we're not really doing sprints, or there are no real iterations. We're doing more of the I think Kanban is the way to describe it. But basically, we have a prioritized next up column. And then every day, I can say continuously, the work has the same shape, which is pick up the next most important thing, work on it, move it through the various columns. I did introduce in Trello just the idea of, like, here's a month, so we can see by month what we're doing, but that's too low granularity in my mind. I want to review it a month at a time. The whole point of this in my mind is to see stuff as it's happening vaguely in real-time but not requiring me to constantly be monitoring everything. So it gives me an opportunity at the end of the day to be like, what happened today? What do we do? But yeah, so there's no real sprint that I would couple this to because we're not really doing sprints. STEPH: Got it. Yeah, that gives me more context. I understand why you're then looking for ways as to how to answer that question of, like, what did we accomplish in this week or a particular time period? CHRIS: And to name it, this is not an intention on my part to be like, I need to control everything. I need to make all the decisions. I very much want to empower the team. And in my mind, this is actually a mechanism to empower the team. I want to give them more freedom and then have the opportunity occasionally to check back in and be like, oh, actually, there was some context that was missing here the way we did this. Let's actually unwind that, do it this other way for these reasons. But it gives me the ability to potentially have that conversation after the fact. We're trying very hard to have the tickets be as representative and complete, and well documented as possible. But that's very difficult to get to. And there are also things that I don't even know to mention. Again, I think the critical bit is this is not an attempt to make sure everything aligns with what I think; it's more I want to empower the team to move without me most of the time. And then, where there are things that potentially should have a small conversation or a redirection, then we have the ability to do that. And so, I'm trying to build that back into my workflow while basically loosening up my connection to the work in progress at any given point in time. STEPH: So you just touched on a topic that's really interesting to me or a particular space. 
You're doing a very kind thing where you want tickets to have lots of context so that people feel confident when they're picking up what's the action item to be done. And for someone that's new, that's incredibly helpful, and I think more important since they are new to that world. But in general, my spicy take of the moment is going to be as developers; that's part of our job. If we notice that context is missing or if we're not clear about the action item, is to think through what is it that I'm missing? Who do I reach out to? Who can I go to for help? How can I scope this work? All of that, to me, is very much part of our role. And the idea that tickets always have to be perfectly curated, which I don't think you're saying, but you're just trying to be extra helpful. But if someone were to have that expectation, I think that expectation is wrong. And I do think it is part of our work that then we help make sure that tickets are well-scoped and well-defined and have those conversations with the people creating the tickets or creating them ourselves. CHRIS: I love the clarification there, and I'm definitely in agreement with you. I don't know how picante of a take it is. I would be intrigued. Listeners, let us know. Are we breaking your mold of what things should be? But I do like the idea that it is a conversation so back and forth. And so the idea that as developers, there should just be this very clear list of things to do and you just kind of pick up a card and heads down, just get it done, I don't think that should be the mold. But I do think; ideally, the why is the most important thing that I think should be in a card. So ideally, a card should have little in terms of technical implementation notes and should have more in terms of here's the goal that we're going for, here's the problem, or here's the thing that we're trying to solve. And then maybe a suggestion of like, I think it could be an X, Y, and Z, but I'm not sure. Or we want to be able to send transactional emails, but I don't know any more than that. Our goal is to engage users. Like that last sentence, that last little bit of our goal is to engage users is a critical, critical data point, versus our goal is to solve for a regulatory and compliance issue. It's like, well, those are different. And they will lead to different solutions and different implementations and all that. So yeah, I definitely share the idea that cards don't need to be perfectly specified. And if anything, I think I'm closer to that than it probably sounded like I was. But for that reason, it's totally possible in my mind, that work will be done in a way that after the fact, I'm like, "Oh, sorry, there was a misunderstanding here. Let's revisit this work." And so, my goal is to try and stay connected and have a feedback mechanism at the end of the process. So when the work is done, be able to spot-check it rather than trying to have to watch it as it's happening or proactively define everything in excruciating detail such that exactly the right things happen all the time. So I'm moving to a place of ask forgiveness, not permission. That's the wrong analogy here. But that idea of like, we can clean it up after the fact, that's fine. And we don't need to try and prevent any sort of things, or at least that's what I'm exploring. STEPH: Yeah, I love that you highlighted having the why. 
I adore that when that's on a card just because I then I want to know the goal because then that's going to help me ask questions and think about scoping versus if it's like a very specific implementation, then I feel so narrowly scoped that I don't feel as confident that I can be like, okay, I know why I'm doing this versus I just feel very directed to do a thing, and that's incredibly helpful. I have also felt the pain that you're mentioning where it does feel like a ticket has all of the work clearly defined, and the goals, and the whys, and it can have everything there, but just something gets lost in the communication. And so someone implements something in a way that is how they interpreted the work versus it's not actually what the ticket or what the goal of the work was to be done. So I appreciate that where you are looking for ways to tweak things to make sure that whoever is picking up that ticket will have the same interpretation that the author intended for them to have. And then if that does happen, and things get misaligned, then you chat and figure out ways to improve it. I think that's the point that I was really thinking about, and my air quotes, "hot take," is that as developers, a big part of our job is communication, and then also sharing the knowledge that we have with other people. And so if someone is expecting that they can just always pick up work and never talk to someone, I don't know, maybe you're in the wrong business. [laughs] That's my hot take. CHRIS: I, for one, like the hot take. It is nice and ever so slightly spicy. STEPH: Thanks. Yeah, I just think communication is incredibly important. Earlier, you mentioned, I don't think we were on mic at the moment, but you mentioned something about a new Git alias. And I am very intrigued on hearing about what you've added, what it does, all the details. CHRIS: All the details, that's probably too many, but some of the details I can certainly provide. So I have two new Git aliases; one is Git gone, which is probably the heart of the whole thing. And so the background of this is I found myself pushing the green merge button on GitHub more. We've introduced some branch protection stuff, which I've talked about in previous episodes. And I dream of the day that one of my good, good friends at GitHub will give me access to the merge queue beta. Please, please, I implore thee. But in the interim, still clicking the green merge button more often than not. STEPH: Wait. I have to ask to help you in this dream. Are you forwarding these episodes to someone? You can just take a clip of you saying, "Please, please, please give me access," [chuckles] and just forwarding that or mentioning someone at GitHub or GitHub in general. CHRIS: Just leaving voicemails for people with a Bike Shed section of me begging for access to the merge queue beta? STEPH: Yeah. [laughs] CHRIS: No, I'm not. But maybe I need to up my game. You're right. [laughs] Someday, I'll get there. And that will only exacerbate this issue that I'm feeling, which is again, I'm clicking the merge button. That's what's happening. And as a result, that means my local branch is now like it's done its job. You've served me well. And in the Marie Kondo sense, I need to hold you up, thank you for your service, and then let you go. But I obviously wanted to automate that. So Git gone does that automation, and it was fun. 
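Before Chris walks through the pieces, here is a guess at what the two aliases might look like in a ~/.gitconfig, assembled from his description and the linked blog post; it is not his actual configuration, and the exact format string and flags are assumptions.

```ini
[alias]
    # Delete local branches whose upstream has been deleted ("gone").
    # (-r: don't run git branch -D at all when nothing is gone.)
    gone = "!git fetch -p && git for-each-ref --format '%(refname:short) %(upstream:track)' refs/heads | awk '$2 == \"[gone]\" {print $1}' | xargs -r git branch -D"
    # Hop back to main, update it, then clean up merged branches.
    down = "!git checkout main && git pull && git gone"
```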
So I found a blog post, which we'll include in the show notes, that had most of the pieces here, but it was still fun to play with the shell pipeline in a way that I hadn't in a while. So it does a git fetch and then a git for-each-ref with a particular structured format that references the upstream of the branch, then uses awk to search for the word gone. Because Git, if you print it out in this particular way using this format, it will say the local branch name and then the upstream. But if you've deleted the upstream, it will specifically say (gone) in brackets, so you can actually use that to filter them down. And then I pipe that to git branch -D...well, to xargs git branch -D, of course. I love a little shell pipeline. As an aside, these are fun little things to build up. So that is Git gone. And then the other one that I have is Git down, which is what I use more. And Git down works on top of Git gone, so it's git checkout main && git pull && git gone. But that means I get to type Git down into my terminal whenever a branch happens to get merged in the upstream land. [laughs] STEPH: [laughs] Oh, that's adorable. I love it. I like the Git gone, and yeah, I like the Git down just for fun. You are inspiring me where I now really want a Git bless your heart that's like maybe a Git blame or a Git revert. [laughs] CHRIS: I've definitely seen people do Git praise as an alias for Git blame. STEPH: That's nice. CHRIS: But Git bless your heart is...ooh, I love that. STEPH: [laughs] I might have to add that just so I can type it, and then someone can say, "What are you doing?" [laughs] Cool, I love it. CHRIS: Little things, little fun bits to add to your day and to automate and have a little fun while you're at it. So that's where I'm at. STEPH: All about the communication and fun. That's what I'm here for and the singing. Let's not forget the singing. CHRIS: And the singing, of course. STEPH: [singing] On that note, shall we wrap up? CHRIS: Let's shall. Oh. STEPH: [laughs] CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Bye. ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph is excited to be headed on a retreat with her mom in the mountains, but before that, she details how she helped troubleshoot a production issue with her team and appreciated their process. She's also looking into tooling around spinning up more machines to process more RSpec tests. Chris had a developer start their new job at Sagewell and highlights how they involved the new person in rectifying potentially missing and/or confusing existing documentation. He also has a gripe, and that is accounts. Handling too many accounts. Additionally, he talks about triaging an error and how it was tough initially to understand if something was actually broken. And then it was even harder to understand what was broken. So he paired through it and used the power of putting two heads together. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris, I am going on vacation next week, and I am so excited about that. It's going to be pretty much a week long. It's like a Tuesday through Friday ordeal. And it's a trip that I'm taking with my mom. So over the past year, she's gotten super serious about her health and nutrition and done a phenomenal job of being very focused on a plant-based diet, which is basically healthy vegan food is what that comes down to. So there is a retreat that's taking place in the North Carolina Mountains that she's really excited about. I'm going to go with her. We're going to do lots of cooking, and hiking, and hanging out in the mountains, and it's going to be lovely. CHRIS: Well, that does sound lovely. STEPH: Yeah, it seems like a really perfect time to disconnect just because you're headed into the mountains. So all you should take with you are books and things that are not iPhones, and tablets, and computers, and screens. So I'm looking forward to that, just to be away from screens for the week. On some more technical news, this past week, I helped troubleshoot a production issue, which was a bit novel for me because the work that Joël and I are doing with our current project it's all in the testing realm. And so it was probably around 10:00 o'clock at night my time, and I got a ping on Slack. And it looked like I was getting called in for a production issue. And I was like, I have touched zero production code. [laughs] So I'm very intrigued how I could have broken production at this point. And so I looked into it, and it turned out that it wasn't necessarily related to a commit that I had authored, but it was for a commit that I had reviewed and then approved. And so their strategy is they create a new channel. They'd gotten a ticket that an error was occurring. And then the site reliability team created a new Slack channel, and then they pinged everybody who either authored, reviewed, and approved that change to be like, hey, we think the issue is related to this commit. Our plan is we'd like to roll it back. 
But before we do, we just want to check in with folks who have more knowledge to help us confirm that, yes, this error message seems related. And I really liked that approach. I really like the idea that it's not just the person who merged the commit that then gets pinged on it, but it's like everybody else who happened to look at this and review it come help us too. So we spent some time looking into it, confirmed that yes, indeed, it was related to that particular commit. And then their team did the wonderful thing of then rolling it back. So then, it was no longer an escalated issue. And so then I asked, "What else can I do to help?" And they said, "Well, from here, it's no longer a production issue. So tomorrow, just follow up with the author and let them know and issue a fix for the bug, and then merge it like normal." So we're back in that normal pull-request flow, very calm. And overall, I just appreciated their process. I like very much how they pulled more people in because I think some of the other people that were involved weren't online, which makes sense because it was really late. So that way, you just spread in case some other people really aren't available that then hopefully you'll get lucky and one of those three or four people are available to help you troubleshoot. CHRIS: That does sound like a really nice and thoughtful and intentional bug response, communication, procedure, rollback, et cetera. All of that sounds like it worked very well and is nice to have. And it's the sort of thing that a larger organization ideally gets to, having these sorts of processes. Spoiler alert, later in the episode, I will talk about the other side of it of being a very young organization and trying to be like, wait, is this a bug? Is this not a bug? Should we roll back? What do we do? That's actually my topic de jour. But what you're describing sounds like the calm even in the case that there is a fire sort of like, yep, we've got procedures. We have workflows. We have communication channels and ways that even the exceptional things can be handled in an ideally as calm as possible way. So that's awesome that that's what you got to experience there. STEPH: Yeah, getting called in at 10:00 o'clock is never fun for anybody. But when it happens, because it's going to happen, then I appreciate the thoughtfulness and that process that they put behind it. So it all went fairly smoothly. And it was also one of those fun things where I haven't met...like this is a very big organization, so I hadn't met any of those people. So when I got pinged on it, and then I hopped in, I was like, hi, I don't know anything about this process and what y'all are doing, but I am here. I'm here to help. Where can I look? What can I do? So it was also a fun endeavor in that regard to just be like, I don't know what I'm doing, but I am here to help. Please let me know how I can help. And it ended up working pretty well. So yeah, that's been a fun adventure for this week. How about you? What's new in your world? CHRIS: What is new in my world? Well, we had a developer start this week, which has been really wonderful. Unfortunately, we had scheduled their first day to be Monday, which was Presidents' Day, and that's a holiday. So we got out in front of that one and figured it out. We're like, no, no, actually, feel free to start on Tuesday. We'll not be around on Monday, so you shouldn't be around on Monday. But then, on Tuesday, they started. 
And we intentionally structured things such that we have a contractor that has been working with us for like seven or eight months now. So it's been a long time and been very formative as well the work with that contractor. So this is their last week, and thus, we very purposefully brought the new person on the team and that contractor together to maximize the amount of pairing and overlap that we have there just to try and as intentionally as possible grab whatever is in their head, get another point of view. Because this new individual on the team will be able to work with myself and the other full-time developer on the team a bunch moving forward, so we want to maximize their overlap with the person who is on their way out. But otherwise, it's been great. We're a young organization, so the version of onboarding it's me running around setting up a lot of accounts, forgetting to set up other ones, getting pings in Slack, and then following up and setting up another account. Eventually, I hope that there are checklists and formalizations and, ideally, one-click SSO magic that makes all of that work. But for now, I'm happy to chase it down. But really, we're just leveraging pairing as much as possible as the onboarding tool to make sure that where we don't have formalization, procedures, documentation, et cetera, as thoroughly built out as I would love to be at, we can shore that up with some time with other humans. STEPH: That's awesome. It's always fun having someone new to join to highlight all the things you need to automate or at least have a checklist for to then help them onboard. But that's really exciting that you've got a new teammate. CHRIS: Yeah, definitely very exciting. And they've been great. They've hit the ground running and a couple of pull requests already and just contributing very effectively within their first couple of days. So that's always wonderful to see. We are definitely taking this moment to document what is undocumented or update the README where it needs to be and start to make that checklist. We have another person who will be starting in about two weeks' time. And so, ideally, that will be even a little bit more fleshed out of a process. So slowly, incrementally get a little bit better with each we add that we get there. STEPH: How much do you involve the new person in creating that documentation? Is that something that you ask them to help build, or is it something you take ownership of? What's that balance? CHRIS: It's interesting. So definitely some I want to be with that person because I think it can often be the easy first PR is an update to the README for like, oh, I tried to set up the app, and it did not work. For this reason, I have now updated the README, and now there's a pull request. And we get to experience that flow via the very low-stakes change of updating the README. So that's a definite one that I like to have. The other is I'll typically ask for the individual to capture as much as possible. There's a very delicate line in my mind between empowering them and being like, yes, absolutely. We're young. We don't have everything documented. So feel free to make changes where that makes sense to you. But at the same time, I know that joining a new team can be complicated, can be intimidating in certain ways. You're not sure what's okay to change? What's not okay to change? That sort of thing. So I simultaneously don't want to put the pressure on someone to be like, "Yeah, no, change anything you want. Literally, nothing is stable here. 
Nothing's glued to the ground. So feel free to pick up anything and throw it out the window." That feels too far in my mind. So I don't have an actual answer like, I'm ideally calibrated at this point. But it's sort of those two tensions that I'm holding in mind as I think about that. STEPH: Well, I really like your answer. I like that balance because I think it's really nice to include the person in those changes and also just because they're going through it. So they happen to have that insight, and it's fresh. But I agree, when you're joining a job, you want some stability and confidence that the people that you are joining that team with are also working hard to make it a very positive onboarding experience. And if you just were to push all of that responsibility on to them to be like, "Yeah, we know. We don't have this organized yet. So you tell us everything that we need to do," that would feel unkind to that new person. I think as a new person that I wouldn't fully enjoy that. I don't mind some of it, but I wouldn't want all of it. I'd have nervousness around ownership, around improving processes, and who that belongs with. CHRIS: Sort of a classic case of it depends, or it's a little from Column A, a little from Column B, but definitely some, just hopefully not too much. STEPH: The Goldilocks of onboarding, some onboarding responsibilities, but not all of them, just the right amount. [laughs] CHRIS: Shifting gears slightly, though, I just want to gripe for a minute. I'm just going to gripe. This is not my normal mode, but I'm going to lean into it. STEPH: Do it. CHRIS: Accounts, just accounts. I have so many accounts now. There are so many across different systems, and I'm trying to do the good thing, which is let's stop using personal accounts for anything and only use organizational accounts for the things that are for work. And some organizations do a great job with this. GitHub, I'm looking at you; really well done, super happy with the way that you folks have implemented accounts. You get that I am one human being that contains multitudes. I am my personal self; I am my work self. I am maybe even another version of work, and you get that. And you usually let me exist as all of those versions of myself and, man, do I appreciate that. Heroku, you're okay. Like, it's all right. You treat the different facets of me as different accounts, but that's okay. You make it relatively easy to switch between. Although you do make me two-factor auth and re-login every single day, and I don't love that. So I don't know what's going on there, but fine. Trello, aka Atlassian, I guess at this point, come on, what are we doing? What's going on here? So originally, I had started, and I had the one Trello account, and I had my personal boards. And then there was the Sagewell organizational account. And within that, there were some boards, and I would just bounce back and forth. But I realized, no, I need to do the right thing. So I created a new Trello account. And now Atlassian just forces me to switch between them, and it loses the link that I'm going to often. It's a different login interstitial screen. And it constantly shows me that like, hey, you don't have access to this. Do you want to switch accounts? And I say yes. And then they take me to a screen where I can pick between two options, the one that I was that didn't have the ability to do it and another. And as a developer, I know that the thing I'm about to say is not fair. But come on, folks, you could know the answer to this question. 
There are two, and one is the wrong answer, so the other one is probably the right answer. You don't need to autolog me into that; I get it. Just emphasize it because they almost look identical on the list. I have now accidentally tried to request access with my secondary account to my other account, and I can't get out of that state. So now, one of the ways that I try and do this it shows me a list of them to pick. The other it says, "You have requested access. We're waiting to hear back." And I'm like, no. So anyway, that's a thing. STEPH: So I know people can't see me. [laughs] So I'll narrate that I'm dying over here because I very much appreciate that we are positive people. We are very focused on bringing positive energy, but the descent into the amount of shade that you're throwing at different applications [laughter] just really made my day, and I feel that pain. I have felt that pain with Atlassian and can relate. And we should have some gripe sessions. This feels healthy. This feels very...okay, well, I don't know for you. I'm the one that's laughing and getting joy out of this. I don't know if it's helpful for you, but it feels very cathartic to me. [laughs] CHRIS: It is definitely somewhat cathartic. I think there's utility in having these sorts of conversations. And throwing shade at Atlassian, whatever, they're doing fine, so I'm not super worried about it. But generally, we try and keep things positive because I think that's, frankly, a more effective way to communicate. But occasionally, it is useful to look at the things where I'm like; that is a pattern that I do not want to repeat. And I'm sure that there are complex organizational enterprise-y reasons that it has to be this way. But I can look at that and say never that. That experience as a user is like, wow, yeah, I just tripped over nine layers of your enterprise there just trying to do very simple day-to-day things for myself. So I want to avoid that. I've griped about that one login, not the company OneLogin. But that one login page that I've experienced where I start to interact with the form, and suddenly some JWT handshake in the background happens, and I'm now logged in. And it just rips the page out from underneath me. That is unacceptable. That is not okay. And I really do think there's something worth occasionally looking at those and being like, well, not that. But anyway, I should probably stop my gripe session now. STEPH: [laughs] Well, if I may join in, I have one that I'd like to share. Since we're on this -- CHRIS: Throw it on the pile. What else we got? [laughs] STEPH: [laughs] So there was some code. There was a piece of code that I was looking at that was very not friendly. It was difficult to understand. It took a while to parse through what are they actually doing? What records are they creating? Why did they choose this manner? Why are we iterating over these particular numbers? What's the outcome here? And I was pairing with Joël and was going back and forth having a conversation trying to be the detectives of why this code exists, and we finally got there. And we finally understood what it's doing and why. And I just lost it for a minute once we finally got there. [laughs] I just thought the way this code is written, it does not improve readability, and it doesn't improve performance. All it did was make my life harder because it was very difficult to read. 
So all they did was become really clever with the code that they were writing and essentially drying it up, which I have such a beef with DRY because it has caused me pain. And so they essentially were drying up their code or introducing a way to make it just take up fewer lines that took up less vertical space. But overall, I was very grumpy about it. And Joël was very kind about it and was like, "Well, this is the type of code I could see maybe why they did this." But you're right; it doesn't help with readability and performance. And he was helping balance out my grumpy goose moment. I've been having a lot this week; maybe it's just the week I'm in. I'm in more of a fiery mode this week [laughs] with some of the code that I'm seeing, and that was one of them. That was the please, please, please don't DRY up your code. If it doesn't improve readability or performance, there's just no need. There is no benefit. CHRIS: Well, I definitely know that feeling. And I think I've probably, as a developer, gone through that arc where early on I was just trying to make stuff work, and then I learned how to be clever. And suddenly, being clever became a game that I could play. And then, pretty early on, I realized I would come back to my own code from two weeks ago and be like, what the heck does this do? I have no idea. And that's when I was drawn to Ruby. That was one of the things. I'm like, oh, I can write code that looks so much like the clear words that I have in my head about the thing. I like that. And so much of my career has been spent in the let's make it obvious and revisitable. I actually remember very clearly early on in my time at thoughtbot, I was working on something and was working on it with Joe Ferris, who is the CTO of thoughtbot and a very clever individual, and I mean that in the truly positive sense of the term, one of the most capable engineers I've ever worked with. He was describing an anecdote, but it was basically he'd put up a pull request. And someone replied, "Oh, that's clever." And Joe's reaction was, "Oh, crap." Just taking that as not an insult but as someone saying, oh, that's clever in a positive way, and Joe hearing that in the negative form of I went too far here, or this is not obvious in its initial interpretation. That really stuck in my head from there, just his reaction to it immediately of that being not a good thing. And I was like, that is interesting. And all the more so over time, I've come to believe that clever is probably something to avoid in code. STEPH: Yeah, agreed. I'm at the point that if I do see someone who's done something that I do think is clever in a positive way, I will still abstain from using that word clever because I do want to make sure they don't think that I'm saying in a bad way that this is clever, that it's not readable, and it's not friendly. So I totally avoid that word when I'm complimenting someone's code just to make sure there's no confusion. CHRIS: It's one of those words that got away from us that we lost the definition of, and then we came back, yeah. Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. 
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Let's see. In other news, you had mentioned this earlier, and then I had mentioned my side of it but errors in alerting and all of those sorts of things. They're an interesting question. We had a small situation over the weekend that turned out to be kind of real, kind of not real. But I happened to be away on vacation. I did have my computer with me because, at this point, we're early enough. And I'm like, I'm going to take my computer everywhere and just be ready in case it's necessary. And in this case, I did get a ping. I looked into it and what was unfortunate is it wasn't immediately obvious if something was broken or not. And to a certain degree, that's always going to be kind of true. There's so much noise, so many requests hitting a web application. And how do you tell the good ones from the bad ones? And ideally, I could threshold around certain volumes of traffic, but even that's going to have spikes, and ebbs and flows and things like that. So it was very hard initially to understand is something actually broken? And then all the more so to understand what was broken. Thankfully, it was tractable. It was solvable. And we've done, I think, some good work especially considering how early on we are and how we've instrumented things in Sentry, in particular, our usage of Sentry and also somewhat in the logs. But again, I think I've talked about this before, but I'm feeling this tension around there's data. There's data just kind of like, what happened? And right now, we've got logs. That's one of the places that goes to Sentry if it gets escalated up to that level. And we sort of have a weird Venn diagram between logs and Sentry. And then we also have analytics as another thing and then eventually data science, and what do we want to try and learn? And all of these kinds of want different facets of it's not the same data set. But I wonder, is there a superset of data that then we could filter and slice and cut up, do all those sorts of things? I think this is the dream of Honeycomb and platforms like that, but I'm not even certain if that's true. And so I'm in that awkward middle space is how I would describe it. But in that particular case, I was able to resolve it. I did take away as an action it's probably time to start thinking about PagerDuty anomaly detection, that sort of thing. When does alerting happen? When do engineers actually get calls when not just during the normal nine-to-five of the workday? So I'll be investigating that in the coming weeks and see where we get to. But it's sort of the first thing that really pushed us in that direction. The other thing I'll say is we have the idea of the point dev, which I've talked about on a couple of episodes. But the idea is for each week, one individual on the engineering team is in charge of the noise, for lack of a better term. They're looking at the error stream in Sentry. They're looking at any ad hoc requests that are coming from our admin team, et cetera, et cetera. And that's been really great. 
But one thing that I've noticed is that dealing with the errors is particularly tricky and what we did in this particular case was just to pair on that. As an individual, it is really hard to sometimes to reproduce, sometimes to just understand these are the things you didn't expect in your code, and therefore they are, by definition, harder to understand, harder to think about. And then sometimes you get to an understanding. You're like, ah, what do we do about that? Do we care? Do we not care? Is this just noise? Is this something we should solve? Is it something we should solve soon? Or is this something we can solve whenever we get to it in the backlog? And making that sort of determination is all the harder. And so I'm increasingly of the mind that there should be some amount of time that is pairing on that error backlog to bring two heads together. I hadn't been thinking of it this way, but I've now come around to thinking this is a really great place for pairing because it's so hard for one individual to deal with that complexity to make the hard value judgments. And to do that, if each individual does that in a vacuum, then we have n different value systems at play that are hopefully very similar. But if we start to pair up, then there's osmosis between those groupings. And ideally, we sort of coalesce towards a shared value structure around, like, what can we ignore? What should we snooze for a week? What should we put in the backlog? What should we prioritize and fix immediately? Because I think those are really hard things to otherwise...that's really hard to document, I would say. I would love to write up a page in the Wiki that says, "This is how you treat errors," except each error is a unique snowflake, and you just have to follow your values. STEPH: I have been on teams where we've written up documentation that helps you triage an error because you're right; you can't write documentation around a specific error. But that I always found really helpful where it was like, here's all the links that you can look at, here are some recommendations. When we were working on an application that was falling over more often, there were some specific outlines around if you see this problem, then this is typically how you can solve it. And then we had to fix that at a larger scale, but it was a nice band-aid to get us through at that point. I like the idea of pairing, especially as you mentioned; it's tricky. It's funny when you mentioned capturing those errors and putting them into the backlog because I like that idea that then you can prioritize and bring those into the sprint. It just made me feel a bit hesitant. If we don't work on it now, we're never going to work on it. But then that feels unfair to say because it really comes down to the team. If you have a team that's going to be able to look at those errors and say, "Yes, we're going to bring them in and prioritize them," then that feels really good to then be able to say, "This is an error. Let's capture it. Let's provide some content around it. But it doesn't need to be addressed at this moment. It's still pretty low in terms of risk for users or at least low in impact for users." So yeah, I guess it just depends as long as the team feels good about being able to prioritize errors, which I feel confident that your team would be able to do. And if you can't, then y'all could reassess that plan. CHRIS: That's why we definitely have that. We're revisiting the errors. They're part of the same backlog as everything else. 
So they're coming up in relative priority and getting worked on and getting resolved. But we're also shifting our thinking just a little bit to say, "We should take a little bit more time in the moment to try and resolve some of these where we can." I have the dream of there just being zero bugs ever. But that's hard, especially across different platforms. And we're seeing a lot of mobile traffic from different, older Android versions and so weird JavaScript edge cases and things like that. Like, why does your runtime not have Object? That feels like a thing every JavaScript runtime should have. But that's a joke. Every JavaScript runtime, I'm pretty sure, does have Object, but that sort of thing. It's like, whoa, this is weird and specific to this one device. Cool, those are fun. So yeah, giving a little bit more time to do those. And again, we definitely do have the document that describes here are the places to look and how to think about this category of error and this category of error. But at the end of the day, you get one that's just like, there's not a ton of detail in the error. It's hard to reproduce. It might be device-specific, et cetera. And so what do you do in that moment? And that's where we're trying to...I think pairing is a great way to share that thinking around the team. So overall, it's been great, though. I think everyone who has been involved has been like, "This was better than when I did it on my own," so cool. STEPH: Awesome. That sounds great. CHRIS: Yeah, I think so. This is one of those ever-evolving facets of how we work as a team and how we build the platform. So I will certainly report more in future episodes, but for now, happy with that. And yeah, what else is up in your world? STEPH: Yeah. So we've been looking specifically into tooling around how we're going to spin up more machines to process more RSpec tests. So specifically, we have around 80,000 RSpec tests that we are processing, and we have one machine that is parallelizing those, and that takes a while just for that portion of the build, because then there are other tests and things that get run that bring it up to a total of about 30 minutes. But for the RSpec portion, I think it's probably around 20-ish minutes to process those 80,000 tests. So we split that across four different containers, and then we run those tests. And so we'd really like to spin up more machines to then process them because we've reached the point that we have given as much power to that one machine as possible. So now we're looking to add more machines. And one of those solutions that we're looking at is using Buildkite, which is built with the idea that you can add these build steps so then you can more easily say, "All right, once we get to this particular build step, hey Buildkite, we'd like to run n number of machines to process all these tests." And that seems really nice. And it is something that we are interested in. It is actually what Shopify uses. They use Buildkite with ci-queue, which is built for Minitest, which is what they use, and Redis to then run all of their tests. But we are using TeamCity, so we're not using Buildkite. And we would like to see if we can grow with our current CI infrastructure versus having to move to a new one. There's a lot of just risk involved in moving to a new one. And so we've been studying hard whether TeamCity will let us do this. And so far, the answer has been no.
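[Editor's note: as a rough illustration of what "splitting specs across n machines" usually means mechanically, here is a Ruby sketch of deterministic bucketing by job index. The PARALLEL_JOB_INDEX/PARALLEL_JOB_COUNT names are assumptions (Buildkite, for example, exposes BUILDKITE_PARALLEL_JOB and BUILDKITE_PARALLEL_JOB_COUNT); nothing here reflects the team's actual TeamCity configuration, and real setups typically balance buckets by recorded runtimes rather than by file count.]

```ruby
#!/usr/bin/env ruby
# Deterministic spec bucketing: every agent runs this same script with a different index.
job_index = Integer(ENV.fetch("PARALLEL_JOB_INDEX", "0")) # assumed name, e.g. BUILDKITE_PARALLEL_JOB
job_count = Integer(ENV.fetch("PARALLEL_JOB_COUNT", "1")) # assumed name, e.g. BUILDKITE_PARALLEL_JOB_COUNT

# Sort for a stable order, then take every nth file so each agent gets a disjoint slice.
specs  = Dir.glob("spec/**/*_spec.rb").sort
bucket = specs.select.with_index { |_, i| i % job_count == job_index }

abort("No specs assigned to this bucket") if bucket.empty?
exec("bundle", "exec", "rspec", *bucket)
```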
But just recently, we found somewhere in the docs that it looks like there is a chance that with TeamCity, we can inform TeamCity that, hey, even though we have just this one build step, instead of only giving us one agent or one provisioned machine to then run these tests, instead that we actually want to spin up a couple of machines to then process these and then aggregate the results back to this one step. So we're looking into that. But I wanted to throw this out there in case anybody else is also using TeamCity and has already invested in this particular approach. I would love to hear about it because we are currently figuring out the capabilities and if this is something that we can stay with our current infrastructure or if we're really going to have to look for a new solution. CHRIS: Well, I'm hopeful that someone out there can give you some input. I definitely get the idea that you're stuck, and stuck is maybe too strong of a word. But if TeamCity is not ideal, the idea of moving off it does feel exceedingly heavy and the riskiness that you talked about. That's, I think, a critical word here because I think it's easy to think of CI as like it's a very important thing. But that's absolutely critical as part of your deploy pipeline, I assume. This is speaking generically about CI, and so it is, in fact, a critical piece of the infrastructure. If you've got a bug on production and suddenly CI is down, what do you do? I guess you can test locally and decide you're going to push past it, but then you have to circumvent it. And so I understand the intentional way that you're thinking about that and the risk associated. I do wonder, though, if TeamCity has felt like not the right platform for a while and if there are considerations. Is there the possibility of both trying to improve the world that you have now, so it's not the big move off of it but then also in parallel start to work on an alternative implementation? This is perhaps not entirely fair, but it feels like a Rails application is this repository of code. And typically, CI is configured via a file. And that's like, if you've got your teamcity.yaml or whatever it happens to be, could there also be a buildkite.yaml that is not on the critical path for deploying or anything like that? But it is a way to, frankly, somewhat inefficiently test on two different platforms but start to see if you can get the code moving on a different platform and be able to gradually build out and make that transition possible without it being one big swap over sort of thing, which eventually it would need to be. But just wondering, is that happening in parallel? Is that a possibility? STEPH: I think the short answer is, I'm sure there is. There's a way to look at the existing system and then find ways that we can tweak it. But I also know that the team has already invested a lot into working with the current system and making it as efficient as possible. So I don't know if there's any true big impact but intermediary steps that we can take. We are definitely in that proof of concept world. So we're not going to move anything over for the rest of the team until we can really prove that something is working for a small subset and then start to expand from there. But currently, our idea is to dig further in TeamCity, which I think also includes just a call to their team and say, "Hey, we'd love to talk to one of your engineers and see if the thing that we're trying to do if it's possible. 
Let us know if it's not and if we need to look elsewhere," which is intriguing to me because having a lot of tests isn't new. There are tons of companies that have lots of tests, and they want their CI test suite to be fast. So a company that has built software that helps teams execute these steps should then give us the ability to say, "Hey, I want more machines to process this. I want to give you more money, you give us more machines, and we can process more things." I feel like that should be a thing. And I'm getting at the edges of my knowledge. This is why we're exploring all of this. But it has been surprising to me to realize that that doesn't seem as easy of a thing as I would have expected it to be. There are also some other concerns around here where, for the client that we're working with, if we're going to work with third-party vendors, then we have to get special approval to work with them. It's not just a hey, we can just go try it out. It's a lengthy contract process that we'd have to go through. So there are also some constraints that we have to keep in mind where we can't just work with anyone. We need to be careful to make sure that they're certified in a particular way. So yes, I like your idea. I will definitely keep it in mind. But I don't know if there are any true intermediary steps yet other than building out a proof of concept and then finding small ways that we could move over. Then I think that would be ideal for sure. And then hopefully, if there's anybody listening that has experience with TeamCity or Buildkite, which is the other tool that we're looking at using, let me know. I would love to chat about it and find out your experience. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph joins Chris in trying new things! For her, it's a new email client – the Newton email client – because she really wants to love her inbox. She also talks about implementing a suggestion from Chris on improving CI speed. Chris continues his search for the perfect to-do list app. (It's not going great.) But he has made hiring progress and is excited to move on to the next step: onboarding. Together they answer a question from a listener who asked for advice on crafting project estimates for clients. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Services down? New Relic (https://newrelic.com/bikeshed) offers full stack visibility with 16 different monitoring products in a single platform. Newton (https://newtonhq.com/) Subscribe to Email Newsletters in Feedbin (https://feedbin.com/blog/2016/02/03/subscribe-to-email-newsletters-in-feedbin/) GitHub - Shopify/packwerk (https://github.com/Shopify/packwerk) Sunsama (https://sunsama.com/) TickTick: To-do List, Tasks, Calendar, Reminder (https://ticktick.com/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: I am now recording. STEPH: Me too. CHRIS: [laughs] That's my recording voice. STEPH: [laughs] CHRIS: That's how you can tell. STEPH: I just like how it sounds suspicious where we're like, I'm now recording, so be careful. [laughs] CHRIS: This is now on the record. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hello. Happy, happy Friday. Oh, I have something that I'm excited or intrigued about. I don't know. Okay, I'm hyping it up. [laughs] But I'm realizing I'm also very skeptical of it. CHRIS: This is the best sales pitch I've ever heard. I'm so excited to hear what this is. [laughs] STEPH: I am trying a new email client; it is the Newton email client. And I so want to love my inbox. I want to check on it. I want to help it grow. Okay, that's the opposite. I want to help get through all the emails that come through, but I just want to love it. I want it to be a good space that I want to go to. And I just hate email so much. And it always feels like this chore that it's really hard for me to bring myself to do, but yet it's really important because a lot of good things come through email. So this is my rambly way of saying I'm trying the Newton email client because I saw on Twitter from Andrew Mason, who has very similar feelings to mine about email, where we are just not fans of it. And we rarely check it and have declared email bankruptcy at several points in our lives. And he's also one of the co-hosts for Remote Ruby. But I saw on Twitter that Andrew was talking about the Newton email client and how it actually made him feel that he enjoyed writing and looking through his inbox. And I was like, yeah, that's the sales pitch I need. So I'm giving it a go. It's been only a couple of days. But one of the nice things I have noticed about it is it's very focused, and there's not much noise, and it actually feels like a very minimal design where if you open up a new email, so you're opening up a new draft, there's not much noise there either.
You get to just focus, almost like you're writing a little blog post or journal post or something. It takes away a lot of the noise. While in Gmail, it's going to open up a small window in the right, but then you still have the rest of the noise that feels distracting. So I like that very intentional like, hey, you're just doing one thing, just focus on this. And then also you can integrate other email accounts as well. So you can have one-stop shopping versus Gmail, then you have to click around and sign in, sign out, or visit different email accounts. So we'll see if it helps improve my email life, but that's something new I'm trying. CHRIS: Very interesting. So you're fully on inbox zero life now. That's what I'm hearing. [laughs] STEPH: Ah, hmm. I don't want to lie to you. [laughs] We have a good friendship. I won't start lying now. CHRIS: I appreciate that. So you're halfway to inbox zero. You're not even entertaining that idea, right? This is just you want a better tool to do email. STEPH: Exactly. Inbox zero is not incredibly important to me. But I do want to make sure that I know that I've seen everything important, and I know where to find things. And then making sure that I am responding to people in a timely manner. Those are more my goals. Inbox zero, if that supports it, then great, I'll work on it. But not necessarily that has to be the goal that I reach. CHRIS: Gotcha. I'm not seeing Newton, but I'm intrigued. Particularly on mobile, I have the Gmail mobile app, and that has unified inbox, which I appreciate. But Gmail on the web does not, and I find that odd. And I've never found a mail app that I enjoy because I want some of the features of Gmail. I want to do Gmail snoozing because I still want that to be consistent and whatnot. And to be honest, that's the main way that I get to inbox zero. I just say future me will have more time. I actually tweeted recently. It was a screenshot from my Saturday inbox, which I think was 15 emails that I'd snoozed from the previous week into Saturday morning. Because I'm like, Saturday morning me will have so much time, and energy, and coffee, and it'll be great. And then it became Saturday morning and, ooph, what a view. STEPH: [laughs] Yeah, your snoozing tip has been life-changing for me because that's not something that I was using all that much. The two things are, one, schedule send so that way if I do have a sudden burst of energy and I want to write an email, but I want to make sure that person doesn't get notified until a decent time. Being able to schedule an email and snoozing is amazing. I think Newton and Gmail have pretty much similar features. I was trying to do a comparison. I was like, is there something really snazzy that Newton does that Gmail doesn't already give me? But it looks like they all do about the same, having those important features like snoozing and then also being able to schedule emails. So I think it really just comes down to a lot of the UI, and there may be some other stuff I'm missing since I'm new to it. But that's the main appeal for me right now is the focus and the look and feel of it. So then maybe I will find looking through my inbox a more zenful experience, I think is how I saw them advertise it. CHRIS: Well, I definitely look forward to hearing more as you explore this space. I will say looping back to what you were just commenting on around deferred send, which is definitely something that I use, but you described one of the reasons that I use it. 
So the idea of wanting to be respectful of someone else and not send them an email on Sunday night because you happen to be working at that point. But you don't want to put that on their plate. I would say equal amounts; that's the reason I use scheduled send. And then the other reason that I use scheduled send is please, for the love of God, I do not want another email back in my inbox. So I will reply to something such that now I'm done with that, but I will schedule send it for the next morning. Because tomorrow morning me can deal with whatever reply this generates. There's some adage; I don't know if it's an adage, but the idea that every email that you send generates 1.1 emails in reply. So emails just have this weird way of multiplying. And so if you send one out there, you're probably going to get something back. And so often, if I'm trying to clear my inbox, I don't want to get another email in my inbox at that moment. So I will not actually send the reply. I will schedule it for a future time because I do not want to hear. I want no new inputs at this point. I'm trying to process them. So that's part of why I use deferred send. STEPH: I had not thought of that, that yeah, that if you schedule it for tomorrow, you've really gamified this inbox zero because you're like, yeah, if you send something, then you might get an email back. But you're like, if I wait till tomorrow to send it, then I'm less likely to have another email, and then I've hit inbox zero, and I'm set for the day. I like it. It seems helpful. CHRIS: Yeah, inbox zero sounds like an altruistic thing, but it is not. It's a way to force myself to have to make decisions, which is something that I want to get better at broadly. And that's part of the role that I have now. A lot of what I'm interested in exploring is just getting better at making decisions, being more decisive, being more action-oriented. Because I just have a tendency to make many, many spreadsheets and think about stuff for a while and take a long time to make a decision. But I don't get to do that, particularly now. But broadly in life, that's probably not the right mode to be in. So inbox zero is another thing that forces me to deal with things rather than just be like, I don't know, I don't know, I don't know, and keep looking at the same thing over and over. So just more thoughts about inbox zero, but now I'll stop talking about it. STEPH: I do like that, though. And you're totally right; it can be a very helpful constraint. And I think that's sometimes why I fight it because then I haven't curated my inbox enough that then when I go to it, there are so many interesting things that then I feel a little bit overwhelmed where I'm like, oh well, I want to read this, and I will look at that. And this seems interesting, and maybe I should be a part of this. It feels like one of those like; you could be a part of these ten amazing things. Do you want to be a part of all of them? And given a person that it's hard for me to say no to or recognize that no, I'm just going to not do anything with this, that is hard for me and would be a good skill for me to hone in on and practice and make quick decisions and be very realistic. Because I used to be subscribed to more newsletters, and then I finally had to stop subscribing to them because it had that same effect on me of that FOMO of like, I'm missing out on this great article or this great video. 
And I've become more honest with the fact that my Saturday morning self isn't going to want to read through a bunch of newsletters and videos about coding, that I'm going to want less screen time. So that is a really good constraint and a helpful skill to cultivate for sure. CHRIS: All right, I said it was done, but one more thing. I feel like I've mentioned this in the past, but Feedbin is the thing that I use for RSS. I still believe in RSS as a technology. But everyone's moved to newsletters these days that go via email. Feedbin gives you an email address that you can use to subscribe to newsletters, and then they do the job of converting that into an RSS feed. So for me, I take something that was a push into my inbox, and now I can pull whatever I want from that RSS feed. And on Saturday morning, if I'm feeling like it, with a cup of coffee, I can enjoy some newsletter about all the new hot tips in Svelte land or whatever it is, or not. But it's not clogging up my inbox. And with that, I think I'm actually done talking about inbox zero. [laughter] STEPH: Yeah, that's a nice separation. We could keep going. I have full faith in us that we could keep going about this. But I'll share a slightly different update. I've been implementing a suggestion that you provided a couple of weeks back where we were talking about RSpec's selective test running and how some applications will speed up their tests. If you change one part of the codebase, then perhaps you only need to run this chunk of tests. You don't actually need to run the full test suite. And that is complicated and seems hard to get right, and it really requires understanding boundaries. But then also, knowing Ruby, how do you really identify? Do you really know where this method is being called and can you identify all the tests that need to be run? I think we'd mentioned before there's a really good article from Shopify where they have worked on this and created an open-source project called Packwerk. So we can link to that article in the show notes. But more specifically, you suggested, well, what if you just change a test file? That seems very low stakes and also has the benefit of creating a reward where if someone does see something that they can improve in a test, then that's very quick feedback. Let me just get this change in. It's going to be fast on CI. I can merge it right away, and it also saves time on CI. So I've been working on implementing that change. And it's one of those where the actual change is easy, like checking with Git to say, "Hey, what files have changed?" Does it have an _spec.rb at the end of it? Great. Does it not? Okay, we've changed some application files. So let's run the full test suite. That part's easy. Getting it integrated into the build system has been more complicated just because this team has done a lot of work around trying to improve and speed up their tests. And there's a fair amount of complexity that's there. So then figuring out a way to stitch my change into all the different build processes that take place has proven to be more difficult. But it's also been insightful just because it has now helped me really understand and forced me to learn, okay, what are all the different steps? What's important for each one? Where can I cut off the rest of the test run and instead just focus on running these tests? So in some ways, it's been challenging, but then on the positive side, it's been like, okay, well, this has taught me a lot about the existing system.
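[Editor's note: the check Steph describes boils down to a small Git-based guard in front of RSpec. Here is a rough sketch of that idea; the comparison against origin/main and the fall-back behavior are assumptions, not the team's actual build scripts.]

```ruby
#!/usr/bin/env ruby
# If a branch touches only *_spec.rb files, run just those specs; otherwise run the full suite.
changed = `git diff --name-only origin/main...HEAD`.split("\n").reject(&:empty?)

spec_files     = changed.select { |path| path.end_with?("_spec.rb") && File.exist?(path) }
non_spec_files = changed.reject { |path| path.end_with?("_spec.rb") }

if non_spec_files.empty? && spec_files.any?
  # Only test files changed, so running just the changed specs is safe.
  exec("bundle", "exec", "rspec", *spec_files)
else
  # Application code changed (or specs were only deleted/renamed), so run everything.
  exec("bundle", "exec", "rspec")
end
```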
So at this moment, it is still a work in progress. I'll have more updates in the future. I am excited to see the rewards. I've gotten to the point where I just have a proof of concept where I've gotten pushed up, but it's not production-ready. But it's at least I just wanted the feedback that I'm in the right spot and that we're running just the right test. And so far, it does seem like it's going to be a nice win, even if it's maybe not used by everybody because it's probably rare that someone is altering just a spec file. But for people who are looking specifically to improve the CI build time and working on tests, it will be very helpful to them. So yeah, I'm sure I'll have some more updates in the future. What's going on in your world? CHRIS: Well, I definitely look forward to hearing more about that. However, we can improve CI speed; I'm super interested in that as a topic. Mid-Roll Ad Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem. So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy. That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software. So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's newrelic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed. CHRIS: Well, similar to your email adventures, I continue on my search for the perfect to-do list. It's not going great, if we're being honest. [laughs] To be clear, because I've mentioned this on a few different episodes, I'm not spending much time on this at all, some but not much. And so it's not really moving. But there are two interesting things. I took a look at TickTick, which was one that I mentioned in the past, a tool for this. It seems good. It seems like an intersection between things, which is what I'm currently using, Todoist, which I've used in the past, and some other tools. So I think I'll probably explore that a little more. It seems like a good option. Decidedly, the most interesting thing is a tool called Sunsama, which is different in some interesting ways but very interesting. So one thing to note about it is it's $20 a month, which is a lot of money for one of these tools because most of them are like, "We're $20 forever, and then it's free." And it's a surprisingly low-cost space. And so, they're definitely positioning themselves as a more costly entry. 
I would be fine with paying $20 a month for a tool if it really is like, no, cool, I feel great. I'm more productive. I'm happier when I'm not working, et cetera. But what's interesting is they seem to do a let's reach out to all the places that tasks can live for you. So there's your inbox for email. There's your Trello board that you've got. There are GitHub issues. There's Slack. There are all these different sources of potential tasks. And they do a really good job of integrating with those other tools and then allowing you to pull that list into Sunsama and then make each day you have a list. And those items can be like, this is a reference to a Trello card on that board. This is a reference to a Slack conversation over there. So I'm super intrigued by it. It's also got a very intentional plan your day mode, which I like because that's one of the things that I'm really looking for is at the end of the day, I want to clean everything up, make sense of all of the open items, and then reprioritize and set up for the next morning so that I can just hit the ground running. That said, I tried it, and it just didn't quite click. And I think it's one of those it takes some effort to understand how to use it. So I'm not sure that I'm going to get there. But it is super interesting because that idea of our work lives in all of these different tools these days feels very true. And so, something that is trying to act as a hub between them to integrate them is very interesting to me. Again, I haven't really gotten anywhere on this. I'm kind of just reading blog posts, as it were. So I'll report back if that changes, but -- STEPH: The search continues for the right to-do app. Yeah, that seems interesting. I don't know why I'm feeling hesitant towards it. I'm one of those individuals...you're right; there are so many tools. And the fact that they integrate with a lot of them seems really nice. I'm at the point where I just grab links to stuff, and I'm like, hey, if this is my priority, I grab a link to a Trello ticket, and then I just copy that into my to-do. I guess I like that bit of work over having to integrate with a bunch of different platforms. Because once you get used to integrating...I don't know; I'm just rambling. But I wish you the best on this journey. I'm excited to hear more. [laughter] CHRIS: Thank you. I will certainly report back. But yeah, nothing pointed to share at this point. But I do have something pointed to share on the hiring front, which is that we have hired some folks. STEPH: Hooray! CHRIS: Yay. So this has been a fun saga across a couple of different episodes. And in my mind, it feels like this much longer, more drawn out thing, but it's; actually, I think, come together relatively quickly, all things considering. We've got someone who's starting in a little over a week's time, and then someone else who's starting in, I think, two or three weeks after that. So that'll be great. Hopefully, we can transition into onboarding, which is a different whole approach. But hiring as a distinct activity can scale back significantly. As we discussed last week, I want to be in the always be hiring mindset but in the more passive mode of having conversations with folks, staying connected. And if a great candidate comes along and it's the right time, then bring them on the team but not actually actively reaching out and all that sort of stuff, which will be great. Because it turns out that takes a lot of time and also a lot of energy for me. 
Having those first conversations, going into it very intentionally trying to communicate about something, and there's a tone of salesmanship to it that is not my natural resting state. So I come away from each conversation being like, that was fun, but also, I'm drained now. Why am I so drained? So not having that be a thing that is filling up my calendar is great. And also super excited with the folks that'll be joining the team and to be able to now grow our little team and define the culture and the shape of the groups that we will be collectively. I'm excited for that work and what we can build together. So yeah, it's an exciting time. STEPH: That's awesome. Congratulations. Because yeah, everything you're saying sounds like it's just been a lot of work. So that's very exciting. There's someone that I was chatting with earlier today where they were talking about the value and the importance of understanding what your natural skills are and the things that bring you energy. And so you're mentioning there are certain activities that you enjoy them, but they're also draining because perhaps they are on the outer boundary of what you might define as your own natural skill or the things that get you really excited. And I found that all very interesting. It had me thinking about that today about where are the natural areas that I find that I get energy that are easier for me? And then making sure that I'm trying to prioritize my day so that I am more focused on the activities that just align with who I am and also that I'm engaged with and then also looking for ways to stretch. But they made the point that if you are always in a space where you are not using your natural talents, and you're always having to stretch, then that can be what leads to burnout. Versus if you're in that sweet spot, that zone of where you are using your natural skills, but then also stretching a bit. And I think there are some assessments and things like that that will help you then determine what are my natural skills, and what do I like to do with my time? I just like that style of thinking and recognizing, like you said, like, hey, I did a thing. It was fun, but I'm drained. So now I know that this is something that requires more effort for me. Like hiring, that's one for me. I really like interviewing. I like talking with people, but I'm so nervous for them because I know interviews suck. [laughs] I just have so much empathy for them where I'm like, this is going to be a hard day. We're going to make it as pleasant and positive as possible, but I know this is a hard day. And so I feel like I'm in it with them. And so afterwards, I feel that same relief of like whoo, okay, interview day is over. CHRIS: I don't know that I quite achieve the same level that you do but in no way am I surprised that that is your experience of hiring. And just to name it, you're a wonderful human being that feels for the people on the other side of the hiring table. Like, oh my God, this must be so stressful for you. It's so kind of you to be in that space with folks. But coming back to what you were saying a moment ago, that idea of, like, understanding where your strengths are and where they're areas that you're not quite as strong. And I think critically, the question of like which are the ones where I want to just kind of say no to? I'm like, that's fine. This is not going to be a competency of mine. And I'm going to just avoid that or find other people to work with that balance that out. 
So for me, sales is the thing that I don't think that's ever going to be my bag. I don't think I'm ever going to move in that direction, and that's totally fine. Whereas decisiveness, which I was describing, is like, I think that's the thing I could get better at. That is one that I don't want to sleep on that. I don't want to say, "That'll be fine. I'll just have other people make the decision." No, I need to get better at making decisions, making decisions with less information or more rapidly, having a bias towards action. All those things I think will be deeply beneficial. So I'm trying to really lean into that. Whereas yeah, again, the sales stuff I'm like, yeah, and there's plenty of examples of this otherwise. But I've also been coding a bit more this week, which has been lovely because the hiring stuff has ramped down. And that has freed me up amongst some other stuff that's been going on. And you know, I like to code, it turns out. It's fun. I just clack about on my cherry brown keys, and it's great. STEPH: Do you remember when we first got introduced to mechanical keyboards, and we had co-ownership of one of the keyboards? And we literally had days of where it was like your turn to use the keyboard. And then it was my turn to use the keyboard. How long did we keep that up before we were finally like, we should just buy our own keyboards? CHRIS: It was a while because we were working with a colleague who was trying out a Kinesis, I want to say, one of the split little bowl of keys. But yeah, we had a shared custody over a keyboard, and it was fantastic. I remember that very fondly. STEPH: The days that it was my keyboard, I would go to the office and be like, oh, today is my day at the keyboard. This is great. This is going to be such a wonderful day. [laughs] And now I'm just spoiled. CHRIS: It went on for a while, though. And this was something where we both obviously enjoy this keyboard. Why don't we just buy one of these keyboards? We totally could have done that. And yet, for some reason, both of us were like, no, but what if...I got to think about this. Again, decisiveness. [laughs] We come back to this topic of well; I had to really think about it. And then somebody got the 92-Key test or whatever it was in the office. And so I just went over and poked every one of those for a while. STEPH: Exactly. It was option overload where we're like, well, okay, we're going to buy one, and then you open it up and search, and you're like, oh, you want options? We have options. Do you know about the blues, and the browns, and the colors, and these different options? Like, I don't know any of this language that you're talking about. I just want to clackety-clack. So yeah, it took time. We had to do our research. CHRIS: And then I ended up on basic browns. So here we are. Let's see, popping back up the stack a couple of levels, hiring that went on for a while. Now it is less going on. Although to be clear, like I said, always be hiring. So if anyone out there in the world is hearing what I'm talking about with Sagewell or seeing any of the stuff that I'm putting on Twitter, which isn't much, I occasionally just post screenshots of my commit messages, which recently included better snakes as a commit message. [laughs] I have to dig into that or not. But we were just doing some snake case to camel case conversion. But the commit message was better snake, so here we are. Anyway, if any of that sounds interesting, please do reach out. 
But I'm excited to transition back to focusing more on the work. On that note, actually, I'm going to call it interesting things that is happening right now organizationally is; we are working with an external security firm to help with some...they helped us out with a penetration test when we needed that. And then they have stayed on retainer and are helping with various different configurations, taking our AWS S3 buckets and making sure those are nice and secure, and all that kind of stuff. But we've recently started to focus more on organizational security, specifically a bowl of acronyms. We've got SSO for single sign-on, MDM for something device management. I don't know what that first M is. I probably should learn it, but it's fine. That's why I've got help on this is I think they know what the acronym stands for. But so we're working on each of those. And on the one hand, they're probably going to be kind of annoying, like having to go through the single sign-on. It's a whole thing, and it's harder to sign into stuff sometimes. I mean, ideally, it's actually easier. But in my experience, it adds some friction at some points. And then MDM means that there's now some profile manager on the computer. So I can say like, "Every computer must have full disk encryption or else you can't use it. And we need a passcode, and it must be this long and those sorts of things." So it's organizational controls that I think are good for us having a robust security setup throughout the organization. But yeah, they're the sort of things that I think historically, I probably would have, as someone working in an organization, had been like, do we have to? Do we need these things? Couldn't I just do whatever? But now there's something about it that I really like. I'm trying to name it in my head, but I'm kind of like, I don't know. This feels like growing up as an organization. And there's always weird corollary that I've been thinking about with the Rails app that we've been building, intimately familiar with just everything that's going into it. And I know the vast majority of lines of code. I haven't written them all, but I've had an eye on all of the different features that we're building in. And it's hard to get out of that headspace where it feels like a bunch of pieces. It doesn't feel like a hole to me, even though it definitely is. But when does a bunch of boards that you nail together become a boat? To make a really weird analogy because that's what I do; it's a hobby of mine. But when does that transition happen? At some point, certainly. But that's harder for me to see on the code side. And organizationally, somehow getting these things in place feels like the organization sort of an inflection point for us, a growth point, which is I'm really excited about it. Even though they're probably going to mean a ton of annoying nuisance work for me because I'm the person in charge of making sure it all gets rolled out. And anytime anyone locks themselves out of an account, I have to help with that. And so it's probably just putting a bunch of annoying work on my plate. And yet, I don't know; I'm kind of excited about it. STEPH: I feel like that shows our roots in terms of how we approach projects that we work on where you mentioned do we need this? Do we need this yet? Because I feel that we're constantly as developers and consultants just we're trying to advise on the more simplified do we need this? Is this the right thing to spend the money on? How do we know? What are the metrics? 
What does success look like? And all those questions. So I feel like the way you just phrased all of that just really shows that sort of mentality that you grew up with in terms of checking in, and yeah, it's cool. Like you said, you're at a growth point where then it's like, yes, we are at this point that I've asked myself all those questions, and we're here. This feels like the right next step. CHRIS: I like the way you described it as that you grew up with, my formative growing years at thoughtbot. Mid-roll Ad Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Well, switching gears just a bit, we have a listener question for today, and this one comes from Stephanie. So not me, another Stephanie in the world. Hello, other Stephanie out there in the world. And they wrote in, "Hi, Steph and Chris, fellow software consultant here. And I'm wondering if you'd consider talking about how to craft a project estimate for a client on the pod. It's such an important aspect of consulting." Amen. I added the amen. "And I feel like I'm very much impacted as a project team member when the estimate isn't accurate." Double amen. So true. [laughs] "Would appreciate any and all thoughts, especially since it might be part of my job in the future. Thanks." I just realized I put us in consultant church by adding all those amens, but here we are. [laughs] CHRIS: I'm glad you clarified that they were additions by you and not part of the original question coming in. STEPH: Sure. I don't want to speak on behalf of Stephanie. So I have some thoughts on the matter. I think there are a couple of different ways that we can talk about this particular question because I think there are different formats as to when you're estimating and who you're providing the estimate for. But I'm going to pause because I'd love to see what you think. How do you go about approaching crafting an estimate? CHRIS: Sure. I'm happy to share some thoughts. And for a bit of context, this question came in to us, frankly, many months ago, but I did send an initial reply to Stephanie because I know that sometimes we take a little bit of time to get back to folks. So if ever you do send in a question, know that one of us will probably respond via email earlier, and then eventually, will make it on the show. And again, just to say, we do so appreciate when folks send in these questions. It's an interesting way to shape the conversation and a way to get topics that you're more interested in into the fold here. But so the two main ideas that I shared in my initial reply were, first, is an estimate really necessary? I think that's a critical question because an estimate implies that this thing is knowable. 
And as many of us, probably all of us, have found out at some point in our lives as software developers, it's really hard to do software estimation, like wildly difficult. And not just the thing that we'll eventually get better at it, which you do, but there's just some chaos. There's some noise in this work that we do that makes it so, so difficult to get it right. So pretty much always, I will ask, like, do we need to estimate here? What if, instead, we were to flip the whole question on its head and say, let's set a deadline. Let's say two months from now that's our deadline. And let's ruthlessly reprioritize every single week to make sure that we're building something that's meaningful, and we're getting there. And obviously, we have to have some general idea of what we're doing. Is two months a meaningful amount of time to build a rocket to go to Mars? Probably not. But is it enough time to build an app that can allow users to sign in and manage a simple list of items? Yeah, we can definitely do that, and we can probably add a bunch of more features. The other thing that I think is worth highlighting is there's a bunch of stuff that is table stakes and very easy to do. But I would, whenever doing estimation, emphasize unknowns. So, where are the external integrations with other systems? Where are the dependencies that rely on other folks to provide some inputs into this process that we can't be certain where there'll be? In my experience, the places where estimates go awry are often these little intersection points that you're like, well, this will probably take a day, maybe two. And it turns out; actually, this can somehow balloon into a month. That's not a thing that feels comfortable saying in an estimation process, but it is definitely real. I've seen it happen so many times. And so it's those unknowns. It's those little bits that I would emphasize as part of the process if you do need to do an estimate and say, all right, here's the boring stuff. I think we can do that pretty easily. But this part, I don't know, it could be a week, could be three months. And frame it in that way that there is this ambiguity there. Because if someone's asking you for an estimate and they're looking for like it is seven days and two hours exactly, it's like, well, that's not realistic. That's not how this thing works. Unfortunately, I wish it did. But pushing back and changing the conversation is the thing that I have found valuable. I think there's some other really interesting stuff in here around the team dynamics that Stephanie is talking about. But I want to send this over to other Stephanie to see your thoughts because I'm super interested to hear what you have to say as well. STEPH: Oh, I like how you hinted at the team dynamics. Yeah, that could be a fun one to circle back to. So I love how you called out highlighting the unknowns. There are a couple of ways that this comes to mind for me. So there's the idea of the weekly or the bi-weekly estimates that we make as developers and designers. So let's say we as a team are getting together to focus on a chunk of work and decide what we can and can't get through. And that feels one of those the more you get to practice it more frequently; you get to ask a bunch of questions. And that feels like a good rehearsal and exercise of how to go through estimates. And I know you and I have pretty similar strong feelings around how those estimates are then treated by the company. 
They should really just be used for the team to talk through the complexities in the work to be done versus used to communicate outwardly as to this is when it's definitely going to ship. So there's that more immediate practice of providing estimates. And then there's the idea for more of a consultancy or a company, and someone is coming to you, so thoughtbot being a great example of then how do we work with teams that are looking to come to us and gain an estimate for getting a certain feature implemented? So actually, I went to the source on this one. I went to Josh Clayton, who does a lot of the conversations for the Boost team when it comes to talking with clients and about the potential work that they would like to be done. And mostly our work is often teams will hire us. They have specific goals in mind, but they're really looking to hire ongoing development and services. So they really want to add to their existing staff. And then it's going to be an ongoing relationship versus a hey, we need you to quote us for how long it's going to take to implement this particular feature. And on that note, we don't do fixed-bid work. So we don't say it's X dollars for specific features. But on the realistic side, customers are often capped by a budget. And so that estimate is very important to them because it could be a difference between it's a go versus a no go. So if you have larger companies that are like, "Yep, we want to engage with thoughtbot. We really just want additional development power and design services," that's great. For those that are smaller, it could be an individual product owner, and they need to say, "I really want this feature, but I only have this much money. And frankly, if I can't ship it by this time, I'm not going to do it because it's not worth the investment to my company." And then, in those cases, those are the ones that we're going to spend more time with them to talk about what does the fallback plan look like? And what's our opportunity for simplifying the features? And Josh, in particular, referenced this as systems thinking. So he will go through the idea of drawing out the set of steps, understanding the complexity of the different screens. So what are the validations? What are the external dependencies? What is owned by us and what isn't? What is the likelihood that we're going to get permission to simplify or remove complexity? And even then, when we start to provide some estimates, it's going to be in weeks. It's not in hours; it's not in days. It's going to be in a slightly larger time frame. And then we're also going to spend more time in the discovery phase to say, okay, well, we know you need to fix this particular issue, or you need to integrate with this particular service. So we're going to need to ask a lot more questions about your codebase. What problems have you already run into? Have you tried to do this before? Do we have experience doing this? Is this something that we can lean on and ask someone in the team? And, say, how long do you think it would take for us to work on this? And that's knowledge that isn't privy to everybody. It depends on where you're at in your career as to like, oh yeah, I've done this like five times before, and I know exactly how this stuff can fall apart. I know where the complexity lies. So I think that's why estimation is so difficult is just because it does often pull from that existing experience. 
And so, if you don't have that experience for a particular set of work, of course, it's going to be hard to estimate because you just don't know. So that was a very broad scope: as day-to-day developers and designers, I feel like we're constantly getting practice at estimating and communicating the progress of our work. And then on the larger scope, if you are a consultant who's looking to give estimates to clients, it's understanding what they need: can you sell them just ongoing development services? Or, if they are a smaller team and very focused, then what legwork can you do ahead of time to de-risk the project? And then understand how much control you're going to have to be able to simplify as you learn more as you go. Because you're going to, you're going to uncover some things, and you're going to learn some things. And what's that collaboration going to look like? I do have one more concrete example I can provide around some of the smaller projects that we take on. So when we are helping someone that's, say, getting a new product out to market, then we do have a more deliberate three, four-phase approach where we first focus on discovery, and ideation, and validation. And then, we move on to iteration and then launching. And I really like what you said about providing a deadline because then that helps us scope aggressively as to what is the minimum thing that we can get out into the world that will be valuable. And then there's usually some post-launch support as well. But that's often how we will structure those smaller, more specific engagements. CHRIS: I think one of the critical things that you highlighted in there is that thoughtbot doesn't do fixed-bid work. So we're going to do these 20 features, and it's going to take four months. thoughtbot does not do that, and frankly, that's a privilege to be able to take that position and say, "No, no, no," we're not going to work that way. But it is, I think, a trade-off. It's not just something that thoughtbot does to be like, listen, that doesn't sound fun. So I'm not going to do that. It's a trade-off. Not doing that comes in concert with saying, "But weekly, we're going to talk about the work that we have done and the work that remains and constantly, ruthlessly, reprioritize and re-decide what we're doing." And it's that engagement. The idea that you can have a body of work, look at it and say, "Yeah, that'll take about six months," and then go away for six months, and then come back with the finished software: our strong belief is that that's not the way good software gets built. Instead, it's a very engaged team where the product owner and the development team are in constant communication about each of the features that are being developed. And then again, ideally, on a weekly cadence, coming up for air and saying, "How are we doing? Are we moving in the right direction? Are we getting towards the goals? If not, do we need to simplify? Do we need to change things?" And similarly, as I mentioned deadlines, I feel like deadline is probably a word that many people think of as very bad because deadlines often also come with a fixed scope, but that can't happen. That's two constraints, and you can't have them fighting that way. But a deadline can be super useful as a way to say we're going to put something out there in the future and say we're heading towards that moment. And let's, again, cut scope. Let's change what we're building, et cetera. But critically, not say, "We got a deadline and a fixed scope.
We're going to do that." And so it's, again, just ways to gently shift the conversation around and say, what if we were to look at this from a different angle? Because just having a pile of work and saying, "That'll take six months," I've never seen that play out. STEPH: Yeah, to me, deadline is a bad word when the deadline is set by a team that's not doing the work. So if you have leadership or if you have someone else that is setting this deadline and then just passing that down to someone else to then fulfill, regardless of the feedback or how things are going, then yeah, then it can be a nasty thing, which I think is a little bit of in that question that you picked up on that you highlighted where there could be some interesting team dynamics that Stephanie called out, highlighting that I'm very much impacted as a project team member when the estimate isn't accurate. And I'm making some assumptions here because I don't actually know the exact situation that Stephanie is experiencing. But it sounds like someone else externally is setting these team estimates. And so then you're handed this deadline, and then stuff goes wrong, but you're still pressured to meet this deadline. And I've certainly been part of projects that are like that. And then that is one of the number one things that then often comes up in a retro or like, we don't have control over these deadlines, or we don't know why these deadlines are being set. And then people are working extra hours and working nights and weekends to then meet this arbitrary deadline that none of us signed up for, and that's just not fair to treat deadlines in that way. So full-heartedly agree that deadlines can be a very positive thing, but they need to be set by the people doing the work. And then there has to be discussions and updates about how is this going? Do we have control to simplify this? We thought we could do this with this particular external provider. It turns out that that's a nightmare. Is there another provider we can go with? Can we ship this incrementally? Like some features, you can't. They may have to go out wholesale. But is there a small chunk of this that we can deliver that is then a success that leadership and others can brag about? And then we can keep working on the rest of it. So it's always identifying what are the smallest wins, and how do we get there and getting buy-in from the team? Going back to something that you said earlier around, it is a privilege, where so as thoughtbot, we don't do fixed-bid work. And that is a nice thing for us to be able to focus on. But for people who do need to do fixed bid work and are relying on that, I think that often requires more legwork. And maybe that becomes part of your estimate. I'm just making up how I might approach this if I were trying to do fixed bid work. But there's a discovery phase that's very important. So maybe the first part of your estimate is I need to really understand the feature and see the different screens and know what materials we do or don't have. What does the codebase look like? Do I feel like this is a codebase that I can work quickly in? And is it going to be hindersome for me? But answering a lot of those questions to then help me paint a picture of, like, okay, this is a feature that I've implemented before, so I feel pretty confident that I could do this in a month. And then also communicating that this is my estimate but just know it's an estimate. 
And I will continue to update you each day or each week as to how things are going, and things may adjust. And we can always talk about ways of simplifying this. But I think that's how I would go about it; frankly, it's going to require more legwork for me to feel more confident in telling someone how long I think the work will take. I think that's a nice, broad scope of the different types of estimation work to be done, with the general idea of if you can avoid estimates and go for more frequent updates, then that's wonderful. But then, if you are forced into a corner where you need to provide an estimate, then just do as much research as possible, be as honest as possible, and then still include the frequent updates. CHRIS: Yeah, that I think summarizes it quite well. STEPH: As a side note, it's been a lot of fun to feel like I'm referring to myself in the third person as Stephanie is working through this problem. So that's been novel. But yeah, thank you, Stephanie, for the great question. I hope that was helpful. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph is super excited about changing her schedule to dedicate a full day to focus on being a great team lead. Chris talks about his continued adventures in the world of hiring. Together they answer a listener question about what they consider a “large” table in a database and how they review schema changes. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Services down? New Relic (https://newrelic.com/bikeshed) offers full stack visibility with 16 different monitoring products in a single platform. ClickHouse (https://github.com/ClickHouse/ClickHouse) Timescale (https://www.timescale.com/) InfluxDB (https://www.influxdata.com/) ddl-partitioning (https://www.postgresql.org/docs/current/ddl-partitioning.html) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: I just feel like every time I listen to Celine Dion, there are lots of dramatic hand gestures that have to go with it. CHRIS: Yep, definitely that. I'm strong team Power of Love. STEPH: Ooh, yeah, yeah, that's a good one. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Oh, I have some exciting news. I am changing up my schedule. It is going to start next week, where as a team lead at thoughtbot, we have been working on finding ways that we can have more time to invest into the team and team-specific initiatives. And most of us spend four days billing on client work, and then we have investment day, which is delightful. But we're finding as team leads, that's really not enough time to then have the impact that we want in terms of supporting our team and then also having time for mentorship and all the other things that go along with being a team lead and one on ones. So we have been incrementally working towards reducing billing. So team leads only bill three days a week, and then they have an additional day to really focus on being a great team lead. And I start that new schedule, that new-new schedule next week, and I couldn't be more excited. I think it's going to be wonderful. I do think there are some challenges that go with it in terms of really balancing, at least this is from the others who have gone before me where they then find it a bit harder in terms of client expectations of saying, "Well, I was billing four days, and I had a larger impact on the codebase and the team. Now I'm dropping to three days. I still need to stay within that constraint. And I want to keep the client team happy." So that seems to be a thing. But I will find out next week how it goes. CHRIS: Well, I'm very excited for you. That sounds wonderful, frankly. The balancing of the client expectations and then there's sort of now three slices to your work, which there always were, but now you have it delineated in an interesting way. Do you have specific plans for the team lead? So let's say now, nominally, there's one day a week that is dedicated to team lead time. Do you have ideas of what that looks like? Are you planning to pair with your team? Is it longer one on ones? I don't want to seed the question too much with potential answers. So what are you thinking about there? 
STEPH: [laughs] Ideas are great. And yes, so I think number one is structure. So right now, one on ones and any support that I need to provide others is more ad hoc, or at least the one on ones those are not ad hoc; they are structured. But they are spread out throughout the week, and then I just context switch between client work and then checking in with others. Now I can stagger everything on a Thursday or whichever day is going to be my really focused team lead day. So that way, I have all the one on ones on that day. And then yes, I can have more time to pair. So I can say, hey, let's just schedule every other week where you and I hang out, and we pair for an hour, and it can be on their client work. It can be on anything that they'd like to work on. Or, if there's a particular topic they'd like to talk about, we can pair on consulting issues or discussions. But yes, ultimately, I'd love to do more pairing and then structure one on ones to a specific day and essentially, just really protect that time. Because right now, it feels that client initiatives and work always come first, and then team lead comes second. And I'm excited to balance that more so they have equal priority. CHRIS: Yeah, that sounds great. I'm super intrigued to see what specific structures fall out of it and how you're experiencing it. I'll be interested to hear how investment time changes for you as a result of this because I remember when I started in the management role, four days a week billing, and then one day a week of investment time. But the investment time then basically went to one on ones and other things like that. And when I switched to a three-day week, I was able to reclaim some amount of investment time. And it was interesting having that open back up and have that be a consideration. Because definitely one on ones and things like that I think firmly fit within the idea of investment time or investing in the organization and whatnot. But still, there's the like; I'm going to go explore a new framework or something like that that also certainly fits within investment time. So I'll be interested to hear if you find that changes in sort of a specific way. STEPH: Yeah, I'm really interested in that as well. Because right now, as you mentioned, my investment activities are really focused more around the team and other folks and then Bike Shed. Bike Shed is a really big investment time activity. So I've noticed since becoming a co-host for the show, I talk a lot about code, but I don't necessarily contribute to open-source projects or other internal projects at the rate that I used to. It's now more focused about here and being a co-host and talking about all the things, and that requires some prep for me. So I'm also interested to see if this will shift my investment time a bit where I do find a little more time to code and then explore just things that I'm interested in. But in the experiment of doing something new, it's always important to then have a way to measure is this a good change? Is this a bad change? So we have been checking in with team leads to say, "Hey, we've changed your schedule to where you're billing one day less. How's that going for you?" Because there's the assumption that this will be great, but you really have to check in with folks to find out. So Edward Loveall has been sending out a helpful survey and checking in to say, "Hey, how are you feeling about your client work? How are you feeling about your team lead responsibilities? How are you feeling about investment time?" 
So then you can track your own growth and see is this really helping me? Is this really going in the right direction, or am I just more stressed about everything now? So that's helpful that we are also just looking back to make sure that this is supporting the initiatives that we said it would support. But that's some of the newness in my world. What's going on in your world? CHRIS: What's going on in my world? Continued adventures in the world of hiring. So we've got a couple of people in the pipeline now. We've got some folks in the technical interview phase, which we're structuring our technical interview very much inspired by the thoughtbot interview. So it's a pairing session as well as some code review, which is great because I think it's really representative of the actual work that we do. I believe strongly in not having an interview that is trivia or anything of that sort of thing. I want to see folks at their best as opposed to finding the rough edges. Because I think it's critical to have an interview that really represents the work that we're doing and then also gives candidates an opportunity to show themselves at their best as opposed to trying to hunt out gaps in knowledge or things like that because I think it's easier to shore up a gap of knowledge. But I really want to know what is this person like when they're firing on all cylinders? So, so far, that's going great. But hiring is a complicated long game. So it will probably be a thing that I'm talking about for some weeks to come. And if anyone out there is listening and is potentially interested in a new adventure, I would love to chat with you. Sagewell Financial is hiring. And it's a wonderful Rails codebase and lots of new opportunities, et cetera, et cetera. STEPH: As someone that has worked with you, I can absolutely vouch that you are amazing to work with. And I can only imagine the codebase must be...everything we've talked about is really interesting and stellar. So yeah, I love that you're talking about this. I think it's awesome and a great opportunity for folks to get to join Sagewell. CHRIS: Oh, thanks, Steph. That's very kind of you. But in other unrelated to hiring news, one of the things that I talked about in last week's episode was my search for a new to-do list or a new application to use. And I listed some of the ones that I've been exploring. We got more feedback about that particular segment than any other by like 2X. And there's something to be said there. Maybe the show is just living up to its name. But so many people are reaching out like, "Oh, have you looked at this one?" And to be clear, I very much appreciate all of the feedback that folks have given. And actually, it has given me a few new things to look at or ways to think about this question. But mostly, I find it very funny that even though we've dabbled in topics like agile, and is it good or bad? Or other contentious ideas [laughs] like that, somehow this idea of what to-do list application should I use by far the most engagement we've seen with our audience. STEPH: I think it makes sense. Everybody has an opinion. Like you said, we're living up to our name, which is great. Was that great? I don't know. [laughter] CHRIS: It's something, I'll say that. STEPH: It's something. But yeah, everybody has felt this pain. They get it. It resonated. But since we do have some people that shared their strategies and their thoughts, did that sway you at all? Are you still going to keep with what you have, or are you going to explore new things? 
CHRIS: I consider this project open. I have a project in Things, which is the current to-do list application that I'm using to explore the landscape. But it's basically like, I want to timebox it, find a version that works for me. And right now, I moved to Things, and it's fine. I'm more intrigued by the jobs to be done aspect of it. So as opposed to a particular piece of software and the features that it has or doesn't have, I really want to think about the habits and workflows that I want to make easier and more repeatable. So particularly, each day, I want to wrap up by cleaning everything up. I like my inbox zero, as you probably know, so doing a little bit of that, and then planning the next day. So I want to have a tool that supports that idea of I want to queue up what I'm going to do in the morning so that tomorrow morning when I start back up, I have a very clear list of things to do. And I can just dive in with what I find to be some of my best thinking time early in the morning. Similarly, I want to be able to review on a regular basis and know if things are getting stale or overdue. So there are a couple of different workflows that I'm really focusing on. And it's unfortunate because then I look at each piece of software, and I'm like, well, you kind of support this but not totally. So I'm more in a collecting phase right now. I'm thinking about the workflows that I want to have and then finding the different tools and comparing them across those. But the one thing that I have done at this point is I wrote a little Siri shortcut I think is the name for it. They're called Shortcuts is the name of the application, but if I try and Google that, Google doesn't really know what I'm talking about. They think I'm talking about my phone, but I'm not talking about my phone. I'm talking about my actual computer, but it's little workflow automation stuff on OS X. And so I have a shortcut now that prompts me for the amount of time, and it defaults to 45 minutes. And then, it will turn on Do Not Disturb for 45 minutes, minimize Slack, because I can't be trusted, and turn on a particular Spotify playlist. And then there's a little menu bar application that...I wrote a tiny bit of AppleScript; I found it on the internet and actually read it, that finds the top task in my to-do list and puts it in the menu bar. And so now I have all of that. I push a magic button, and I say, "Yes, so I would like to work for 45 minutes on the thing that is at the top of my to-do list.” And then all of the noise of the world goes away for 45 minutes or however long I say. STEPH: I think you just created the next new hot to-do app. [laughs] This sounds like something that I need and love, especially when you're like, it autoplays a playlist for you and shuts down the world and then has you focus. Yeah, I like this. I think this also rings a bell. I feel like Momentum, or something also has similar prompts. But this sounds delightful. CHRIS: If we're being honest, it's an absolute hodgepodge of a kludge. You have some weird shell scripts and some AppleScripts. And I had to install a weird command-line utility for Spotify to make it happen. But it was one of those like; I'm spent at the end of the day. I just want to tweak on some piece of code. And this was a perfect, productive distraction, is how I would describe it. And when I've used that, I've been very happy. I know the days that I actually lean into that mode of working are better days. 
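For anyone curious what a script in the spirit of the one Chris describes might look like, here is a rough Ruby sketch of the same idea, not his actual Shortcuts workflow: prompt for a duration, hide Slack, start a Spotify playlist, and notify when time is up. The app names and playlist URI are assumptions, and the Do Not Disturb toggle is left out because it varies by macOS version.

```ruby
#!/usr/bin/env ruby
# Rough sketch of a macOS "focus session" script (an illustration, not the actual
# Shortcuts workflow described above). Assumes the Slack and Spotify desktop apps
# are installed; the playlist URI is a placeholder, and Do Not Disturb handling is
# intentionally omitted.

print "Focus for how many minutes? [45] "
input = gets.to_s.strip
minutes = input.empty? ? 45 : input.to_i

# Hide Slack so notifications stay out of sight.
system(%(osascript -e 'tell application "System Events" to set visible of process "Slack" to false'))

# Start a playlist in the Spotify desktop app (swap in your own playlist URI).
system(%(osascript -e 'tell application "Spotify" to play track "spotify:playlist:YOUR_PLAYLIST_ID"'))

puts "Focusing for #{minutes} minutes..."
sleep(minutes * 60)

system(%(osascript -e 'display notification "Focus session finished" with title "Focus"'))
```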
The days where I allow myself to be distracted by Slack throughout the day, although I'm responsive to certain questions, things are not moving as well as they should. And so, I'm trying to be really intentional with having more of these Do Not Disturb sessions throughout the day. I feel bad saying that. I shouldn't because we all should be in agreement that this is the way that we work. But even saying that, I'm like, I'm not that special. I should be reachable, right? [laughs] But I should take even just a short 45-minute break to focus on the work that I actually need to do. It's a struggle. STEPH: I have struggled with that where I used to always feel such an urgent need that I had to respond to someone as soon as they messaged me. But over time, I've learned that one, things typically aren't as urgent as I will feel that they are. And then two, if you have that type of environment where people aren't expected to just immediately reply to stuff, then you learn to write things in a way that says, "Hey, when you see this, and here's context, and here are the things that I'm looking for. And here's an easy way for you to give feedback." It just improves the overall communication. I could go on a rant about this. I think we've actually ranted about this before in a very positive way. [laughs] Yes, I think that's great that you are fighting the good fight and turning off the world for 45 minutes to focus on a task. CHRIS: What's a positive rant? I feel like there's got to be a word for that. [laughs] But I'm trying to try and come up with that. A celebration isn't...is this one of those gaps in language where we don't have a word for a positive rant? STEPH: Oh, this is going to bother me. [laughs] There's got to be something for a positive rant. CHRIS: Well, I'm sure German has...some Scandinavian language or German has a more specific version of when one goes off on a rant for many hours about things that they love and are joyous about in the world or something like that. But maybe English is just lacking this, or maybe this is a market opportunity. And we can coin the word, and then it's ours. STEPH: I think it's just praise or accolades, although that doesn't feel strong enough. Rant feels like such an emotional word that I agree praise doesn't feel strong enough. CHRIS: It's also spacious. You don't just rant, and it's one word. It's not just like one swear that you yell in the word. No, it's this long rambling thing, and I want that but positive. Maybe it's just called The Bike Shed [laughter] because I think that might be what we do. STEPH: I love that. I'm trying to smash it together, and all that I can come up with is prant, so that leads with a P. CHRIS: Yeah, I went there real quick. [laughter] Portmanteau is where I spend most of my time. But prant is just not enough. Okay, we're going to take this offline. I think we should come up with a word. This is our market opportunity. I don't know that we'll make a lot off of this, but we'll have a word then. STEPH: It's okay. Free things are good. Oh my goodness, this is going to be so trivial and silly. But I've been playing Wordle as the rest of the world has. If you're not playing Wordle, check it out. [laughs] It's delightful. And it's free. But I started playing without really researching who created it and didn't have all of the details behind it. And then it was earlier this week I found out that the creator of Wordle is Josh Wardle. 
And that just blew my mind and made me so happy that it just had that alliteration and similarity. And I just hadn't put it together until that moment. And it was just this wonderful, happy bubble of a moment where I was like, oh, that's delightful. [laughs] And I'm pretty sure I texted some people who were like, "Yeah, yeah, we know that." [laughs] CHRIS: Yes, that was a wonderful positive rant or prant as it were there. And Wordle really is just such a delightful phenomenon that popped out of nowhere and is given away for free by the kindness of Josh Wardle. So yeah, wonderful things on the internet. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: We have a listener question this week. Once again, just as a reminder, everyone, we love getting these listener questions. Feel free to send them into hosts@bikeshed.fm or ping Steph or I on Twitter or any number of different ways. There's, I think, a form that you can go to the website, lots of different ways to ask us your questions. But again, we really love them. They let us have more pointed topics to talk about, such as today's topic, which is "What do you consider a quote, unquote, "large table" in a database?" Which is an interesting question, I think. And so, let me read the question here. "Hey, Steph and Chris, I've listened to you (and most of your predecessors) for a while now. I've really been enjoying the conversational style about your actual development struggles." Thank you so much. This comes from Matt, by the way. "Anyway, something Chris said in Episode 301 triggered a thought for me around large tables and databases and handling them for development tasks. What do you consider a quote, "large table" in a database? What questions/considerations come to mind when you're doing PR work that has a database interaction in it? We recently needed to delete a lot of rows out of a large table, and the team has a lot of discussion around how to handle it without impacting our production users. Curious on your thoughts. Thanks." So, Steph, what do you think? What's a large database table in your mind? STEPH: So I don't have a scientific answer for that, but I can give you my gut instinct. So typically, if there's a table that has a million or more records, I'll refer to that table as a large table. 
And then, if a table has around half a million records, then I start to be more cautious about data changes and how I'm rolling out schema changes. So that's very loose; this is my feeling of when we're getting into large territory. How about you? Do you have more of a concrete answer? CHRIS: I don't. And I think it would actually, in a lot of cases, be defined based on the database system that we're working with and, frankly, the RAM available on that system. There are two different sides of it; one is on the write side, like, how quickly are we inserting data into this table? And "How quickly is it growing?" is probably a better question. Maybe there's a ton of data in it, but it's not growing that quickly. And so, we don't need to worry about any runaway characteristics. The other side of it is how easily can we read from it? And that is the one that's going to be RAM-constrained. Can we maintain an index efficiently? Can we query effectively and use RAM and whatnot? So a million starts to become an interesting number, probably. But I've worked on plenty of databases where hundreds of millions of rows existed, and we've got efficient indices in place and enough RAM that the database just happily works with that, and there are no problems. So really, it's a question of like, if we start thinking about having to need to delete data, then that's a large table. If we have one table that is wildly larger than the others in the system, then that is something that I'll keep an eye on. I want to make sure, like, how's that table doing? How's the special table doing? And often, there are one or two special tables similar to the idea of god objects within a system where these are the one or two classes that have just method after method after method after method. Similarly, there are one or two database tables that often have the lion's share of the data within the system. And so those are the ones that I'm really focused on. And especially as we get closer to the RAM limit, there's this drop-off that I've seen happen where a system is like, it's fine. We got 250 gigs of RAM; there's no problem. And our database is only 100 gigs. And then a couple of weeks later, suddenly, a bunch of new users sign up, and suddenly, your data and your indices no longer fit in memory. And now we're paging to disk, and suddenly the performance characteristics of your system just tank. And so it's that sort of thing. Watching growth rates is perhaps more important than the absolute size of any individual table. So yeah, those are some loose thoughts. STEPH: I like how you used the word interesting. I think that's a nice replacement for the word large. When we get around a million records, things start getting more interesting in how we're rolling out schema changes. And then there's also usage, which you touched on, and which aligns well with how I think about it: I often don't think so much about how many records we have in a table but about the usage of that table. How many queries or transactions are being executed against that table? Is this a very popular table like the users table? And will running a migration that renders that table inaccessible for a couple of seconds be problematic, or is this a table that we write to a lot, but we don't read from very often? And even if it runs a couple of seconds, it's not likely to have an impact on people using the application. So that's one area I tend to think about first: what's the popularity of this table?
And how cautious do I need to be in making sure that we don't block other people from accessing this data? I also really like how Matt asked the question about what considerations come to mind when you're doing PR work that has a database interaction. That's one of those areas where, honestly, I lean pretty heavily on Strong Migrations to remind me how I can rewrite a migration to turn a blocking operation into a non-blocking operation. So a really good example is setting a NOT NULL constraint on an existing column. I know that it can be very blocking if you try to do that by default when you first run it, and I will look it up every time. I will check Strong Migrations and say, "Hey, I know you've got some really great docs that will walk me through adding a check constraint instead," and then making sure that I can then add this new constraint. So going forward, inserts and updates will have the constraint applied, but it doesn't validate all the existing data. That particular example is also a really good reminder to start with stricter constraints because it's a lot easier to remove a constraint than to add one later. So that's one consideration that comes to mind. I also think fail fast and fail loudly applies nicely here. So if I'm looking at a PR that is making a schema change, then I want to validate that the application has low timeout values so that way, if a migration does take more than 30 seconds to run, then the migration will time out. And then that will alert the developer to say, "Hey, do you need to think of a new approach or see if there's a way to improve this?" Versus if that migration didn't time out, then that blocking is going to become user-facing as users start to experience problems with the site. And then also looking for more performant methods, so using find_in_batches, update_all, delete_all, just checking for the more performant ways that we can update large sets of data. Those are, I think, the top things that I really look for. How about you? CHRIS: Yeah, I think very similar to everything you just said. And broadly, there's a point in time that happens frankly pretty early on in the growth of an application and the data set behind it where you need to start behaving differently with regard to migrations. There's a small period of time where I can just get away with anything. I actually really love the part before we have any actual users where I'm like, oh, we need to change this fundamentally. I'm just going to drop the table and rebuild it because it's easier than trying to think about how to migrate this data. But so quickly, you get into a place where it's like, nope, sorry, can't do that; you have to treat this as realistic. So a bunch of the strategies that you're describing, like adding indexes concurrently, are ones that I'll reach for often because that allows me to decouple the timing there and not...again, the migration timeout that you're talking about is absolutely something that I want to have. Migrations should go through quickly, and if they can't, then we need a different approach. Maybe we need to introduce the new column in parallel to the existing column, and then eventually do a switchover. It's definitely more work and involves a couple of deploys to get that done, but that's the unfortunate reality that we have to move to. I will say one of the things we talked about is like, if we hit that timeout, then we're going to stop that migration.
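To make the two patterns mentioned here concrete, here is a minimal sketch of what they can look like as Rails migrations. The table and column names, constraint name, and Rails version are made up for illustration; this is roughly the shape Strong Migrations' docs suggest rather than code lifted from them.

```ruby
# Sketch 1: enforcing NOT NULL on an existing column without a long lock, using a
# check constraint that is enforced for new writes first and validated separately.
class AddEmailNotNullCheckToUsers < ActiveRecord::Migration[7.0]
  def change
    add_check_constraint :users, "email IS NOT NULL",
                         name: "users_email_not_null", validate: false
  end
end

# Later, after backfilling any NULLs, validate the constraint (a scan that doesn't
# block writes) and optionally convert it into a real NOT NULL.
class ValidateEmailNotNullOnUsers < ActiveRecord::Migration[7.0]
  def change
    validate_check_constraint :users, name: "users_email_not_null"
    change_column_null :users, :email, false
    remove_check_constraint :users, name: "users_email_not_null"
  end
end

# Sketch 2: adding an index without blocking writes; concurrent index builds are
# one of the cases where the migration cannot run inside a DDL transaction.
class IndexUsersOnEmail < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_index :users, :email, algorithm: :concurrently
  end
end
```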
And if we do hit that timeout and stop the migration, there's a critical feature that I rely on deeply in Postgres, which is that schema migrations or DDL transformations; if I'm saying that correctly, I'm not sure I am, but throwing an acronym out there, it'll be fun. This is actually one feature of Postgres that I really rely on. My understanding is that Postgres has this; MySQL does not, but I may be off. I know that Postgres has transactional DDL, so schema migration sorts of things. I'm adding a column; I'm removing a column, et cetera. Those inherently happen within a transaction, and that's wonderful because if they do time out, we want to be in a consistent state. The worst thing I can possibly imagine is being like, we got halfway through, but then we failed, or we lost connection, and so it's half migrated. It's like, oh God, I want to trust my database deeply. That's sort of one of the fundamental things that I have. And I've, over time, pushed more and more into the database and saying let's have check constraints. Let's have NOT NULL constraints and all of these sorts of things so that the data in my database can be deeply trustworthy. The idea that a schema migration could go awry, and suddenly we've got like, well, half of the rows have these extra columns. What does that even mean? How do you live in that world? So I love this feature of Postgres. I really rely on it. I feel very bad whenever I have to disable it. I think there are some enum-related things that require disabling DDL transactions. And whenever I type that in a migration, I'm like, I don't like this. I'm not happy about this, but it's the world we live in for now. STEPH: If we're sharing our truths, yeah, adding an index concurrently also requires you to disable that DDL transaction. For a previous project that I was working on, we often ran into that timeout where we'd run a migration, and then it would time out. And we would then just specify and be like, "Hey, for this migration, I'm going to bump you up to a minute. I'm just going to make it longer." And that felt questionable at times, but I at least appreciate the explicitness of it where you're making that decision to say, nope, I think this is fine. It's not going to impact anybody, or we're going to run it in off-hours. I do want to extend this to a minute, and then make sure you do reset it, so it doesn't affect everything globally from there on out. But that's something that you can do, and I have done before, which I feel is important. You still want to know some of your outs in case you do need something like that just to fix things in a moment, but then at least be intentional about when you're using it and then communicate to the team like, "Hey, I'm doing this; let me know if you have concerns about it." For this specific scenario that Matt provided, where they recently needed to delete a lot of old rows out of a large table and the team had a lot of discussion about how to handle this without impacting production users, I happened to have a really nice conversation with Steve Polito, a fellow thoughtboter, about this particular question. And he had a very thoughtful response that I hadn't considered where he suggested starting with deleting the data for a small set of records.
So, for example, if you're working with a users table, you could scope the data deletion to only inactive users and then use a feature flag to disable any interactions that would be affected by that data loss, run that change to delete the data for those inactive users, and then check for unexpected errors or side effects. So then that way, you have this moment to pause to say, "Hey, did we forget something? Is there something about this application that's still relying on that data that we forgot about? We've only done it for a small amount of users, so we're in a safer space." So then, at that point, you can either repeat those steps for another batch of records or use that feedback to then drop the column with confidence. And that was an approach that I hadn't considered, but I really liked that idea. CHRIS: Yeah, it's a nice, I'd say methodical approach to what can be a very complex and difficult to wrangle task. I will say I haven't actually explored this too much, but I've always had in the back of my mind, like, if we're deleting data from the application, ideally, we're saying this data is no longer needed. But I wonder if using table partitioning is an alternative that can be useful in these cases. What if we're able to figure out the correct partitioning? It's often time series sort of stuff. What if we're able to lean into that and say, "Let's partition this by year." And then yeah, we don't use the old data anymore, but it lives in a separate partition. And therefore, I think Postgres is able to do reasonable things with that. And again, like disk space, we can have a lot more storage on disk, but RAM is really going to be the constraining factor of how much of the index fits in memory. And again, I haven't pushed on this. But I think that's an alternative approach that can be really interesting. But if we do have time-series data, in particular, Postgres is wonderful. But it's not necessarily honed exactly to that use case. And so, there are a couple of tools that I've kept an eye on right now: ClickHouse, Timescale, and InfluxDB being the three of them. And I think most if not all of them are based on Postgres, but then they build on top of it. And they add some deeper understandings of time series data and how to optimize querying and storing, and all of that. And so, is that an alternative that allows us to still stay in this world but then have a different approach and alleviate some of the burdens that might come with this heavy data that we have? STEPH: Yeah, all those sound interesting. I haven't heard of some of those. This is why I love chatting with you. You always bring interesting perspectives that I had not considered before, like the partitioning. Just to clarify, partitioning the data is a way of keeping that data, but then it's not indexed. So that way, your system isn't spending as much time making sure that data is easily readable. But then that way, you don't actually delete it, so then it's there should you wish to be like, oh, I wish I hadn't gotten rid of that data. CHRIS: I think so. I'll be honest; I don't deeply understand it. But I think you basically can say given a giant projects table within your system; we actually may have that logically grouped by user sort of thing. And so we can shard and partition and say, there are ten different buckets of these. And if we optimize it well such that all of the things that are logically together actually live together on disk, then it allows Postgres to be much more efficient. 
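As a rough illustration of the incremental deletion approach Steph described just above (a sketch, not Steve Polito's actual code): scope the deletion narrowly, turn off anything that depends on the data, and delete in small batches. The `Activity` model, the `inactive` scope, and the Flipper feature flag are all assumptions here.

```ruby
# Sketch: delete old rows only for inactive users first, with dependent behavior
# turned off behind a feature flag, and in batches so no single DELETE runs long.
# `Activity`, `User.inactive`, and the flag name are hypothetical.

Flipper.disable(:activity_history) # disable features that still read this data

Activity.where(user_id: User.inactive.select(:id)).in_batches(of: 1_000) do |batch|
  batch.delete_all # one short DELETE per batch, no callbacks
  sleep(0.1)       # brief pause to keep pressure off the primary
end

# Watch error tracking and app behavior; if all is well, repeat for the next
# cohort of records (or proceed with the full cleanup) with more confidence.
```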
Similarly, with time-series data, you can use this sort of windowing where each month, we get a new bucket. And then it's much easier to query across just that bucket because it's already sort of partitioned down in that way. But I'll be honest; I'm now speaking well past my actual knowledge. I've never actually worked with it. But it's one of those things that I have in the back of my mind. Like if all of my other tools fail me and if I cannot solve these performance problems in a Postgres system with indexes, and tuning, and other things like that, then I will look to partitioning. So I look forward to that day when the data problems are so massive that I need to table partition. STEPH: Got it. Like they say, it's a good problem to have. While adding to the list of tools, there's one that I discovered recently; it's called Safe PG Migrations. And it's very similar to Strong Migrations, where Strong Migrations will warn you and say, "Hey, this is not safe. There are other ways to write this migration." Safe PG Migrations takes a more aggressive approach and will rewrite your migration to be a safer version. And I don't know how I feel about it. I love it, and I hate it. [laughs] It's one of those where the magic is there, and that could be phenomenal. But I get squeamish when things want to rewrite something as important as my migrations. But on the other hand, it is like a really nice default for the team because it's more than a warning. So that way, if you're trying to put something more strict in place for people to follow, then this would be a good way to do that. CHRIS: I'm very intrigued by that as a tool because if it were obvious, then Postgres would do it. The team behind Postgres does absolutely amazing work. And so if I tell them, "This is the change I want to make to the system," and they're like, "Cool, we're going to do that super inefficiently," and someone else is like, "No, no, no, I can trick it." Postgres is good at tricking itself, is my stance. So I'd be intrigued as to what secret knowledge they have or what are their caveats where Postgres has to handle every possible edge case. And therefore, it's slower because of pessimistic concerns that it has. But this tool says, "No, no, as long as you're not doing this very terrible thing, you're fine. And we'll rewrite it to a safer, faster version." But I'm just kind of intrigued, like, why do you think you're better than Postgres? STEPH: [laughs] Why do you think you're better? Well, I do have an example I can provide. It's one that they have on their README. And this one highlights that if you're adding a column to an existing table and you're adding a default value and a NOT NULL constraint, then they show you how it's rewritten: they explicitly set the lock timeout, and then they will add the column. And then they will set the default value, but not in a way that it's going to do a table scan where it's going to add it for all the existing records; it's going to be for new records. And then they, let's see, they also update the users in batches to then set the default value, and then they will reset the statement timeout because it looks like they are...yeah, because initially, they change it, so they're resetting it to the original value. And then, they set the column's NOT NULL constraint. I know I just said a lot of things reading from their README. But they have a good example here that kind of highlights that this is how they rewrite it.
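Written out by hand, the rewrite Steph is describing looks roughly like the following. This is an illustrative sketch of the pattern, not the gem's actual output; the table, column, timeout value, and Rails version are made up.

```ruby
# Sketch of the "safe add_column with a default and NOT NULL" pattern: short lock
# timeout, add the column with no value for existing rows, set the default for new
# rows, backfill in batches, then add the constraint. Names and numbers are
# illustrative only.
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  disable_ddl_transaction! # so the backfill isn't wrapped in one giant transaction

  def up
    execute "SET lock_timeout TO '5s'"

    add_column :users, :admin, :boolean
    change_column_default :users, :admin, false # applies to new rows only

    # Backfill existing rows in batches rather than one table-wide UPDATE.
    User.unscoped.in_batches(of: 10_000) { |batch| batch.update_all(admin: false) }

    execute "RESET lock_timeout"
    change_column_null :users, :admin, false
  end

  def down
    remove_column :users, :admin
  end
end
```

Referencing the app's `User` model keeps the sketch short; in a real migration, a throwaway model class defined inside the migration is the safer choice, since the app model can change or disappear later.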
So I do find that more reassuring as long as I can then see how it was rewritten, and then I can validate it and confirm it with what I think is appropriate. Then I still have full control. Then it's more of a hey, we rewrote this thing for you. Feel free to review it and then change it as you see fit. As long as I have that final authority, then that makes me feel better about this. CHRIS: Got it. That makes sense. And the thing that you're describing, I think, is a semantically different thing than the first migration where it's like, do this thing. And they're like, well, okay, you could. But if instead, you did X, Y, and Z, then it would go way faster and be way easier. That makes a lot of sense. And it feels like shared knowledge wrapped up into a tool which I'm always a fan of that. STEPH: Yeah, in general, when I think about general strategies for schema changes, there are really three areas that come to mind or three strategies that come to mind. The first one is that we take incremental steps to avoid blocking reads and writes to the table, which then allows you to deploy during business hours or off business hours. That often means just more manual steps that you have to take to make sure that it's safe. And then the other one is scheduling downtime to run a migration. That is a very real option, something that you can do. Or have a fancy setup that utilizes followers for seamless migrations and upgrades. I feel like that's like the three big buckets that you can fit your strategy within. And it just depends on the needs of your application and users as to which one of those you're ready for or which strategy you need to use. What do you think? Are there any other big buckets that I left out of that list? CHRIS: No, I think we covered a bunch there. Hopefully, that was useful. Hopefully, it, I don't know, maybe introduced folks to some new ideas or ways to think about this sort of work. And yeah, with that, shall we wrap up? STEPH: Yeah, I've still got my Wordle to play for the day. So let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeee!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris updates us on his new window manager of choice, Moom, and tells us what's good with it. He's also giving yet another task manager a go: OmniFocus. (Sorry Things.) Steph talks about defining test classes in RSpec and readdresses flaky tests to improve CI build time. Chris is worried about productivity. He's still not coding as much as he'd like to be. Steph lends an ear, and together, they discuss potential ways Chris could gain back a little bit of coding time at work. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Moom (https://apps.apple.com/us/app/moom/id419330170?mt=12) OmniFocus (https://apps.apple.com/us/app/omnifocus-3/id1346190318) Is It Worth the Time? (https://xkcd.com/1205/) Knapsack Pro (https://knapsackpro.com/) Shopify Monolith (https://shopify.engineering/shopify-monolith) Sacrificial Test Classes (https://blog.bitwrangler.com/2016/11/10/sacrificial-test-classes.html) rspecq (https://github.com/skroutz/rspecq) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris, what's new in your world? CHRIS: What's new in my world? Well, hey, Steph. Oh, I have an update on a thing that I think I talked about a while back or at least asked on Twitter. But I've been looking for a window manager for forever. And in that way that I sort of overcorrected a while back, I think where I'm no longer allowed to do anything related to productivity or dev tools. I was just forbidden because it was a time sink. I'm slowly trying to correct back and be like, you know what? I regularly think about how it would be nice to have a better window manager. So previously, I had used Divvy, D-I-V-V-Y, which is fine. It did an okay job, but it just didn't have quite the level of control that I wanted, or maybe I didn't investigate it enough. But it felt like it was lacking. So I did a little bit of research. A bunch of people recommended different things. There was Spectacle; there was Rectangle. There was a whole bunch of other things that I'm forgetting now because I have settled on Moom, M-O-O-M. Those are fun words. STEPH: I feel like you keep bringing interesting words [laughs] because last time, it was Things where you're tracking all the things. And now we have Moom to track the space. All right. CHRIS: If this is my legacy as a podcaster, then I feel like I will have done well just, you know, weird sounds mostly that's what he's going for. But yes, I've been using Moom now for…[laughs] God, it's just ridiculous to say, but here we are. STEPH: [laughs] CHRIS: I've been using it. I've been enjoying it. In particular, the thing that I liked about it...a bunch of the other ones that I looked at were like, oh, we've got all these different configurations. And you can move things any which way, and you can have any number of hotkeys. And I was like, wait, wait, wait, say more right now. You want to take over my global namespace of hotkeys and just clutter it with 19 different things? You know that that is a limited space that I'm working with here. 
And so Moom, somewhat uniquely, at least in the ones that I experienced, was what I would describe as a modal window manager. So much like Vim is modal where you start out in normal mode, and you're moving around and you kind of bounce and search and all of that, and then you enter insert mode. And in insert mode, keys do different things. And then in command mode...it's got all these different modes. And so there are lots of different namespaces for hotkeys. It's one of the things that makes Vim so powerful. Moom is similar in that there's one global activation hotkey. And then, within that, I can have a whole namespace of hotkeys. So like M will put something in the middle of my screen now. F will put something full-screen. And I don't need to remember weird multikey combinations for that. There's just the one to get started, and then I've configured it such that the tab will bounce to a secondary display and sort of rotate through them. M and F and Q and P I've got it physically laid out on the keyboard. So it looks like my screen. Q being on the left side will push something to the left side, P to the right side. And I'm very happy with that. I don't need a lot out of this tool. I don't need very complex management or scripting or any of that, which are very nice features that exist in the other ones. But that combination, the one hotkey to rule them all, and then the sub hotkeys within it, and the ability to mostly move between the screens and then put stuff where I want it is great. I'm very happy. STEPH: I think I've figured it out. So Moom, I think it's a combination of move and zoom, and that's how they got Moom. CHRIS: You're probably right. STEPH: That does sound really nice. I'm a Spectacle fan. And I have enjoyed it and just stuck with it because I haven't felt a need to change from it. And it's really nice where I use my arrow keys for which direction I want to go. So that has been easy for me to recall. But that sounds really nice, all the things that you're describing with Moom. CHRIS: Does spectacle have the like, is it some Command Option Control and then left or right or up or down? Or is it you type something, and then you type left, right, up, down? STEPH: I have to actually touch my keyboard to answer that question because I have the muscle memory, which is an interesting thing that my muscles knows it, but my brain has to really think about it. So I think it's like the Option Command, and then yeah, then use the arrow keys. CHRIS: Gotcha. That's roughly what I had when I was using Divvy previously, but I found just enough of a limitation there. And so Moom has been great as another tool. But I think Spectacle has a lot more features in terms of scripting and other fancier stuff that you can do, which is both super intriguing and, again, sort of the thing that I'm not allowed to do. [laughs] So I went with, like, this tool seems fine and has the one feature that I really want. That said, you brought up Things, which is the to-do list app that I've been looking at. I've been using it for a week now. It's great. I'm enjoying having a more structured way to say, like, here's what I'm doing today. Here's what I'm doing tomorrow. It's been wonderful. But I'm already looking at OmniFocus as a better version. STEPH: [laughs] CHRIS: Because I think there's some stuff that I don't love, and yes, I can hear my own voice in the back of my head that's like, always chasing that next thing. But I haven't actually made the effort to switch over or even tried. 
I've used OmniFocus in the past. But anyway, I'll let you know if I do make additional moves there. STEPH: Yeah, I'm enjoying this journey. Keep me up to date on it. I've heard of OmniFocus, but I know nothing about it. But I feel like I've heard good things. So I like this journey you're going on where you just keep switching and trying new things. That's fun for me [laughs], and there's chasing productivity. So I'm into it; I'm here for it. CHRIS: If I just invest enough hours to save a handful of minutes down the road, then I will have...oh no, wait, that's not how this goes. There's, of course, an xkcd about this which we can include in the show notes. But I'm trying to be very intentional with it. I waited for many years before I allowed myself to reinvestigate the world of to-do lists. And I'm hopefully going to keep it to just a couple of weeks of nonsense and then back to a few years of stable. That's the dream. But yeah, that's some of the smaller things that are up in my world. I have another topic that I want to chat about. But I'd love to hear what's new in your world? STEPH: Yeah, I have some interesting bits that I can talk about with the project that I'm working on. But more concretely, I have something that's been on my mind that I don't think that I've talked about here on the show, but I think would be fun to talk about because I just happened to run into it this week while working on some code. And it's the idea of defining test classes in RSpec so as you are testing part of your code, but then you want to create just like a fake class, something that you can use as a substitute for real application code. And so it's a really nice way that then you can have this replica behavior, but then maybe it's just one particular method or some behavior that you need to use in the class but then doesn't actually go to the real code. That's wonderful. That's great. One thing that I've learned with RSpec is what happens when you are introducing a test class. So let's say you have your RSpec describe and then either a string or the name of a class, and then you have a block, so do, and then within that block is where you write your tests. If you create a temporary class, say, like I have my class TestClass, and then I have some behavior, that gets defined in the global namespace. It's not scoped to that particular RSpec example. And the reason for that is that it's not specific to RSpec. RSpec is not the one that's doing this; it's actually Ruby behavior. So for Ruby, when you're defining within a block like that, if you're defining a constant, if you're defining another class inside of a block, it's going to use the outer namespace as its namespace. So even if you had a top-level class that you were defining, if you define that class inside a block, and then inside of that block you define a constant, that constant is then defined in the Object namespace instead of within that particular class that you have written. And so that's why RSpec has this behavior. Because someone brought up a really great question about this on RSpec::Core asking about it, and they're like, yeah, that's actually how Ruby works. And so we're not going to change RSpec's behavior since that is how Ruby has decided to handle this. And the part where this becomes important is when you define a test class within an RSpec example.
While it may be unlikely that someone is going to use that exact same name for their test class that they're going to create in their RSpec example, if they were to use that same name, then you're going to have a collision between the two. One of them's going to win, and you're probably going to end up with some really weird test failures because it's going to get confusing as to which class is being used, and they may not match up with each other. So one way around this, and this is going to be one of the rare times that I suggest this, but let. Let is scoped to an RSpec example. And so you could define a class inside of a let, and then that will scope it to the example. There are probably some other approaches as well, but that's the one that I'm most familiar with to ensure that when you define that class or constant, it's not getting defined in the global namespace and ensuring that none of the other tests have access to it. CHRIS: Well, this is certainly interesting. I'm pretty sure I've been operating under the opposite assumption for the entirety of my career. This is good to know. I feel like I probably have had tests that failed because of this. And then I learned this truth, and then I subsequently forgot it. I don't know if you know this, but if you define a method within just a helper method that you extract in RSpec, are those also on the global namespace? I don't define classes in RSpec blocks that often. It's pretty rare. Like if I have a controller concern sort of thing that I want to test, I might say random controller and inject the thing there or some other abstracted piece. That is the only case I can think of where I have a fake model or a fake controller or something like that for test purposes. But it doesn't come up that often. I do extract a heck ton of local helper methods. And I'm wondering now, are those all in the shared global namespace? STEPH: I'm pretty sure they're not. And I'm getting on the edges of my knowledge here, but I think it has to do with the fact of when you're defining a constant. So if you're defining a class versus an actual constant, that will get into the global namespace because it's using the outer scoping. But in my experience, I'm pretty sure that's not true for the method just because I remember one time I did some funky stuff with RSpec. And I remember seeing that I couldn't access those methods from another example. CHRIS: I like the honesty. And you're like, to be clear, I was doing something weird, but I learned that day. Okay, that's good because at least that part maps to my understanding. So methods may be safe, but classes get shared. Very interesting. STEPH: And it's something that I rarely think about or had worried about just because if I'm defining a fake test class, I often will put it somewhere that's intended to be more global. So I'll stuff it somewhere in like spec support. So then other people can see, hey, I've already mimicked this behavior. So if you need to use the same thing, just go ahead and use this. It's not often that I am adding that class directly to the RSpec example group. So I think I've been fortunate where I haven't actually run into that conflict for that reason. But this came up while giving an RSpec course. And while we were just in a very small, tiny codebase and replicating some examples, someone in the class was like, "Hey, by the way, do you know that that's in the global namespace?" And I was like, "No, friend. Tell me more." 
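As a small illustration of what Steph is describing (the class and method names here are made up): a class defined with the `class` keyword inside a describe block still lands on the top-level namespace, while an anonymous class inside a `let` stays local to the example group.

```ruby
RSpec.describe "WidgetPresenter" do
  # This defines ::FakeWidget globally, visible to every other spec file,
  # even though it's written inside the describe block.
  class FakeWidget
    def name
      "stub widget"
    end
  end
end

RSpec.describe "WidgetPresenter" do
  # An anonymous class assigned via `let` is scoped to this example group instead.
  let(:fake_widget_class) do
    Class.new do
      def name
        "stub widget"
      end
    end
  end

  it "uses the fake without polluting the global namespace" do
    expect(fake_widget_class.new.name).to eq("stub widget")
  end

  # stub_const("FakeWidget", Class.new) is another per-example option built into RSpec.
end
```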
So thanks to that person, they're the ones that actually enlightened me about how it's going into that namespace and how it can actually pollute your testing namespace. There's a really good article written by Ken Mayer, and we'll be sure to include a link in the show notes; it talks about it and also provides the let example as a way to work around this. And it also links to the GitHub discussion on RSpec::Core, where they talk about this behavior and why things are the way that they are. Circling back to some of the more general project-y things that I alluded to earlier, I've shared a bit about the project that I'm working on. But just to recap it, it is focused on helping a very large team that has a large number of tests, around 85,000. And they are looking to address flaky tests that they have and overall really improve their CI build time. So right now, it takes about 30 minutes for the build to take place. But they also have flaky tests, and then that slows things down. And so, the re-verify rate has been painful for them. There's been some really great work that has improved that, particularly, I think we've talked about this before, where they're re-verifying certain flaky tests, which isn't great because they're still flaky tests, but at least they're not preventing people from moving forward and shipping code. But some of the bigger stuff that is just on my mind is when you have a very large team and a very large application (by large team, I'm talking about 100 developers), and they are all contributing to this codebase. And there are around 85,000 tests, and that has grown substantially in the last 12 months. And so, if you think about the trajectory of the addition of those tests, it is just going to continue to grow. So there's a concern there of even if we address flaky tests and we improve things, there's an architecture concern of how do we really reduce the CI build time? And so there's that aspect, and then there's also the aspect of, well, how do we still work to improve the tests and the codebase as we go across all of these disparate teams? And right now, there is a bit of a culture where engineers don't feel empowered to necessarily address all of the flaky tests or things that they run into. And so there is a bit of a mindset of I'm stuck on this, or this test failed, or it's flaky, or I don't understand it. So I'm just going to mute it, or I'm going to hand it off to someone else to work on it. So there are three big areas that are on my mind. The first one is architecture. You can throw architecture at it. There's also the code quality that's a concern. And then how do you improve the code quality fast enough when you've got 100 other developers that are also contributing to it at the same time? And then individual IC empowerment, where people feel like, hey, I ran into a slow test or a flaky test, and I feel like I can triage this, and I can make changes. For the architecture piece, we're still in the infancy stages of how to approach this and the strategy that we're using. But one of the ideas that has come up is how do we reduce tentpoles? And the tentpole is like when you're running your tests and, let's say, it's parallelized across all of the various tests. But there is one process that takes like 20 minutes, and then the other process is completed in 5 minutes as a drastic example.
And overall, you could have reduced your time if you had managed to split that 20-minute process across all the other workers who are then available for that work. So there are some tentpoles that are taking place. And that could be one first step in reducing the CI build time. There are also discussions around how to scale horizontally. Right now, we don't think that's something we can do with the service that we're using to run the tests. But maybe it's something we need to look into manually: how do we build a queue of all these tests, and not where we just split tests by file, which is typically how the Parallelize gem does it. But you could actually split up tests within a file, so a particularly large file doesn't necessarily matter anymore. But then building a queue of all these tests so that as each test finishes, a worker can just grab that next test. And then also you can easily scale up and scale down workers. As I'm saying that, that feels big, that's a lot to invest in. But that as an idea is how can we essentially then scale the architecture? So even as we continue to invest in the tests, in the system, and they continue to grow, our architecture can keep up with it. CHRIS: That last bit there is super interesting to me. It's something that I've looked into and haven't pursued yet. We're currently running on CircleCI with our test suite. And I don't even know that we pushed on parallelization because we're early enough on that. And we turned off bcrypt recently, which super-duper helps with the speed up. But overall, the test suite time is fine, is where I would put it. It had crept up, though, to a place where it was starting to be painful, is how I would describe it. And I think it's very easy for that to just continue growing and suddenly, it's 20 and 25 minutes. And then, depending on your merge strategy and all of that, it can be all the more complicated, and this gets in the way of deploys. And so, I think it is a super important thing to keep an eye on. I know Charity Majors pushes really hard for 15 minutes from merge to deploy to production. And so if your CI suite takes 25 minutes, then already you're stuck. As an aside, I just once more want to say out into the ether, CircleCI or any other CI platform, if you would allow me to say yes, we've already tested this Git hash, this Git SHA, or the working tree, ideally, because that's also deterministic, I would love that feature. I would love to not have to rebuild the same code when it gets merged into main, just saying once more out into the world. Also, GitHub, if you want to put me on the merge queue beta, I would love that if anybody out there is listening. [laughs] STEPH: I like how this has become a special requests hotline for all the things [laughs] that you're hoping to get a part of or features you'd like to see added. CHRIS: Hello, internet. I have some requests. STEPH: [laughs] CHRIS: I would love to see those things, but in the world where those don't exist, the particular thing that you're talking about, sort of a test queue, is something that I've seen. So Knapsack is a...what's the word? It's a tool; it's a service. It's a combination of things. But it does that essentially where it starts up a local build agent. And then it basically says like, all right, give me all of the tests that you need to run, and then I will feed them back to each of the individual agents, where there's one agent running per parallelized process. And so say you've got five of them.
The first one says, "Hey, give me a test," and runs it. And the second one says, "Give me a test," and et cetera. And so, the queue manager on the other side is in charge of that orchestration. And it means that they basically all finish in identical time, with one being an outlier, whichever one happens to be the longest. But it's only going to be however long your longest test is, and that's basically the outlier, versus what you're describing of like, well, if we split it by file, we can end up with more naive things where there's a bunch of feature specs on one of them, and it skews by two minutes. We obviously don't want that. So Knapsack, in particular, is a tool that I've looked at, but generally, I'm very interested in that as a solution to how do we maximally take advantage of parallelization there? STEPH: Interesting. I have not heard of Knapsack. There is one that sounds similar. It's called RSpec Queue. And it does some really interesting work where it will split individual tests, so it won't do it by file. It will also look at historical data to then try to be intelligent about how it's going to split it and find the longer-running tests. And I believe it uses Redis to then keep track of the tests that have run and things that still need to be run. That is a gem that the team is looking into using as well. I don't know how that works or if it can integrate with the current platform, as we're using TeamCity to run tests. I don't know if that's something that can integrate with TeamCity, if it's a replacement. I don't have all of the knowledge about RSpec Queue yet. But it seems to do a number of the things that we're interested in. So even if we can't use the gem, then maybe it's something that we can still imitate. CHRIS: The other thing that I'm surprised we haven't said yet is this is one of the places where people would often reach for microservices. I feel like we have to have the microservice conversation at this moment. Microservices can actually be a great solution to organizational problems. As a team scales, it does become really hard to manage a large group of developers. And so microservices introduce a very fixed boundary that then draws nice lines that you can have around things. And so, the individual build time for a portion of your application can be much more manageable by virtue of that. But it has this huge cost of technical complexity and overhead and et cetera, et cetera, all of the reasons that we may not want to go that route. And so interestingly, I was just looking at Shopify's Deconstructing the Monolith blog post, which I think at this point, they've skewed a little bit more into the microservices. Shopify is huge, one of the largest Rails apps out there. And so looking at them and being like, oh, what are they doing? It's interesting to sort of plot a course and see how long they waited before they even started thinking about the much deeper things and even exploring microservices. But in this blog post, they talk about a different approach where they stuck with sort of a monolith. But then they started to introduce Rails engines and clear encapsulation within the large codebase such that then you can actually start to say, well, we don't need to run all the tests every time because if we're making a change within this section of the application, then we just need to run those tests. I've also heard of organizations having some logic that can determine, based on the code change, which associated test files we should run.
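As a very rough illustration of the queue idea described above (Knapsack-style tooling and RSpec Queue do far more than this), a coordinator could push every spec onto a shared queue and let each parallel worker pop the next one; the Redis key and environment variables here are hypothetical:

```ruby
# A very rough sketch of the test queue idea, not a real tool: a coordinator
# pushes every spec file onto a shared Redis list, and each parallel worker
# pops the next one until the queue is empty, so fast workers naturally pick
# up more of the remaining work. Key names and env vars are hypothetical.
require "redis"

QUEUE_KEY = "ci:rspec:queue".freeze

redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))

if ENV["CI_ROLE"] == "coordinator"
  # Enqueue the work once per build. A smarter version would split by
  # individual example and order by historical runtime, as described above.
  redis.del(QUEUE_KEY)
  redis.rpush(QUEUE_KEY, Dir.glob("spec/**/*_spec.rb"))
else
  failed = false
  # Each worker keeps popping specs until the queue is drained.
  while (spec_file = redis.lpop(QUEUE_KEY))
    failed = true unless system("bundle", "exec", "rspec", spec_file)
  end
  exit(failed ? 1 : 0)
end
```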
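And the change-based selection Chris mentions, in its most naive form, might look something like this sketch; the base branch, path conventions, and fallback behavior are all assumptions, and real tools are considerably more careful about dependencies between files:

```ruby
# A naive sketch of change-based test selection: map each changed file on the
# branch to its conventionally named spec and run only those, falling back to
# the full suite whenever a change can't be mapped confidently. Paths and the
# base branch are assumptions about a typical Rails setup.
changed_files = `git diff --name-only origin/main...HEAD`.split("\n")

specs = changed_files.flat_map do |path|
  if path.match?(%r{\Aspec/.*_spec\.rb\z})
    [path]                                   # a spec changed: run it directly
  elsif (match = path.match(%r{\Aapp/(.*)\.rb\z}))
    candidate = "spec/#{match[1]}_spec.rb"   # app/models/user.rb -> spec/models/user_spec.rb
    File.exist?(candidate) ? [candidate] : [:unknown]
  else
    [:unknown]                               # config, gems, JS, etc.: can't map safely
  end
end

if specs.empty? || specs.include?(:unknown)
  exec("bundle", "exec", "rspec")            # when in doubt, run everything
else
  exec("bundle", "exec", "rspec", *specs.uniq)
end
```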
I'm scared of that is how I would describe it. I want to trust my test suite. I want to be able to deploy on a Friday and say if tests are green, it's going out to production. That's great. And I worry about that sort of thing. That's hard to get right. That feels like caching, right? And that's one of those things that we historically get wrong a lot. But nonetheless, that is an approach that large organizations I've heard having good success with. So some way to determine what's the affected code and what tests do we need to rerun and et cetera. And that can really drastically reduce down the scope of each CI build. But those are some larger things that I have not had to reach for on any of the applications I've worked with. I've taken different approaches, different ways to reduce the time or otherwise Parallelizer et cetera. But it's interesting for when you get to a certain scale. STEPH: Yeah, it's funny that you bring up that idea because that came up in conversation with some of the other developers as well, was the idea of, like, what if we could just not run all the tests? You changed one file, and you don't need to run everything. And I immediately was like, that sounds very cool and super hard to be able to get right. And a lot of this code is extremely coupled, which then moves to the code quality area. So I suspect a lot of the test times could be improved by creating smaller objects because right now, a lot of the tests will load the entire world because they have to. They have to test everything. And so that is creating a ton of data, and then taking a long time to run versus if we were able to split out that code into smaller objects and test in unit tests, then that would also help speed up. But that's also hard to do. Where do you look first? We do have some great data, thanks to RSpec. RSpec is letting us know how long each test file takes to run, and then we are capturing that data. So I can go look at which files and say, oh, this file takes 10 minutes to run. Let's look at that file first versus some of the other ones that are performing better. But that is a battle that will take a long time to win. And it's something that takes consistency and then also encouraging others to join that battle. So while it's very important, it doesn't address the concern of tests growing rapidly and then being able to support that. Something that you said in a previous episode also was on my mind in talking about building processes in a way that encouraged people that they can make small, quick changes. And I think that's really important. So if we can build out the architecture to help scale this so then the tests were running in say 15 minutes, then if someone saw a test and they wanted to make a small refactor, they saw a factory.create, and they're like, oh, that could be a FactoryBot.build_stubbed instead and issue that into a pull request or change request and get that merged. I don't know if people feel as comfortable doing that right now because it takes them 30 minutes or longer to run the test. But that idea of how do we get a structure in place where people can make tiny, little improvements and do that as a whole, as a team, to then work on the code quality concerns? CHRIS: That last little bit is so interesting where you're saying, like, oh, we have a FactoryBot.create, FactoryBot.Build, but it has the overhead of having to go through the 30-minute test suite. But coming back to the thing we were talking about before, what if we didn't have to run all the tests? 
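For what it's worth, the small factory refactor Steph describes usually looks something like the following; the model, factory, and spec here are hypothetical, and build_stubbed is only a safe swap when the example truly doesn't need a persisted record:

```ruby
# The kind of tiny, low-risk improvement Steph mentions: when a spec doesn't
# need a row in the database, FactoryBot.build_stubbed skips the INSERTs and
# callbacks that FactoryBot.create would trigger. Model and factory names
# here are hypothetical.
require "rails_helper"

RSpec.describe User, type: :model do
  describe "#full_name" do
    it "joins first and last name" do
      # Before: persists a record, which is comparatively slow.
      # user = FactoryBot.create(:user, first_name: "Ada", last_name: "Lovelace")

      # After: an in-memory record with a stubbed id, no database writes.
      user = FactoryBot.build_stubbed(:user, first_name: "Ada", last_name: "Lovelace")

      expect(user.full_name).to eq("Ada Lovelace")
    end
  end
end
```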
Although I find it very hard to tell, given a code change in actual production code, what tests do I need to run? When I'm just changing a test, I'm pretty sure I know which test I need to run in order to determine if that test still runs correctly. So that feels is there an optimization that can happen there? Which is I've only made test changes; therefore only run the changed tests. And then that's an encouragement to say, like; this is a part of our codebase that we are trying to improve on. Let's optimize the iteration speed there. You'd have to figure out how to write that. And so it's probably much like my productivity adventures, maybe not a good investment. Although given that this is such an organizational concern, maybe that is the thing that's worth spending an afternoon on and seeing if it could happen. Because if you can speed that process up, get more [inaudible 23:46] and more iteration in fixing the tests, that feels like it could be a win. STEPH: I think that's a really good idea. I think we could certainly tell that if a file's changed, that it's only a test file that has changed. And then I've heard very good things from the other developers that TeamCity has a wonderful API to work with. And so there's a way that we could then tell TeamCity to say, hey,...or it may not even be a TeamCity command. It may just be somewhere in the universe we have to say, "Hey, RSpec, only run this test," or "TeamCity, we're only going to feed you this one RSpec test to run, so user agent but only run this particular test." So I really like that idea. I think that's really intriguing. And I'll bring it up with the team because that would be a huge win, especially as Joël and I are really focused more on tests. That would just improve our lives. So selfishly, I'm excited about that idea because we are touching less of the application code and more focused on improving the test at this point. CHRIS: I mean, if right now you're getting, say, 5 or 10 pull requests through a day which frankly feels like a high bar on this, if suddenly that's 10 to 20, that's material right there. STEPH: Yeah, I don't know how large of an impact it would have for the rest of the team because I don't know how often they're only making changes to a test file, but it still feels like a nice optimization to have. Cool. Well, thanks. I appreciate that idea. CHRIS: My pleasure. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. 
Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: What else is going on in my world? I continue to not code a ton which is interesting and probably makes sense for right now. But to share a small anecdote from this week, we had retro, and I ended up attending retro ever so slightly late. I was doing a hiring interview, which is super exciting. Again, for anyone that's out there, we are hiring at Sagewell Financial. And I would love to chat with you if that sounds interesting. But so I was having a wonderful hiring conversation that ran a little bit long. So I was a little bit late to retro, and I arrived, say like eight minutes in, and someone was expressing a concern. And the concern was, I very sincerely know this to be true, but they were saying in the most positive way. But they were like, "It'd be great if Chris could code more," and not in the judgmental like, Chris, why are you not getting as much done? Not in that way at all, very much in the it would be great if Chris had more time, if there wasn't as much pulling my attention in different directions. But then it kind of went into this interesting direction. So we then go back through and address the concerns and talk as a group about how we resolve them. But this one was like, my name was in the concern, again, in a very positive way, in a very supportive way. And we had a wonderful conversation, and there were really great ideas that were passed around. But man, did I feel weird having my name in a retro item. [laughs] STEPH: So one thing I've learned is that you do a really good job when you are giving presentations and being in the spotlight. But I don't think you actually love it. You love sharing content and things that you have learned. But I could see how being a focal point, especially if there's a concern or something that could have a negative connotation, that would feel squeamish. It would make me feel squeamish. CHRIS: I hadn't thought about it in that way. But as you say it, also, this conversation is a meta version of that. Like, let's talk about me talking about me. I don't want to be the center of attention. But I love technology or process. I love talking about the work. That's great. And so I'm happy to do that. I'm happy to stand in front of a room and talk about it. But yeah, when it's about me, that's weird. And so now I'm going to move...well, no, I'm not going to move on [laughs] because this is the topic right now. But so there's a bunch of things that we have been trying to introduce. And I think this is a useful part of the conversation more broadly and less about me. So one of the things that I think I mentioned in a previous episode was the introduction of point-dev, which is each week, we rotate through a person. And that person is in charge of triaging the errors, making sure that nothing is stuck in Sidekiq, responding to any support requests, et cetera, et cetera. But they're meant to be the frontline such that everyone else can be heads down and really focus on the work. And what was interesting of the three developers that are working on the project, I am point-dev this week. So I was like, yes, that's awesome this week because I'm the person on the frontline. That has not helped me, but in the future, it will. And then one of the other developers mentioned that they feel like it's really useful but also feel like it's been noisy. And we realized the previous week was their week on point-dev. But the other developer was like, "Yeah, it's been great. 
I haven't had to think about anything." And so they have been off of that rotation for two weeks now. They'll be taking it over next week. But it is doing exactly its job of providing that attention coverage so that they can keep their focus on the code, and that's really wonderful. So I'll be honest, when we started talking about it, there was a tiny voice in my head that was like, is this a failure mode? Should we be dealing with the noise rather than having a process to address it in the moment? Should we be dealing with the root cause rather than the symptoms? And I still think that's a good point of view. But we found so much value from this. And as I've mentioned it, many people are like, oh yeah, we have that. It's great. I've heard enough positive things. So I've backed away from that. But there was a voice in my head that was like, are we failing right now? But yeah, so point-dev has been really wonderful. And next week, I will have to...well, frankly, the next two weeks, I'm off of point-dev appointments, so I'm very excited about that. I've been doing some of the product management or sort of the tech side of the product management and helping to triage cards and make sure that there's very clear work lined up for the engineering team when they're ready to do that. I'm trying to back away from that just a little bit. And one of the things that we did there was introduce an inbox column in our Trello board. You know how I love a good inbox. You know how I love to get to inbox zero. But that is a good way for me, for anyone now in the organization, which I don't want everyone to have to learn our processes, but just saying, "This is the place that you put requests, and we will deal with them. I assure you of that." It has been great because that means I don't need to be quite as responsive in Slack. I can just gently redirect people, "Hey, if you don't mind, please put this in Trello, in the inbox column, and that'll be great." That thing, though, that gentle pushback in Slack is one of the things that I've struggled with. And one of the more personal aspects of the conversation that happened in retro was me being, like, if we're being honest, I tried to do that. But it's not my favorite thing to do in the world. Whenever someone asks me something, I want to be helpful. I don't want to seem rude or brisk or like I'm too busy for you, et cetera, et cetera. So I will often respond to the question or do the thing that they're asking and then say, "In the future, if you could go to this other place." And ideally, I'm slowly moving forward and being like, "No, no, no, please go to the other place. We've talked about this a few times." But it is an interesting example of one of the specific aspects of my personality coming through in this. But that introduction of an inbox has been great. Love me a good inbox, as I said. And then, more generally, we just tried to talk through what are the things that I'm doing? Do I need to own all of those uniquely? And for some of them, the answer we decided was yes, but for some of them, we decided no. And we started to sort of distribute the work there or some of the meetings or different aspects of it. And so overall, it was a really great conversation but also very weird for me. STEPH: Yeah, because then you wonder, am I not doing the right thing? Am I not spending my time the right way? But then hopefully, that meeting helped reinforce that yes, you are spending your time the right way and that you're doing a lot of productive things.
There are just too many productive things for you to do, and so you have to prioritize those aggressively. I like all the things that you just highlighted. There's one in particular, the last one that you mentioned about finding things that you can hand off to others. And I love that for a couple of reasons. It came up in a recent conversation that I was having with some other thoughtbot developers around when someone's on a project, typically someone just falls into being the point person. They just happen to be the person that the client talks to and ask questions and goes through the most. And that's something that is okay. But we want to make sure that that's not a bad thing, that everybody is treated equally, that everybody is given equal opportunities and room to grow. And so, in my mind, whenever someone is that point person, or you have fallen into that role, it is your job to then pull other people up. So if you have been given the responsibility of running a particular meeting each week, then go ahead and do it once or twice, so you can demo it and show it to someone else as to how you do this. But then tag somebody else and say, "Hey, I'm going to let you or ask you to run this next time." So then that person can experience it. They can demo their style, and then it continues on to have more people. So I really like that you are highlighting it's not just beneficial for you to then distribute those tasks, but it's empowering for everybody else on the team as well. I'm curious, so what was the final outcome? It sounds like there are some really good things in place, and you're transitioning, handing some things off. But I can't imagine that things have gotten...all of your priorities are still there. So do you think you'll actually code more, or what's the outcome for next week? CHRIS: Short term, maybe probably not, if we're being honest, but trending in that direction. So one of the things that's going on right now is hiring. That is just an activity that takes a lot of time. And I care a lot about doing that well, both for the organization and then for individuals on the other side. I want to be respectful of their time and communicate in reasonable timelines and not leave people without an answer or follow up or those sorts of things. It probably makes sense for that to sit with me as the starting contact. And then from there, folks that are continuing on in our hiring process they're going to talk to many other members of the team, and that won't just be me. But there are a lot of first conversations that I'm having. And so right now, my schedule has a bunch of that, which is fine and good. And that will hopefully, at some point, we'll hire some great people. And then we'll be on the other side of that. And that piece of the work that I have right now goes away. Some of the other outcomes that we named there were a couple of action items. And so I think those will help, but they're sort of we got to work towards that. One is transitioning a meeting, but it's a biweekly meeting. And I'm not going to just not attend the next one. So it'll be me and one of the other developers attending to transition ownership of that meeting moving forward. And then from there, so like, two weeks from now, I will not have that consideration on my calendar. And that's like one 30-minute block that I get back or, depending on how you think about it, one block that that 30-minute broke up. I do want to touch back just on something that you're saying there. 
I think you're being very kind to me in saying like, no, but you've got so many things, and so it's hard to do that. I think that's true, but that's kind of the work overall, and my version of that is one thing. But everyone sort of has, as a team, we have a version of like, how are we being most productive? Are we making sure we're doing the most important things? And so it was interesting in the moment, but I think it was a very good conversation. And I want to make sure that both we as a team and then me as an individual, wherever that happens to be the case, are open to these sort of constructive things. Like, frankly, to do the work to figure out how to get work off my plate that hasn't felt like the most important thing. It felt like close to the most important thing, but then there were all the other things that I had to do. So I wasn't doing the work to figure out how to not do the work. It is a complicated sentence that I just said. But this was a case where retro, I think, very usefully highlighted that this was a good thing for us collectively to put effort into such that we can be more productive moving forward. It happened to be slightly more focused on me rather than the entirety of the team. But broadly, that kind of thinking is why I'm a huge fan of retro. I think it's a great place to take a step back think about how we're doing the work rather than just being in the work day-to-day. STEPH: So if I'm internalizing what you said correctly, let me know if not, but it sounds like you're in one of those places, and I've witnessed this with other people and myself where someone is overwhelmed. They have a lot to do, and they're very focused in that grind and in that moment of doing all the things that they have to do. And it's very hard to then say, "I'm in the weeds right now. And then I also have to figure out how to get out of the weeds." And that's a very different skill and mental space to be able to do that. Because often, when you're just in that mode, all you can focus on is a bit on survival at that time. And then it may take other people to notice to say, "Hey, you're in the weeds. We need to figure out a way to help you not live there and to find ways to distribute some of the work." Does that sound like a fair assessment? Because I think I say all that because I've just seen people in that position. And then they think back, like, oh, I should have offloaded stuff earlier. And it's like, yeah, true, totally. And it often takes a retro or someone else coming to you and saying, "Hey, I've noticed...I looked at your calendar today; how can I help?" [laughs] CHRIS: I think that's probably the right calibration. And mostly, my emphasis was just I want to make sure that broadly, any team that I'm on has the space for this sort of conversation. And that thing that you're saying exactly that phrasing of like, "Hey, I saw your calendar. How are you doing? How's that going, though? Are you feeling okay? [laughs] You can't sleep and whatnot." That can be a really useful thing to have and to have organizational norms about what are our expectations of how many meetings someone should have in a week. And where do we start to think about different things? You did use the phrase overwhelmed. I want to say that I'm like 101% whelmed. So I'm just ever so slightly overwhelmed, but it is like I'm in the weeds. I need to figure out how to clear some of the weeds so that then I can get out of it. And it was a great conversation that came from that. STEPH: That's awesome. 
I'm glad you got a good team that, frankly, felt comfortable bringing it up, and then that you could lean on them for ways to talk about how you could code some more and talk about priorities and where you want to focus your time. CHRIS: It will be an interesting thing. As the team grows, I don't expect this to get easier. We talked about this a number of weeks back. And I think for a while; hopefully, we clear a little bit of dust here, and then I get back to being a little bit more on the code, and that's going to happen for a while. But as I think about the longer sort of the future of the company, this is something I'm going to have to revisit a handful of times. And it's a really interesting question that I'm still struggling with internally. And where do I want to be versus what will be needed and whatnot? So it'll be interesting to see how it evolves. But for now, I think I can gain back a little bit of coding time, a little bit of maker time versus manager time, as Paul Graham's essay goes. And yeah, I think that'll be good. STEPH: Yeah, I like how you're already looking forward to the fact that it will probably fluctuate because, yeah, right now, you are sort of paying a tax. You are building up to then where you can have more people on the team. And then that may give you back some of your time where then you can code because you can outsource some of the work to them. But then, as the team grows, so are other responsibilities. And traditionally, being in a CTO role and most CTOs I know will code here and there because they want to, and they enjoy it, but it is not their full-time job. So I think you're really wise to have already noticed that and start thinking about how that's going to trend in the future. And it sounds like you might need to figure out how to throw some architecture at it. So then you can scale horizontally, and then you can just have more time to do all the things. Yeah, that's right. [laughs] CHRIS: You're suggesting microservices, right? That's how my job becomes easy? STEPH: Yeah. Well, I'm thinking more like RSpec Queue, but we'll have RSpec Chris or some version of that. CHRIS: Chris Queue. STEPH: Chris Queue. [laughs] CHRIS: And then I just paralyze my human, and then it'll be great. STEPH: Yeah, that's always worked out well in the movies. Whenever somebody clones themselves, that goes super well. CHRIS: Multiplicity is a fantastic piece of cinema, and I stand by that. STEPH: I haven't seen it, but I feel like it doesn't end well for the main character. CHRIS: I feel like every time I mention a movie, you haven't seen it. I feel like we need to do a movie marathon at some point just to catch up so that we've got shared analogies. But yeah, it's a fun movie. It's fine. It turns out fine in the end. But there are some humorous adventures that happen in the middle. Cloning maybe [laughs] isn't the most direct option to solve productivity problems. STEPH: [laughs] Yeah, I think I've got Labyrinth, Hackers, and Multiplicity now on the watch list. And I appreciate the fact that you know that I'm not likely to watch them, although out of the three, Hackers will probably happen. CHRIS: All right, what if I were to get a bunch of Pop-Tarts, non-frosted? STEPH: Ooh. CHRIS: Does that change -- STEPH: Wait, are you going to send them to me? Because if you just have them, that's no good. [laughter] CHRIS: Eat Pop-Tarts on a video call and be like, "Look at this movie. It's great." [laughter] STEPH: All right, bribery definitely works for me. 
[laughs] CHRIS: Okay, so got it, noted. And based on the nature of the conversation that we have devolved into here, I think we've probably reached a good point. What do you think? Should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeeee!!!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph talks about winter storms and thoughts on name pronunciation features. Chris talks about writing a query to add a new display of data in an admin panel and making a guest appearance on the Svelte Radio Podcast. Finally, Chris decided that his productivity to-do list system was failing him. So he's on the search now for something new. He asks Steph what she uses and if she's happy with it. How do you, dear Listener, keep track of all your stuff in the world? This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Upcase advanced Active Record (https://thoughtbot.com/upcase/advanced-activerecord-querying) Svelte Radio (https://www.svelteradio.com/episodes/chris-toomey-talks-svelte-rails-and-banking) Things (https://culturedcode.com/things/) Todoist (https://todoist.com/) MeetingBar (https://meetingbar.onrender.com/) SavvyCal (https://savvycal.com/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. We have Winter Storm Izzy headed our way. It's arriving in South Carolina early tomorrow morning. So that's kind of exciting just because it's South Carolina. We rarely see snow. In fact, I looked it up because I was curious because I've seen it every now and then. But I looked up the greatest cumulative snowfall in 1 season, and it was 19 inches in the winter of 1971. I was trying to add an old-timey voice there. I don't know if I was successful. CHRIS: Does 1971 deserve a full old-timey voice? STEPH: Apparently. CHRIS: I feel like people from 1971 would be like, "We were just people in the 70s." Like, what do you... STEPH: [laughs] CHRIS: Wait. Nineteen inches, is that what you said? STEPH: 19 inches. That was total for the season. CHRIS: Yeah, we can bang that out in an afternoon up here in the North. So yeah, okay. You were here for the terrible, terrible winter, right? STEPH: Oh, Snowmageddon? Yes. CHRIS: Yeah, that was something, oof. STEPH: I don't remember how many inches. Was it like 100 inches in a month or something wild like that? I've forgotten the facts. CHRIS: I, too forget the facts. I remember the anecdotal piece of data, the anecdata as it were where we shoveled our driveway, and then another storm came, we shoveled our driveway. And then finally, I was living in an apartment, and it was time to shovel the driveway again. But the pile of snow on the lawn was too big. So we had to shovel the pile of snow further up the lawn to make room for the snow that we were shoveling out of the driveway. But I also remember that being a really nice bonding moment, and I met more of the people living in the...it was a house that had been converted into six apartments. So I actually met some of the people from the house for the first time. And then we hung out a little bit more in the day. So I actually have weirdly fond memories of that time. But to be clear, that was too much snow. I will officially go on record saying too much snow. No, thank you again. STEPH: It was a lot of snow. I think it broke Boston for a while. 
I remember I don't think I went to...I worked remotely for two weeks because they were just like, "Yeah, don't even try to come into the office. Don't worry about it." So it felt like my first dabbling into understanding quarantine [laughs] except at least with less complicated reasons, just with lots of snow. We also went snowboarding in Charlestown, where we were living. And that was fun because there are some really great hills, and there was so much snow that that was delightful. But I'm not expecting Snowmageddon in South Carolina, although people may act like it and rush out and get their milk and bread. But hopefully, we'll get a couple of inches because that'll be lovely. I don't know that Utah has ever seen the snow. So this will be fun. CHRIS: Oh, that'll definitely be fun. I imagine you've got like even if you do get some amount of accumulation, a day later the sun will just be like, "I'm back. I got this," and clear it up, and you won't have any lingering. The year of Snowmageddon, if I remember correctly, the final pile of snow left in July, the shared one that the city had collected. So you'll probably do better than that turnaround time. [laughs] STEPH: Yeah, it's perfect. It's very ephemeral. It snows, it's beautiful. It's there for a couple of hours, and then poof, it's gone. And then you're back to probably 70-degree weather typically what's here, [laughs] which I have no complaints. There's a reason that I like living here. But in some other news, I have something that I'm really excited about that I want to share. So there's something that you and I work really hard to do correctly, and it's pronouncing someone's name. So whenever there is either a guest on the show, or we are referencing someone, we will often pause, and then we will look for videos. We'll look for an audio clip, something where that person says their name. And then we will do our best to then say it correctly. Although I probably put a Southern twang on a lot of people's names, so sorry about that. But that's really important to say someone's name correctly. And one of thoughtbot's projects is called Hub, which is something that we use internally for all of our project staffing and then also for profiles and team information; there's a new feature that Matheus Richard, another thoughtboter, implemented that I am just so excited about. And now that I have it, I just think I don't know how I lived without this. And I want it everywhere. So Matheus has added the feature where you can upload an audio file with your name pronunciation. So you can go to someone's profile, and you can click on the little audio button and hear them pronounce their name. And then a number of people have taken it a bit further where they will provide, say, the American or English pronunciation of their name. And then they will provide their specific pronunciation; maybe it's Greek, maybe it's Spanish, and it's just phenomenal. And I love it so much. And I can't wait for just more platforms to have something like this. So really big shout out to Matheus Richard for that phenomenal feature. CHRIS: Oh, that is awesome. Yeah, we definitely do pause pretty regularly to go scan through YouTube or try and find an example. And often, people just start into talks, or they'll only say their first name. We're like, oh, okay, keep searching, keep searching. We'll find it. And apologies to anyone whose name we still got wrong regardless of our efforts. 
But it's making this a paramount idea similar to people putting their pronouns in their name. Like, okay, this is a thing that we should get into the habit of because the easier we make this, the more common that we make this. And names absolutely matter, and getting the pronunciation right really matters. And especially if it can be an easier thing, that's really wonderful. I hope Twitter and other platforms just adopt this; just take this entirely and make it easy because it should be. STEPH: That's what I was thinking; if Twitter had this, and then I was thinking if Slack had this, that would be a wonderful place to be able to just see someone's profile because we can see lots of other helpful context about them. So yeah, it's wonderful. I want to hear more people how they pronounce their names. Because I'll always ask somebody, but it would just be really nice to then be able to revisit or check-in before you talk to that person, and then you can just say their name. That would be delightful. CHRIS: I do feel like creating it for my name would be interesting. I actually had someone this week say my name and then say, "Oh, is that how you pronounce it?" And I stopped for a minute, and I was like, "Yes. I'm really intrigued what other options you were considering, though. I would like to spend a minute and just...because I always thought there was really only the one approach, but I would love to know. Let's just explore the space here," but yes. STEPH: [laughs] You ask them, "What else you got? What other variations can I hear?" CHRIS: [laughs] I would like three variations on my desk by tomorrow so that I can understand what I'm missing out on, frankly. There's a theme or an idea that I've seen bouncing around on Twitter now of people saying, "Yeah, I really just want to apply, get hired, work for one day, make this one change to a platform, and immediately put in my resignation." And I could see this like, "All right, I'm just going to go. I'm going to get hired by Twitter. This is it. This is all I'm doing," which really trivializes the amount of effort that would go into it for a platform like Twitter. I can't even imagine what engineering looks like in Twitter and how all the pieces come together. I'm imagining some amount of microservices there, and that's just my guess. But yeah, that idea of just like, this is my drive-by feature. I show up; I work for a week, I quit. And there we go; now we have it. STEPH: Well, we are consultants. Maybe we'll get hired for all these different companies, and that will be our drive-by feature. We'll add it to their boards and be like, "Don't you want this? Don't you need this?" And then they'll say, "Yes." [chuckles] CHRIS: I am intrigued because I can't imagine this hasn't come up in conversations at Twitter. And so, what are the trade-off considerations that they're making, or what are the reasons not to do this? I don't have any good answers there. I'm just asking the question because, for an organization their size, someone must have had this idea. Yeah, I wonder. STEPH: Yeah, there's; also, I'm sure malicious things that then you have to consider as to then how people...because, at the end of the day, it's just an audio file. So it could be anything that you want it to be. So it starts to get complicated when you think about ways that people could abuse a feature. On that peppy note, what's going on in your world? CHRIS: I had a fun bit of coding that I got to do recently, which, more and more, my days don't involve as much coding. 
And so when I have a little bit of time, especially for a nice, self-contained little piece of code that I get to write, that's enjoyable. And so I was writing a query. I wanted to add a new display of data in our admin panel. And I was trying to write a query, and I got to build a nice query object in Ruby, which I always enjoy. That's not a real thing, just in case anyone's hearing that and thinking like, wait, what's a query object? Just a class that takes in a relation and returns a relation but encapsulates more complex query logic. It's one of my favorite ways to extract logic from ActiveRecord models, that sort of thing. So I was building this query object, and specifically, what I wanted to do here is I'm going to simplify down the data model. And I'm going to say that we have users and reservations in the system. This may sound familiar to you, Steph, as your go-to example [laughs] from the past. We have users, and we have reservations in the system. So a user has many reservations. And reservations can be...they have a timestamp or maybe an enum column. But basically, they have the idea of potentially being upcoming, so in the future. And so what I wanted to do was I wanted to find all users in the system who have less than two upcoming reservations. Now, the critical detail here is that zero is a number less than two. So I wanted to know any users that have no upcoming reservations or one upcoming reservation. Those were the two like, technically, that's it. But say it was even less than three, that's fine as well. But I need to account for zero. And so I rolled up my sleeves, started writing the query, and ActiveRecord has some really nice features for this where I can merge different scopes that are on the Reservation model. Reservation.upcoming is a scope that I have on that model that determines if a reservation is upcoming because maybe there's more complex logic there. So that's encapsulated over there. But what I tried initially was users.left_joins(:reservations).group("users.id").having(...) with a count of reservations less than two. So that was what I got to. And thankfully, I wrote a bunch of tests for this, which is one of the wonderful things about extracting the query object. It was very easy to isolate this thing: write a bunch of tests that execute it with given data. And interestingly, I found that it worked properly for users with a bunch of upcoming reservations. They were not returned by the query object, which they shouldn't be, and users with one upcoming reservation. But users with zero upcoming reservations were being filtered out. And that was a surprise. STEPH: Is it because of the way you were joining and looking for where the reservation had to match to a user, so you weren't getting the users who didn't have a reservation? CHRIS: It was related to that. So there's a subtlety to LEFT JOINS. So a JOIN is going to say like, users and reservations. But in that case, if there is a user without reservations, I know they're going to be filtered out of this query. So it's like, oh, I know what to do. LEFT JOINS, I got this. So LEFT JOINS says, "Give me all of the users and then in the query space that I'm building up here, join them to their reservations." So even a user with no reservations is now part of the recordset that is being considered for this query. But when I added the filter of Reservation.upcoming, I tried to merge that in using ActiveRecord's .merge syntax on a query or on a relation, as it were.
That would not work because it turns out when you're using the LEFT JOINS...and as I'm saying this, I'm going to start saying, like, here's definitively what's true. I probably still don't entirely understand this, but trying to do the WHERE clause on the outer query did not work. And I had to move that filtering logic into the LEFT JOINS. So the definition of the JOINS was now I had to actually handwrite that portion of the SQL and say, LEFT OUTER JOIN users on and then, you know, the users.id=reservations.userid and reservations. whatever the logic there for an upcoming reservation. So reservations.completed is null or reservations.date>date.current or whatever logic there. But I had to include that logic in the definition of the LEFT OUTER JOIN, which is not a thing that I think I've done before. So it was part of the definition of the JOIN rather than part of the larger query that we were operating on. STEPH: Yeah, that's interesting. I don't think I would have caught that myself. And luckily, you had the test to then point out to you. CHRIS: Yeah, definitely the tests made me feel much more confident when I eventually narrowed down and started to understand it and was able to make the change in the code. I was also quite happy with the way I was able to structure it. So, suddenly, I had to handwrite a little bit of SQL. And what was nice is many, many, many years ago, I recorded a wonderful course on Upcase with Joe Ferris, CTO of thoughtbot, on Advanced ActiveRecord Querying. And I'm still years later digesting everything that Joe said in that course. It's really an amazing piece of content. But one of the things that I learned is Joe shows a bunch of examples throughout that course of ways that where you need to, you can drop down to raw SQL within an ActiveRecord relation. But you don't need to completely throw it out and write the entire query by hand. You can just say, in this case, all I had to handwrite was the JOINS logic for that LEFT JOINS. But the rest of it was still using normal ActiveRecord query logic. And the .having was scoped on its own, and all of those sort of things. So it was a nice balance of still staying mostly within the ActiveRecord query Builder syntax and then dropping down to a lower level where I needed it. STEPH: I love that you mentioned that video because I have seen it, and it is so good. In fact, I now want to go back and rewatch it since you've mentioned it just because I remember I always learn something every time that I do watch it. On a side note, the way that you represented and described query objects was so lovely. I know you, and I have talked about query objects before because we adore them. But I feel like you just gave a really good mini class and overview of like, this is what a query object is, and this is what it does. And this is why they're great because you can test them. CHRIS: Cool. I'm going to be honest; I have no idea what I said. But I'm glad it was good. [laughs] STEPH: It was. It was really good. If anyone has questions about query objects, that'll be a good reference. CHRIS: Well, thank you so much for the kind words there. And for the ActiveRecord querying trail, really, I was just along for the ride on that one, to be clear. I did write a bunch of notes after the fact, which I've found incredibly useful because the videos are great. But having the notes to be able to reference...past me spent a lot of time trying really hard to understand what Joe had said so that I could write it down. 
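Piecing together what Chris describes, the query object plus the handwritten join condition might look roughly like the sketch below; the class name, the starts_at column, and the exact definition of "upcoming" are guesses rather than his actual code:

```ruby
# A sketch of the query Chris describes: users with fewer than two upcoming
# reservations, where zero must count. The "upcoming" condition lives inside
# the LEFT OUTER JOIN itself; merging Reservation.upcoming in as a WHERE
# clause would silently drop the users whose reservation columns are all
# NULL. Column and class names here are guesses, not Chris's actual code.
class UsersWithFewUpcomingReservationsQuery
  def initialize(relation = User.all)
    @relation = relation
  end

  def call
    upcoming_join = <<~SQL
      LEFT OUTER JOIN reservations
        ON reservations.user_id = users.id
        AND reservations.starts_at > CURRENT_TIMESTAMP
    SQL

    @relation
      .joins(upcoming_join)
      .group("users.id")
      .having("COUNT(reservations.id) < 2")
  end
end

# Usage, e.g. for the admin panel: UsersWithFewUpcomingReservationsQuery.new.call
```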
And I'm very glad that I invested that time and effort so that I can revisit it more easily. But yeah, that was just a fun little bit of code that I got to write and a new thing that I've learned in the world of SQL, which is one of those topics that every little investment of effort I find to be really valuable. The more comfortable I feel, the more that I can express in SQL. It's one of those investments that I'm like, yep, glad I did that. Whereas there are other things like, yeah, I learned years ago how to do X. I've completely forgotten it. It's gone from my head. I'm never going to use it again, or the world has changed. But SQL is one of those topics where I appreciate all of the investment I've put in and always find it valuable to invest a bit more in my knowledge there. STEPH: Yeah, absolutely same. Just to troll Regexs for a little bit, they're powerful, but they're the thing that I will never commit to learning. I refuse to do it. [laughs] I will always look it up when I need to. But Postgres or SQL, on the other hand, is always incredibly valuable. And I'm always happy to learn something new and invest in that area of my skills. CHRIS: Yep, SQL and Postgres are great things. But let's see. In other news, actually, I had the pleasure of joining the Svelte Radio Podcast for an episode this week. They invited me on as a guest. And we got to chat about Svelte, and then I accidentally took the conversation in the direction of inertia as I always do. And then I talked a little bit more about Sagewell, the company that I'm building, and all sorts of things in the world. But that was really fun, and I really enjoyed that. And I believe it will be live by the time this episode goes live. So we will certainly include a link to that episode in our show notes here. STEPH: That's awesome. I haven't listened to the Svelte Podcast before. So I'm excited to hear your episode and all the good things that you said on it. I'm also just less familiar with them. Who runs the Svelte Podcast, and what's the name of the show? CHRIS: The show is called Svelte Radio. It's hosted by Antony, Shawn, and Kevin, who are three Svelters from the community. Svelte is a really interesting group where the Svelte society is, as far as I can tell, a community organization that is seemingly well-supported by the core team and embraced as the natural center point of the community. And then Svelte Radio is an extension of that. And it's a wonderful podcast. Each week, they talk about various things. So there are news episodes, and then they have guests on from time to time. Recently, having Rich Harris on to talk about the future of Svelte, Rich Harris being the creator of Svelte. Interestingly, if you search for Svelte Radio, they are the second Google result because the first Google result is the tutorial docs on how to use Svelte with radio buttons. But then the second one is Svelte Radio, the podcast, [laughs], which is an interesting thing. Good on Svelte's documentation for having such strong SEO. STEPH: I was just thinking there's something delightful about that where the first hit is for documentation that's a very helpful; here's how you use this. That's kind of lovely. Well, that's really cool. I'm really looking forward to hearing more about Svelte and listening to you be on the show. CHRIS: Yeah, they actually had some very kind things to say about The Bike Shed and, frankly, you as well and our co-hosting that we do here. So that was always nice to hear. STEPH: That's very kind of them. 
And it never fails to amaze me how nice podcasters are. Everyone that I've met in this community that's a fellow podcaster they're just all such wonderful, nice, kind people, and I just appreciate the heck out of them. CHRIS: Yeah, podcasts are great. The internet is doing its job; that's my strong belief there. But let's see. In other news, I actually have more of a question here, sort of a question and an observation. My work has started to take a slightly different shape than it has historically. Often, I'm a developer working on a team, picking up something off the top of the Trello board or whatever we're using for project management, working on that thing, pushing it through to acceptance. But all of the work or the vast majority of the work is encapsulated in this one shared planning context. But now, enough of my work is starting to spill out in different directions. Like right now, I'm pushing on hiring. That's a task that largely lives with me that doesn't live on the shared Trello board. Certainly, the rest of the team will be involved at some point. But for now, there's that that's really mine. And there are other pieces of work that are starting to take that shape. So I recognize, or at least I decided that my productivity to-do lists system was failing me. So I'm on the search now for something new, but I'm intrigued. What do you use? Are you happy with what you have for to-do lists? How do you keep track of stuff in the world? STEPH: Oh goodness. I'm now going to overanalyze everything that I do and how I keep track of the things that I do. [chuckles] So currently, I have two things that I used to track, and that is...okay, I'm going to expand. I have three to-do lists that I use to track. [laughs] Todoist is where I add most things of where whatever I just think of, and I want to capture it Todoist is usually where it goes. Because then it's very easy for me to then go back to that list and prioritize or just simply delete stuff. If I haven't gotten to it in a while, I'm like, fine, let it go. Move on. And then the other place I've started using just because it's been helpful in terms of linking to stuff is Basecamp. So we use Basecamp at thoughtbot, and we use it for a number of internal projects. But I have created my own project thanks to some advice from Mike Burns, a fellow thoughtboter, because he created his own and uses that to manage a lot of his to-dos and tasks that he has. And then that way, it's already one-stop shopping since you're in Basecamp a lot throughout the day or at least where you're going to visit some of the tasks that you need to work on. So that has been helpful just because it's very simple and easy to reference. And then calendar, I just live by my calendar. So if something is of the utmost importance...I realize I'm going in this in terms of order of importance. If something is critical, it's on my calendar. That's where it goes. Because I know I have not only put it somewhere that I am guaranteed to see it, but I have carved out time for it too. That's my three-tier system. [laughs] CHRIS: I like it. That sounds great, not overly complicated but plenty going on there. And it sounds like it's working for you, sounds like you're happy with that. STEPH: It has worked really well. I'm still evaluating the Basecamp, but so far, it has been helpful. It does help me separate between fun to-do items which go in Todoist and maybe just some other work stuff. But if it's really work-focused, then it's going in Basecamp right now. 
So there's a little helpful separation there between what's going on in my life versus then things I need to prioritize for work. What are the things that you're currently using, and where are you feeling they're falling down or not being as helpful as you'd like them to be? CHRIS: My current exploration, I'm starting to look for a new to-do list-type things. Specifically, I've been using Trello for a long time for probably a couple of years now. And that was a purposeful choice to move away from some of the more structured systems because I found they weren't providing as much value. I was constantly bouncing between different clients and moving into different systems. And so much of the work was centrally organized there that the little bit of stuff that I had personally to keep track of was easy enough to manage within a Trello board. And then slowly, my Trello board morphed into like 10 Trello boards for different topics. So I have one that's like this is research. These are things that I want to look into. And so I can have sort of a structure and prioritization within that context in my world. And then there's one for fitness and one for cooking. I'm trying to think which else...experiments, as I'm thinking about I want to try this new thing in the world. I have a board for that. So I have a bunch of those that allow me to keep things that aren't as actionable, that are more sort of explorations. But then they each have their own structure. And that I found to be really useful and I think I'll hold on to. But my core to-do board has started failing me, has started being just not quite enough. And then, more so, I wanted a distinct thing for work for a professional context. So I was like, all right, let me go back to the drawing board and see what's out there. And I did a quick scan of Todoist and Things, respectively. And I've settled on Things for right now. It just matches a little bit more to my mental model. Todoist really pushes on the idea of due dates or dates as a singular idea associated with most things. Almost everything should have a date. And I kind of philosophically disagree with that. Whereas Things has this interesting idea of there is the idea of a due date, but it's de-emphasized in their UI because not everything has a due date; most things don't. But Things has a separate idea of a scheduled date or an intent date. Like yeah, I think I'll work on that on Wednesday. It's not due on Wednesday; that's just when I want to work on it. It can have a separate due date. Like, maybe it's due Friday. STEPH: Is the name of the application that you're saying is it Things? Is that the name of it? CHRIS: Yeah, it is. STEPH: I haven't heard this one. You kept saying Things. I was like, wait, is he being vague? But I realized you're being specific. [laughs] CHRIS: It's one of the few things that...yeah, one of the few things that I think is not great about Things. It's from a company Cultured Code, and the application is called Things. And that is all I will say on that topic. Different names maybe would have been better, but they seem to have carved out enough of an attention space. Enough people know of it that if you search for Things and to-do list, it will very quickly pop up. But yeah, that's a pretty ambiguous name. They maybe could have done a different one there. But the design of the application is really nice. It's on my desktop. And now I have it on my phone as well, and they sync between them and all the stuff. So there's never going to be a perfect system. 
I'm certain of that. I've at least talked myself out of trying to build my own because, man, have I fallen into that trap before. Oh goodness, so many times. STEPH: I'm very proud of you. CHRIS: Thank you. I'm trying. But yeah, it'll be interesting to see how it evolves. I continue to struggle with there are these things that come to mind, and I want to capture them during the day. But some of them are just stories I'm telling myself, which would probably be best captured in a journal tool. And then there are notes that I might want to keep on remote work and how people think about that. And so I'm starting to think about Obsidian or a note-taking system for that. And then I've got this Trello board concoction. And now I've got a to-do...and suddenly I'm like, well, that's too many things. And so I'm trying to not overthink it. I'm trying to not underthink it. I'm trying to just find that perfect amount of thinking. That's what I'm aiming for. I'm not sure I'm going to hit it directly, [laughs], but that's what I'm aiming for. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Some of the topics that you mentioned earlier did stand out to me when you were talking about recipes, working out, and some other topics. Those are things for me that I often just put in notes. So I liked the word that you used for stories that you're telling yourself or things that you're interested in. Is that something that...I don't put it in Todoist or put it somewhere because I don't really have an action item. It's more like, yeah, this recipe looks awesome and one day...so I'm going to stash it somewhere so I can find it. I'm currently using Notion. I used Bear before. It is beautiful. I really liked Bear, but I needed a little bit more structure, and Notion gave me that structure. And so I will just dump it in Notion. And then it's very searchable, so I can always find whatever recipe or whatever thought that was as long as I try to add buzzwords to my own notes. Like, what would Stephanie have searched for when looking for this? So I will try to include some of those words just so I can easily find it. CHRIS: I love that you're defining yourself as a Stephanie. For a random Stephanie walking through the woods, what search terms? How can I SEO arbitrage a Stephanie? STEPH: What would she look for? CHRIS: Who knows? STEPH: That Stephanie, she's sneaky. You never know.
CHRIS: You never can tell. Obsidian is the one that I'm looking at now. But I'm currently using Apple Notes. And it's really nice to be able to search directly into a note very quickly. I have that both via Alfred and then on my phone. And I'm finding a lot of utility in that, particularly for notes, for things I want to talk to someone about. But now there are seven different things, and how are they connected? And where is something? And to the question of where would a future Christopher look for this, let's make sure I put it in that place. But I don't know what that dude's going to be up to. He's a weird guy. He might look in a completely random place. So I'm trying to outsmart him, and oof, good luck, me. STEPH: [laughs] I have heard of Obsidian, but I don't recall much about it. So I'd have to look into it. I do feel your pain around Todoist and where it really encourages you to set a date. Because there are often things where I'm like, I saw something I want to read. And I know there are tons of tools. There are so many tools and videos and things that people could watch if they really want to invest in this workflow. But right now, I've told myself no, and so I use Todoist. And I see something I want to read, and so I just link to it. And I don't have a particular date that I want to read it. I'm like, this looks cool, and so then I add it to a reading list. But that also, I guess, could be something for notes. More and more, I'm trying to shove things into notes, so it feels less like a task and more of a I'm curious, or I'm interested in things that have piqued my interest. Let me go back and look at that list to see if there's something I want to pull from today or I need inspiration. That's what my notes often are; they're typically inspiration for something that I have seen and really liked, or maybe it's a bug that I looked into, and I want to recall how that happened or what was the process. But yeah, my notes are typically a source of inspiration. So I try to dump most things in there. I don't know if that's particularly helpful for your task, though, because it sounds like you're looking for a way to manage the things that you actually need to do versus just capturing all of your thoughts. CHRIS: Honestly, part of it is having a good system for those like, oh, I'd like to read this sometime. Ideally, for me, that doesn't go into my whatever to-do list system. But if my brain doesn't trust that I'll ever read it or if I feel like I'm putting it into a black hole, then my brain is just like, hey, you should really read that thing. Are you thinking about that? You should think about that and just brings it up. And so having a system externalized that I trust such that then the to-do list can be as focused as possible. It's a sort of an arms race back and forth battle type thing of like, I've definitely done the loop of like, all right, I want to capture everything. I want to have perfect, lossless, productivity system, and that is not possible. And so then I overcorrect back the other way. I'm like, whatever, nothing matters. I'll just let everything fall away. And then I'm like, well, then my brain tries to remind me of stuff or tries to remember more. And there's a book, Getting Things Done, which is one of the more common things recommended in the productivity world. And that informs a bunch of my thinking around this, the idea of capturing everything that's in your head so that you can get it out of your head. 
And in the moment, be focused and in the moment and not having to try and remember. And so that's the ideal that I'm searching for. But it's difficult to build that and make that work. STEPH: It seems the answer is there's no perfect system. It's always finding what works for you. And I feel like it's always going to change from hopefully not month to month because that would be tedious. But it may change year to year depending on how you're prioritizing things and the types of things that you need to remember or that you need to accumulate somewhere. So I feel like it's always this evolving, iterative process of changing where we're storing this. But I feel like where you store the notes and inspiration, that's something that, ideally, you want to make sure that you can always continue to keep forward. So even if you do change systems, that's something that's usually on my mind. It's like, well, if I use this system to store all of my thoughts, what if I want to move to something else? How stuck am I to this particular platform? And can I still have ownership of the things that I have added here? But overall, yeah, I'd be intrigued to see what other people think if they have a particular system that works for them, or they have suggestions. But overall, it seems to be whatever caters best to your personality and your workflow. That's why there are so many of these. There are so many thoughts, so many videos, so many styles. CHRIS: Yeah, I think a critical part of what you just said that feels very true to me is this is something that will change over time as well. Life comes in seasons, and my work may look a certain way, or my life may look a certain way, and then next year, it may be wildly different. And so, finding something that is good enough for right now and then moving forward with that and being open to revisiting it. And yeah, that feels true. So I'm in an explore phase right now. I'll report back if I have any major breakthroughs. But yeah, we'll see how it goes. STEPH: I will say I think the main tool that I have really leaned into, while some of the others will change over time, is my calendar. There are certain things I've let go. My inbox is always going to be messy. My to-do list is always going to be messy. But my calendar that is where things really go to make sure that they happen. And I will even add tasks there as well. So I feel like the calendar will always stick with me because I can trust that as the one source of like, these are the things that have to happen. Everything else I can check for during that day or figure it out as I go. Or if something gets dropped or bounces to the next day, it's okay. CHRIS: Yeah, the calendar is definitely a core truth in my world. Whatever the calendar says, that is true. And I'm actually a...I hope I'm not annoying to anyone. But I'm very pointed in saying, "This recurring meeting that we have if we keep just canceling it the day before every time, let's get this off our calendars. Let's make sure our calendars are telling the truth because I trust that thing very much." And two apps that I'm using right now that I've found really useful in the calendar world are MeetingBar, which I've talked about before. But it's a little menu bar application that shows the next meeting that's upcoming. And then I can click on it and see the list of them and easily join any video call associated, just a nice thing to keep the next thing on my calendar very top of mind, super useful, really love that. 
That's just open source and easy to run with. The other that I've been spending more and more time with lately is SavvyCal. SavvyCal is similar to Calendly. It's a tool for sharing a link to allow someone to schedule something on your calendar. And, man, it is an impressive piece of technology. I've been leaning into some of the fancier features of it of late. And it has an amazing amount of control, and I think a really well-designed sort of information architecture as well. It took me a little while to figure out how to do everything I wanted to do in it. But I wanted to be able to define a calendar link thing that I could share with someone that really constrains things in the way that I wanted. Like, oh, don't let them schedule tomorrow, and make sure there's this much buffer between meetings. And don't let this calendar link schedule too many things on my calendar because I need to control my day, and give me some focus blocks. And they're not actually on my calendar, but please recognize that. And it basically supports all of these different ways of thinking and does an incredible job with it. As an aside, SavvyCal is created by Derrick Reimer, who is the co-host of The Art of Product Podcast, which is co-hosted by Ben Orenstein, former thoughtboter, creator of Upcase, and a handful of other things. So small world and all of that. But yeah, really fantastic piece of technology that I've been loving lately. STEPH: That's really cool. I have not heard of SavvyCal. I've used Calendly and used that a fair amount. And that is so awesome where you can just send it to people, and they can pick a time on a calendar and do all the features that you'd mentioned. So it's good to know that there's SavvyCal as well. Well, pivoting just a bit, we have a listener question that I'm really excited to dig into. This question comes from fellow thoughtboter, Steve Polito. And Steve writes in that, "Hey, Bike Shed, I've got a question for you. I find it difficult to know if there's an existing method in a large class or a class that includes many concerns. How can I avoid writing redundant methods when working on a large project?" And Steve provided a really nice, contrived example where he's defined a class User that inherits from ApplicationRecord. And then comments, "Lots of methods making it really hard to scan this giant class. And then there's a method called formatted_name. So it takes first name, adds a space, and then adds the user's last name. And then there are a lot more methods in between. And then, way down, there's another method called full_name that does the exact same thing. Just to provide a nice example of how can you find a method that has existing logic that you want and avoid implementing essentially the same method in the same class?" So as someone who has worked on some legacy systems this year, I feel that pain. I feel the pain of where you have a really giant class, and that class may also include other modules. So then the range of all the methods that you may be looking through gets really widened. And you are looking for particular logic that you feel like may exist in the system, but you really just don't know. So I don't know if I have a concrete method for how you can find duplicate logic and avoid writing that other method. But some of the things that I do is I will initially go to the tests. So if there's some logic that I'm looking for and I think it's in this class or I have a suspicion, I will first look to see what has test coverage.
And I find that is just easier to skim for what I can find, and I'll use grep to search and just look for anything. In this particular case, let's use first name as our example. So I'm looking for anything that's going to collaborate with first name. Some of the other things that I'll do is I'll try to think of a business case where that logic is used. So, where are we displaying the user's full name? And if I can go to that page and see what's already in use, that may give me a hint to do we already have this logic? Is there something there that I should reuse, or is it something new that I'm implementing? And then if I really want to get fancy about it, for some reason that I really want to see all the methods that are listed, but I'm trying to get rid of some of the noise in the file, then I could programmatically scan through all the available methods by doing something like calling instance_methods on the class and passing in false, so we don't include the methods that are from superclasses, which can be very helpful. So that way, you're just seeing what's scoped to that class. But then, let's say if you do have a class that is inheriting from other modules, then you may want to include those methods in your search. So to get fancier, you could look at that class' ancestor chain and then collect the classes or modules that are custom to your application, and then look at those instance methods. And then you could sort them alphabetically. But you're still really relying on is there a method name that looks very similar to what I would call this method? So I don't know that that's a really efficient way. But if I just feel like there's probably already something in this space and I'm just looking for a clue or some name that's going to hint that something already exists, that's one way I could do it. To throw another wrench in there, I just remembered there are also private methods, and private methods don't get returned from instance_methods. I think private_instance_methods is the method that you'd have to call to then include those in your search results as well. So outside of some deeper static analysis, this seems like a hard problem. This seems like something that would be challenging to solve. And then I guess the other one is I ask a friend. So I will often lean on that: if there's someone else on the team that's been there longer than me, I will just ask in Slack and say, "Hey, I want to do a thing. I'm worried this already exists, or I think it already exists. Does anybody have any clues or ideas as to where this might live?" I know I just ran through a giant list of ideas there. But I'm really curious, what are your thoughts? If you have a messier codebase and you're worried that you are reimplementing logic that already exists, but you're trying to make sure you don't duplicate that logic, how do you avoid that? CHRIS: Well, the first thing I want to say is that I find it really interesting that I think you and I came at this from different directions. My answer, which I'll come to in a minute, is more of the I'm not actually sure that this is that easy to avoid, and maybe that's not the biggest problem in the world. And then I have some thoughts downstream from that. But the list that you just gave was fantastic. That was a tour de force of how to understand and explore a codebase and try and answer this very hard question of like, does this logic already exist somewhere else? So I basically just agree with everything you said.
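A rough sketch in Ruby of the console spelunking Steph describes above. The User class here, with its formatted_name and full_name methods, is just the hypothetical shape from the listener's question, and the Nameable concern name is made up for illustration; none of this is real application code.

# The listener's contrived example: two methods, far apart, doing the same thing.
class User < ApplicationRecord
  def formatted_name
    "#{first_name} #{last_name}"
  end

  # ...lots of unrelated methods in between...

  def full_name
    "#{first_name} #{last_name}"
  end
end

# In a Rails console: only the methods defined directly on User,
# ignoring everything inherited from ActiveRecord and friends.
User.instance_methods(false).sort
# => [..., :formatted_name, :full_name, ...]

# Pull in methods mixed in from the app's own concerns by filtering the
# ancestor chain; adjust the pattern to your own namespaces.
app_modules = User.ancestors.select { |mod| mod.name.to_s.match?(/\A(User|Nameable)/) }
app_modules.flat_map { |mod| mod.instance_methods(false) }.uniq.sort

# Private methods need their own call.
User.private_instance_methods(false).sort

# And the grep-the-specs approach, from a shell:
#   git grep -n "first_name" spec/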
And again, I'm deeply impressed with the range of options that you offer there for trying to figure this out. That said, sometimes codebases just get really large. And this is going to happen. I think the specific mention of concerns as sort of a way that this problem can manifest feels true. Having the user object and being like, oh man, our user object is getting pretty big. Let's pull something out into a concern as just a way to clean it up. That actually adds a layer of indirection that makes it harder to understand the totality of what's going on in this thing. And so personally, I tend to avoid concerns for that reason or at least at the model layer, especially where it's just a we got 1,000 methods here. Let's pull 200 of them into a file and maybe group them somewhat logically. That tends to not solve the problem in my mind. I found that it just basically adds a layer of indirection without much additional value. I will say in this particular case, the thing that we're talking about presenting the full name or the formatted name feels almost like a presentational concern. So I might ask myself, is there a presenter object, something that wraps around a user and encapsulates this? And then we as a team know that that sort of presentational or formatting logic lives in the presentation format or layer. Maybe I'm not entirely convinced of that as an answer. But it's just sort of where can we find organizational lines to draw within our codebases? I talked about query objects earlier. That's one case of this is behavior that I'll often see in classes as, say, a scope or something like that that I will extract out into a query object because it allows me to encapsulate it and break it out a little bit more but still have most of the nice pieces that I would want. So are there different organizational patterns that are useful? I think it's very easy to start drawing arbitrary lines within our codebases and say, "These are services." And it's like, what does that mean? That doesn't mean anything. App Services, that's not a thing, so maybe don't do that. But maybe there are formatters, queries, commands, those feel like...or presenters, queries, commands. Maybe those are organizational structures that can be useful. But switching to the other side of it, the first thing that came to mind is like, this is going to happen. As a codebase grows, this is absolutely going to happen. And so I would ask rather than how can I, as the developer, avoid doing this in the first place...which I think is a good question to ask. And again, everything you listed, Steph, is great. And I think a wonderful list of ways that you can actually try to avoid this. But let's assume it is going to happen. So then, what do we do downstream from that? One answer that comes to mind is code review. Code review is not perfect. But this is the sort of thing that often in code review I will be like, oh, I actually wrote a method that's similar to this. Can you take a look at that and see can we use only one of these or something like that? So I've definitely seen code review be a line of defense on this front. But again, stuff is still going to sneak through. And someday, you'll find it down the road. And that's the point in time that I think is most interesting. When you find this, can you fix it easily? Do you have both process-wise and infrastructure-wise the ability to do a very small PR that just removes the duplicate method, removes the usage of it, and consolidates on the one? It's like, oh, I found it. 
Here's a 10-line PR that just removes that method, changes the usage. And now we're good. And that can go through code review and CI very quickly. And we have a team culture that allows us to make those tiny changes on the regular to get them out to production as quick as possible so that we know that this is a good code change, all of that. I found there are teams that I've worked on where that process is much slower. And therefore, I will try and roll a change like that up into a bigger PR because I know that's the only way that it's really going to get through. Versus I've been on teams that have very high throughput is probably the best way to describe it. And on those teams, I find that the codebase tends to be in a healthier shape because it just naturally falls out of having a system that allows us to make changes rapidly with high trust, get them out into the world, et cetera. STEPH: This is that bug or inconsistency that's going to show up where on one page you have the user's full name. And then on another page, you have the user's full name, but maybe the last name is not capitalized, or there's just something that's slightly different. And then that's when you realize that you have two implementations of essentially the same logic that have differed just enough. I like how you pointed out that this is one of those things that as a codebase grows, it's probably going to happen, and that's fine. It's one of those if you do have duplicate logic, over time, based on your team's processes, you'll be able to then identify when it does happen, and then look for those preventative patterns for then how you organize your code. How quickly can you make that change? Can you just issue a PR that then removes one of them? But then look for ways to say, how are we going to help our future selves recognize that if we're looking for a user's full name, where's a good place to look for that? And then what's a good domain space or naming that we can give to then help future searchers be able to find it? I also really like your code review example because it does feel like one of those things that, yes, we want to catch it if we can, and we can leverage the team. But then also, it's not the end of the world if some of these methods do get duplicated. There's one other thing that came to mind that it's not really going to help prevent duplicate methods, but it will help you identify unused code. So it's the Unused tooling that you can run on your codebase. And that's something that would be wonderful to run on your codebase every so often. So that way, if someone has added...let's say there was a method that was full name but is not in use. It didn't have test coverage; that's why you didn't find it initially. And so you've introduced your own formatted name. And then, if you run unused at some point, then you'll hopefully catch some of those duplicate methods as long as they're not both in use. CHRIS: I think one more thing that I didn't quite say in my earlier portion about this. But in order to do that, to use Unuse or to have these sort of small pull requests that are going through, you have to have test coverage that is sufficient that you are confident you're not going to break the app. Because the day that you do like, oh, there's a typo here; let me fix it real quick. Or there's this method I'm pretty sure it's not used; let me rip it out. 
And then you deploy to production, and suddenly the error system is blowing up because, in fact, it was used but sneakily in a way that you didn't think of, and your test coverage didn't catch that. Then you don't have trust in the system, and everything slows down as a result of that. And so I would argue for fixing the root problem there, which is the lack of test coverage rather than the symptom, which is, oh, I made this change, it broke something. Therefore, I won't make small changes anymore. STEPH: Definitely. Yeah, that's a great point. CHRIS: So yeah, I don't have any answer. [laughs] My answers are like, I don't know, it's going to happen, but there's a lot of stuff organizationally that we can do. And granted, you gave a wonderful list of ways to actually avoid this. So I think the combination of our answers really it's a nice spectrum of thoughts on this topic. STEPH: I agree. I feel like we covered a very nice range all the way from trying to identify and then how to prevent it or how to help future people be able to identify where that logic lives and find it more easily. Also, at the end of the day, I like the how big of a problem is this? And it is one of those sure; we want to avoid it. But I liked how you captured that at the beginning where you're like, it's okay. Like, this is going to happen but then have the processes around it to then avoid or be able to undo some of that duplicate work. But otherwise, if it happens, don't sweat it; just look for ways to then prevent it from happening in the future. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Happy New Year (for real)! Chris and Steph both took some end-of-year time off to rest and recharge. Steph talks about some books she enjoyed, recipes she tried, and trail-walking adventures with her dog, Utah. Chris' company is now in a good position to actually start hiring within the engineering team. He's excited about that and will probably delve more into the hiring process in the coming weeks. Since they aren't really big on New Year's resolutions, Steph and Chris instead reflect on their own toxic traits, a topic inspired by an earlier listener question about large pull requests.

The Midnight Library by Matt Haig (http://www.matthaig.com/books/midnight-library/)
Tim Urban on Twitter (https://twitter.com/waitbutwhy/status/1367871165319049221)
How to Stop Time by Matt Haig (http://www.matthaig.com/how-to-stop-time/)
Do the Next Right Thing (https://daverupert.com/2020/09/do-the-next-right-thing/)
Debugging Why Your Specs Have Slowed Down (https://thoughtbot.com/blog/debugging-why-your-specs-have-slowed-down)
test-prof (https://github.com/test-prof/test-prof)
Tests Oddly Slow Might Be Bcrypt (https://collectiveidea.com/blog/archives/2012/11/12/tests-oddly-slow-might-be-bcrypt)

Transcript: STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world? CHRIS: What's new in my world? Well, spoiler, we actually may have lied in a previous episode when we said, "Hey, happy New Year," because, for us, it was not actually the new year. But this, in fact, is the first episode of the new year that we're recording, that you're hearing. Anyway, this is enough breaking the fourth wall. Sorry, listener. STEPH: [laughs] CHRIS: Inside baseball, yadda, yadda. I'm doing great. First week back. I took some amount of vacation over the holidays, which was great, recharging, all those sorts of things. But now we're hitting the ground running. And I'm actually really enjoying just getting back into the flow of things and, frankly, trying to ramp everything up, which we can probably talk about more in a moment. But how about you? How's your new year kicking off? STEPH: I like how much we plan the episodes around when it's going to release, and we're very thoughtful about this is going to be released for the new year or around Christmas time, and happy holidays to everybody. And then we get back, and we're like, yeah, yeah, yeah, we can totally drop the facade. [laughs] We're finally back from vacation. And this is us, and this is real. CHRIS: Date math is so hard. It just drains me entirely to even try and figure out when episodes are going to actually land. And then when we get here, also, you know, I want to talk about the fact that there was vacation and things, and the realities of the work, and the ebb and flow of life. So here we are. STEPH: Same. Yeah, I love it. Because I'm in a similar spot where I took two weeks off, which was phenomenal. That's actually sticking to one of the things we talked about, where one of the things I'm looking to do is take more time off. And so having the two weeks was wonderful. It was also really helpful because the client team that I'm working with also shut down around the end of the year. So they took ten days off as well.
So I was like, well, that's a really good sign of encouragement that I should also just shut down since I can. So it's been delightful. And I have very little tech stuff to share because I've just been doing lots of other fun things and reading fiction, and catching up with friends and family, and trying out new recipes. That's been pretty much my last two weeks. Oh, and walks with Utah. His training is going so well where we're starting to walk off-leash on trails. And that's been awesome. CHRIS: Wow, that's a big upgrade right there. STEPH: Yeah, we're still working on that moving perimeter so he knows how far he can go. Before then, he needs to stop and check on me. But he's getting pretty good where he'll bolt ahead, but then he'll stop, and he'll look at me, and then he'll wait till I catch up. And then he'll bolt ahead again. It's really fun. CHRIS: I like that that's the version of it that we're going for. This is not like you're going to walk alongside me on the trail; it's you're obviously going to run some distance out. As long as you check back in once every 20 feet, we're good; that's fine. Any particularly good books, or recipes, or talks with friends to go with that category? But that one's probably a little more specific to you. STEPH: [laughs] Yes. There are two really good books that I read over the holidays. They're both by the same author. So I get a lot of books from my mom. She'll often pick up a book, and once she's done with it, she'll drop it off to me or vice versa. So the one that she shared with me is called The Midnight Library. It's written by Matt Haig, H-A-I-G. And it's a very interesting story. It's a bit sad where it's about a woman who decides that she no longer wants to live. And then, when she moves in that direction to go ahead and end her life, she ends up in this library. And in the library, every time she has made a different decision or made a decision in life, then there is a new book written about what that life is like. So then she has an opportunity to go explore all of these lives and see if there's a better life out there for her. It is really interesting. I highly recommend it. CHRIS: Wow. I mean, that started with, I'm going to be honest, a very heavy premise. But then the idea that's super interesting. I would, actually...I think I might read that. I tend to just read sci-fi. This is broadly in the space, but that is super interesting. There's an image that comes to mind actually as you described that. It's from Tim Urban, who's also known as Wait But Why. I think he posts under that both on Twitter, and then I think he has a blog or something to that effect. But the image is basically like, all of the timelines that you could have followed in your life. And everybody thinks about like, from this moment today. Man, I think about all of the different versions of me that could exist today. But we don't think about the same thing moving forward in time. Like, what are all the possibilities in front of me? And what you're describing of this person walking around in a library and each book represents a different fork in the road from moving forward is such an interesting idea. And I think a positive reframing of any form of regret or looking back and being like, what if I had gone the other way? It's like, yeah, but forward in time, though. I'm very intrigued by this book. STEPH: Yeah, it's really good. It definitely has a strong It's a Wonderful Life vibe. Have you ever watched that movie? CHRIS: Yes, I have. 
STEPH: So there's a lot of that idea of regret. And what if I had lived differently and then getting to explore? But in It's a Wonderful Life, he just explores the one version. And in the book, she's exploring many versions. So it's really neat to be like, well, what if I'd pursued this when I was younger, had done this differently? Or what if I got coffee instead of tea? There are even small, little choices that then might impact you being a different person at a point in time. The other book that I read is by the same author because I enjoyed Midnight Library so much that I happened to see one of his other books. So I picked it up. And it's called How to Stop Time. And it's about an individual who essentially lives a very long time. And there are several people in the world that are like this, but he lives for centuries. But he doesn't age, or he ages incredibly slowly, at a rate where, say, he's 100 years old, but he'll still look 16 years old. And it's very good. It's very interesting. It's a bit more sad and melancholy than I typically like to read. So that one's good. But I will add that even though I described the first one as having a sad premise, I found The Midnight Library a little more interesting and uplifting, whereas the other one I found a bit more sad. CHRIS: All right. Excellent additional notes in the reading list here. So you can opt like, do you want a little bit more somber, or do you want to go a little more uplifting? Yeah, It's a Wonderful Life path being like, starts in a complicated place but don't worry, we'll get you there in the end. STEPH: But I've learned I have to be careful with the books that I pick up because I will absorb the emotions that are going on in that book. And it will legit affect me through the week or as I'm reading that book. So I have to be careful of the books that I'm reading. [laughs] Is that weird? Do you have the same thing happen when you're reading books? CHRIS: It's interesting. I don't think of it with books as much. But I do think of it with TV shows. And so my wife and I have been very intentional when we've watched certain television shows to be like, we're going to need something to cut the intensity of this show. And the most pointed example we had was we were watching Breaking Bad, which is one of the greatest television shows of all time but also just incredibly heavy and dark at times, kind of throughout. And so we would watch an episode of Breaking Bad. And then, as a palate cleanser, we would watch an episode of Malcolm in the Middle. And so we saw the same actor but in very different facets of his performance arc, and it just really softened things and allowed us to, frankly, go to bed after that and be able to sleep and whatnot but less so with reading. So I find it interesting that I have that distinction there. STEPH: Yeah, that is interesting. Although I definitely feel that with movies and shows as well. Or if I watch something heavy, I'm like, great, what's on Disney? [laughs] I need to wash away some of that so I can watch something happy and go to sleep. You also asked about recipes because I mentioned that's something I've been doing as well. There's a lot of plant-based books that I've picked up because that's really my favorite type of thing to make. So that's been a lot of fun. So yeah, a lot of cooking, a lot of reading. How about you? What else is going on in your world? CHRIS: Well, actually, it's a super exciting time for Sagewell Financial, the company that I've joined.
We are closing our seed financing round, which the whole world of venture capital is a novel thing that I'm still not super involved in that part of the process. But it has been really interesting to watch it progress, and evolve, and take shape. But at this point, we are closing our seed round. Things have gone really well. And so we're in a position to actually start hiring, which is a whole thing to do, in particular, within the engineering group. We're hiring, I think, throughout the company, but my focus now will be bringing a few folks into the engineering team. And yeah, just trying to do that and do that well, do that intentionally, especially for the size of the team that we have now, the sort of work that we're doing, et cetera, et cetera. But if anyone out there is listening, we are looking for great folks to join the team. We are Ruby on Rails, Inertia, TypeScript. If you've listened to the show anytime recently, you've heard me talk about the tech stack plenty. But I think we're trying to do something very meaningful and help seniors manage finance, which is a complicated and, frankly, very underserved space. So it's work that I deeply believe in, and I think we're doing a good job at it. And I hope to do even a better job over time. So if that's at all interesting, definitely reach out to me. But probably in the coming weeks, you'll hear me talk more and more about hiring and technical interviews and all of those sorts of things. I got to ramp myself back up on that entire world, which is really one of those things that you should always be doing is the thought that I have in my head. Now that I'm in a position to be hiring, I wish I'd been half-hiring for the past three months, but I'll figure it out. It'll be fine. STEPH: That's such a big undertaking. Everything you're saying resonates, but also, it's like that's a lot of hard work. So if you're not in that state of really being ramped up for hiring, I understand why that would be on the backburner. And yeah, I'm excited to hear more. I've gotten to hear some more of the product details about Sagewell, but I don't think we've really talked about those features here on the show. So I would love it if we brought some more of the feature work and talked about specifically what the application does. I am intrigued speaking of how much energy goes into hiring. Where are you at in terms of how much...like, are there any particular job boards that you're going for? Or what's your current approach to hiring? CHRIS: Oh, that's a great question. I have tweeted once into the world. I have a draft of a LinkedIn post. This is very much I'm figuring out as I go. It's sort of the nature of a startup as we have so many different things to do. And frankly, even finding the time to start thinking about hiring means I'm taking time away from building features and growing out other aspects. So it's definitely a necessary thing that we're doing at this point in time. But basically, everything we're doing is just in time compiling and figuring out what are the things that are semi-urgent right now? And to be honest, I like that energy overall. I've always had in the back of my mind that I like this sort of work and this space, especially if you can do it intentionally. It shouldn't feel like everything's on fire all the time, but it should feel like a lot of constraints that force you to make decisions quickly, which, if we're being honest, I think that's something that is not my strongest suit. 
So it's a muscle that I'm excited to grow as part of this work. But so, with that in mind, at this point, my goal is to just start getting the word out there into the world that we are looking to hire and get people interested and then, from there, build out what's the interview process going to look like? I will let you know when we get there; I will. I will figure that out. But it's not something that I've...I haven't actually very intentionally thought about all of this. Because if I were to do that, it would delay the amount of time until I actually say into the world, "Hey, we're hiring." So I very purposely was like, I just need to say this into the world and then continue doing the next steps in that process. I'm prone to the perfect is the enemy of the good thing, where I want to have a complete plan and a 27-step checklist, and a Gantt chart, and a burndown before I take any first action, and I'm really trying to push back on that, to be like, no, no, just do something, just take a step in the right direction. There's actually a blog post that comes to mind, which is by Dave Rupert, who is a former guest on this podcast. It was wonderful getting to interview him. But he wrote a blog post. The title of it is Do the Next Right Thing, which is a line from a song in the movie Frozen 2, I believe. He is like, all right, stick with me here. And I know this is a movie for kids, maybe. But also, this is a very meaningful song. And he framed it in a way that actually was surprisingly impactful to me. And it's that idea that I'm holding on to of you can't do it all, and you can't do it perfectly. Just do the next right thing. That's what you're going to do. So we'll link to that blog post in the show notes. But that's kind of where I'm at. STEPH: I love that. I'm looking forward to reading that because that has been huge for me. I used to be held back by that idea of perfection. But then I realized other people were getting more work done more quickly. And so I was like, huh, maybe there's something to this just doing the next thing versus waiting for perfection that is really the right path. So, how do folks reach out to you? Should they reach out to you on Twitter or email? What's best for you? CHRIS: Oh yeah, Twitter. This is all probably going to be said at the end of the show as well. But Twitter @christoomey. ctoomey.com is my blog. I'm on GitHub. I make it very easy to contact me because I haven't regretted that up to this point in my life. So basically, anywhere you find me on the internet, you will be able to email me or DM me or any of the things. I'm going to see how long I can hold on to that. I want to hold on to that forever. I want just a very open-door policy. So that's where I'm at right now, but any of those starting points. And the bikeshed.fm website will somehow link to me in any of the various forums, and they're all kind of linked to each other, so any of those are fine. I will happily take inquiries via any of the channels. STEPH: Cool. Well, I'm excited to hear about how it goes. CHRIS: Me too, frankly. But in a very small bit of little tech news or tech happenings from my holiday time, this was actually just before I started to go on break for the holidays. I had noticed that the test suite was getting very slow, like very, very slow but on my machine. It was getting a little bit slow on CI, but the normal amount where we just keep adding new things.
And we're adding a lot of feature specs because we want to have that holistic coverage over the whole application, and we can, so for now, we're doing that. But our spec suite had gotten up to six-ish minutes on CI and had a couple of other things. We have some linting and some TypeScript and things like that. But on my machine, it was very slow. So I hadn't run the full spec suite in a long time. But I knew that running any individual spec took surprising amounts of time. And in the back of my head, I was like; I guess I hadn't configured Spring. That seems weird. I probably would have done that, but whatever. And I'd never pushed on it more until one day I ran the specs. I ran one model spec, and it took 30 seconds or something like that. And I was like, well, that's absurd. And so I started to look into it. I did some scanning around the internet. There was a wonderful post on the Giant Robots blog about how to look through things from Mike Wenger, a wonderful former thoughtboter. Unfortunately, none of the tips in there were anything meaningful for me. Everything was as I expected it to be. So I set it down. And there were a couple of times that this happened to me where I'd be like, this is frustrating. I need to look into this a little bit more, but it was never worth investing more time. But I mentioned it in passing to one of the other developers on the team. And as a holiday gift to me, this person discovered the solution. So let me describe a little bit more of what we've got working on here. On CI, which in theory is less powerful than my new, fancy M1 MacBook, on CI, we take about six minutes for the test suite. On my computer, it was taking 28 minutes and 30 seconds. So that's what we're working with. The factories are all doing normal things. We're not creating way too many database records or anything like that. So any thoughts, anything that you would inspect here? STEPH: Ooh, you've already listed a number of good things that I would check. CHRIS: Yeah, I took all the easy ones off the list. So this is a hard question at this point. To be clear, I had no ideas. STEPH: Could you tell if there's a difference if it's like the boot-up time versus the actual test running? CHRIS: Did that check; it is not the boot-up time. It is something that is happening in the process of running an individual spec. STEPH: No, I'm drawing a blank. I can't think of what else I would check from there. CHRIS: It's basically where I was at. Let me give you one additional piece of data, see if it does anything for you. I noticed that it happened basically whenever executing any factory. So I'd watch the logs. And if I create this record, it would do roughly what I expect it to. It would create the record and maybe one or two associated records because that's how Factory Bot works. But it wasn't creating a giant cascade or waterfall of records under the hood. If we create a product, the product should have an associated user. So we'll see a product and a user insert. But for some reason, that line create whatever database record was very, very slow. STEPH: Yeah, it's a good point, looking at factories because that's something I've noticed in triaging other tests is that I will often check to see how many records are created at a certain point because I've noticed there's a test where I think only one record is created, but I'll see 20. And that's an interesting artifact. But you're not running into that. 
But it sounds like there's more either some callback or transaction or something that's getting hung up and causing things to be slow. CHRIS: I love those ideas. I didn't even know those were sort of ideas in the back of my head. I didn't know how to even try and chase that down. There was nothing in the logs. I couldn't see anything. And again, I just kept giving up. But again, this other developer on the team found the answer. But at this point, I'll just share the answer because I think we've run out of the good bits of the trivia. It turns out bcrypt was the answer. So password-hashing was incredibly slow on my machine. What was interesting is I mentioned this to the other developer because they also have an M1. But there are three of us working on the project. The third developer does not have the M1 architecture. So that was an interesting thing. I was like, I feel like this maybe is a thing because we're both experiencing this, but the other developer isn't. So it turns out bcrypt is wildly slow on the M1 architecture, which is sort of interesting as an artifact of like, what is password hashing, and how does it work? And in normal setups, I think the way it works is Devise will say by default, "We're going to do 12 runs of bcrypt." So like take the password, put it into the hashing algorithm, take the output, put it back into the hashing algorithm, and do that loop 12 times or whatever. In test mode, it often will configure it to just run once, but it will still use the password hashing. Turns out even that was too slow for us. So we in test mode enabled it so that the password hashing algorithm was just the password. Don't do anything. Just return it directly. Turn off bcrypt; it's too painful for us. But it was very interesting to see that that was the case. STEPH: Yeah, I don't like that answer. [laughs] I'm not a fan. That is interesting and tricky. And I feel like the only way I would have found that...I'm curious how they found it because I feel like at that point, I would have started outputting something to figure out, okay, where is the slow process? What's the thing that's taking so long to return? And if I can't see tailing the test logs, then I would start just using a PUT statement to figure out what's taking a long time? And start trying to troubleshoot from there. So I'm curious, do you know how they identified that was the core issue? CHRIS: Yes, actually. I'm looking back at the pull requests right now. And I'm mentioning that this was related to the M1 architecture, but I don't think that's actually true because the blog post that they're linking to is Collective Idea blog post: Tests Oddly Slow? Might be bcrypt. And then there's a related Rails issue. They used TestProf, which is a process that you can run that will examine, I think the stack trace and say where are we spending the most time? And from that, they were able to see it looks like it's at the point where we're doing bcrypt. And so that's the answer. As an aside, my test suite went from 28 minutes and 30 seconds to 1 minute and 30 seconds with this magical speed up. STEPH: Nice. That's a great idea, TestProf. I don't know if I've used that tool. It rings a bell. But that's an awesome sales pitch for using TestProf. CHRIS: Similarly, I don't think I'd ever use it before. But it truly was this wonderful holiday gift. Because the minute I switched over to this branch, I was like, oh my God, the tests are so fast. I have one of those fancy, new fast computers, [laughs] and now they're so fast. 
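A sketch of the kind of test-only change being described here, not the actual code from Chris's project. It assumes Devise's database_authenticatable module is what's doing the password hashing, short-circuits bcrypt entirely, and includes the sort of guard against running outside of test that he goes on to mention.

# spec/support/fast_passwords.rb -- hypothetical file name.
# Never let this bypass load anywhere but the test environment.
raise "fast_passwords.rb must only ever be loaded in test" unless Rails.env.test?

module Devise
  module Models
    module DatabaseAuthenticatable
      # Devise normally runs the password through bcrypt here; in test,
      # just store it as-is.
      def password_digest(password)
        password
      end

      # ...and compare with plain string equality instead of a bcrypt check.
      def valid_password?(password)
        password == encrypted_password
      end
    end
  end
end

A milder, documented option is to keep bcrypt but drop its cost, for example config.stretches = Rails.env.test? ? 1 : 12 in the Devise initializer, which is roughly the "only run it once in test mode" behavior described above.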
STEPH: Wait, you had to switch to a branch? I figured it was something that you had to do special on your machine. So I'm intrigued how they fixed it for you, and then you switched to a branch and saw the speed increase. CHRIS: So they opened a pull request. And that pull request had the change in the code. So it was a code-level configuration to say, "Hey, Devise, when you do the password hashing thing, maybe just don't, maybe be easy for a moment," [laughter] but only in the test configuration. So all I had to do was check out the branch, and then that configuration was part of the Rails helper setup, and then we were good to go from there. I added an extra let me be terrified about this because the idea of not hashing passwords in production is terrifying. So let me raise...I put a couple of different guards against like, this should only ever run in test. I know it's in the spec support directory, so it shouldn't. Let me just add some other guards here just to superduper make sure we still hash passwords in production. STEPH: Devise has a bcrypt chill mode. Good to know. [laughs] And I like all the guards you put in place too. CHRIS: Yeah, it was really frankly such a relief to get that back to normal, is how I would describe it. But yeah, that's a fun little testing, and password hashing, and little adventure that I get to go on. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: So I have something that I've been wanting to ask you, and it's not tech-related. But we can make this personal and work however we want to tackle it. But there is a previous episode where we read a listener question from Brian about their self-diagnosed toxic trait being large pull requests. And Brian was being playful with the use of the term toxic trait. But it got me thinking, it's like, well, what is my toxic trait? And it seems like a fun twist on you, and I aren't really big on New Year's Eve resolutions. And in fact, I think you and I are more like if we're interested in achieving a goal, we'd rather focus on building a habit versus this specific, ambiguous we're going to publish ten blogs this year. But rather, I'd rather sit down and write for 15 minutes each day. And it seemed like a fun twist instead of thinking about what are my toxic traits, personal, at work? Large pull request is a really fun example. So I'll let you choose. 
I can go first, or you can go first, but I'm excited to hear your thoughts on this one. CHRIS: I think I've been talking too much. So let's have you go first at this point/ also, I want a few more seconds to think about my toxic trait. STEPH: [laughs] All right, I have a couple. So that's an interesting point start there [laughs], but here we are. So I was even bold because I asked other people. Because I'm like, well, if I'm going to be fully self-aware, I can't just...I might lie to myself. So I'm going to have to ask some other people. So I asked other folks. And my personal toxic trait is I am tardy. I am that person who I love to show up 5, 10, 15 minutes late. It's who I am. I don't find it a problem, but it often bothers other people. So that is my informed toxic trait. That might be a strong term for it. But that's the one that gives people the most grief. CHRIS: Interesting. I do find the framing of I don't find my own tardiness to be a problem as a really interesting sort of lens on it. But okay, it's okay. STEPH: I see it as long as I'm getting really good quality time with someone; if I'm five minutes late, I'm five minutes late. I think the voice going high means I'm a little defensive. [laughs] CHRIS: But at least you're self-aware about all of these aspects. [laughs] That's critical. STEPH: I am self-aware, and most of the people in my life are also self-aware, although I do correct that behavior for work. That feels more important that I be on time for everything because I don't want anyone to feel that I am not valuing their time. But when it comes to friends and family, they thankfully accept me for who I am. But then, on the work note, I started thinking about toxic traits there. And the one I came up with is that I'm a pretty empathetic person. And there's something that I learned that's called toxic empathy. And it's when you let people's emotions hijack your own emotions, or you'll prioritize someone else's physical or mental health over your own. So, for example, it could be letting another person's anxiety and stress keep you from getting your current tasks and responsibilities done. And there's a really funny tweet that I saw where someone says, "Hey, can I vent to you about something?" And the first person telling it from their perspective they're crying in the middle of a breakdown. And they're like, "Yeah, sure, what's up?" And I felt seen by that tweet. I was like, yeah; this seems like something I would do. [laughs] So over time, as something I'm aware of about myself, I've learned to set more boundaries and only keep relationships where equal support is given to both individuals. And this circles back to the book anecdote that I shared where I had to be careful about the books that I read because they can really affect my mood based on how the characters are doing in that book. So yeah, that's mine. I have one other one that I want to talk about. But I'm going to pause there so you can go. CHRIS: Okay, fun. [laughs] This is fun. And it is a challenging mental exercise. But it is also, I don't know, vulnerable, and you have to look inside and all that. I think I poked at one earlier on as we were talking, but the idea of perfect is the enemy of the good. And I don't mean this in the terrible like; what's your worst trait in a job interview? And you're like, "I'm a perfectionist." I don't mean it in that way. 
I mean, I have at times struggled to make progress because so much of me wants to build the complete plan, and then very meticulously worked through in exactly the order that I define, sort of like a waterfall versus agile sort of thing. And it is an ongoing very intentional body of work for me to try and break myself off those habits to try and accept what's the best thing that I can do? How can I move forward? How can I identify things that I will regret later versus things that are probably fine? They're little messes that I can clean up, that sort of thing. And even that construing it as like there's a good choice and a bad choice, and I'm trying to find the perfect choice. It's like almost nothing in the world actually falls into that shape. So perfect is the enemy of the good is a really useful phrase that I've held onto that helps me. And it's like, aiming for that perfection will cause you to miss the good that is available. And so, trying to be very intentional with that is the work that I'm doing. But that I think is a toxic trait that I have. STEPH: I really like what you just said about being able to identify regrets. That feels huge. If you can look at a moment and say, "I really want to get all this done. I will regret if I don't do this, but the rest of it can wait," that feels really significant. So the other one that I wanted to talk about is actually one that I feel like I've overcome. So this one makes me happy because I feel like I'm in a much better space with it, but it's negative self-talk. And it's essentially just how you treat yourself when you make a mistake. Or what's your internal dialogue throughout the day? And I used to be harsh on myself. If I made a mistake, I was upset, I was annoyed with myself, and I wouldn't have a kind voice. And I don't know if I've shared this with you. But over time, I've gotten much better at that. And what has really helped me with it is instead of talking to myself in an unkind voice, I talk to myself how someone who loves me will talk to me. I'm not going to talk to a friend in a really terrible, mean voice, and I wouldn't expect them to talk to me. So I channel someone that I know is very positive and supportive of me. And I will frame it in that context. So then, when I make a mistake, it's not a big deal. And I just will say kind things to myself or laugh about it and move through it. And I found that has been very helpful and also funny and maybe a little embarrassing at times because when pairing, I will talk out loud to myself. And so I'll do something silly, and I'll laugh. I'm like, "Oh, Stephanie," that was silly. And the other person hears me say that. [laughs] So it's a little entertainment for them too, I suppose. CHRIS: Having observed it, it is charming. STEPH: It's something that I've noticed that a lot of people do, and we don't talk about a lot. I mean, there's imposter syndrome. People will talk about that. But we don't often talk about how critical we are of ourselves. It's something that I will talk to people who I highly admire and just think they're incredibly good at what they do. And then when they give me a glimpse into how they think about themselves at times or how they will berate themselves for something they have done or because they didn't sit down for that 15 minutes and write per day, then it really highlights. 
And I hope that if we talk about this more, the fact that people tend to have such a negative inner critical voice, that maybe we can encourage people to start filtering that voice to a more kind voice and more supportive voice, and not have this unhelpful energy that's holding us back from really enjoying our work and being our best self. CHRIS: That's so interesting to hear you say all of that for one of your traits because it's very similar to the last one for myself, which is I find that I do not feel safe unless (This is going to sound perhaps boastful, and I definitely do not mean it as boastful.) but unless I'm perfect. I guess the standard that I hold myself to versus the standard that I hold others to are wildly different. Of course, for other people, yes, bugs will get into the code, or they may misunderstand something, or they may miss communicate something, or they may forget something. But if I do that, I feel unsafe, which is a thing that I've slowly come to recognize. I'm like, well, that shouldn't be true because that's definitely not how I feel about other people. That's not a reasonable standard to hold. But that needing to be perfectly secured on all fronts and have just this very defensible like, yeah, I did the work, and it's great, and that's all that's true in the world. That's not reasonable. I'm never going to achieve that. And so, for a long time, there have been moments where I just don't feel great as a result of this, as a result of the standard that I'm trying to hold myself to. But very similarly, I have brought voices into my head. In my case, I've actually identified a board of directors which are random actual people from my world but then also celebrities or fake people, and I will have conversations with them in my head. And that is a true thing about me that I'm now saying on the internet, here we are. STEPH: [laughs] CHRIS: And I'm going to throw it out there. It is fantastic. It is one of my favorite things that I have in my world. As a pointed example of a time that I did this, I was running a race at one point, which I occasionally will run road races. I am not good at it at all. But I was running this particular race. It was a five-mile January race a couple of years back. And I was getting towards the end, and I was just going way faster than I normally do. I was at the four-mile mark, and I was well ahead of pace. I was like, what is this? I was on track to get a personal record. I was like, this is exciting. But I didn't know if I could finish. And so I started to consult the board of directors and just check in with them and see what they would think about this. And I got weirdly emotional, and it was weirdly real is the thing that was very interesting, not like I actually believed that these people were running with me or anything of that nature. But the emotions and the feelings that I was able to build up in that moment were so real and so powerful and useful to me that it was just like, oh, okay, yeah, that's a neat trick. I'm going to hold on to that one. And it has been continuously useful moving forward from that of like, yeah, I can just have random conversations with anyone and find useful things in that and then use that to feel better about how I'm working. STEPH: I so love this idea. And I'm now thinking about who to put on my board of directors. [laughs] CHRIS: I'm telling you, everybody should have one. 
As I'm saying this, there is definitely a portion of me that is very self-conscious that I'm saying this on the internet because this is probably one of the weirdest things that I do. STEPH: [laughs] CHRIS: But it is so valuable. And it's one of those like; I like getting over that hump of like, well, this is an odd little habit that I have, but the utility that I get from it and the value is great. So highly recommend it. It's a fun game of who gets to go on your board. You can change it out every year. And it is interesting because the more formed picture that you have of the individual, the more you can have a real conversation with them, and that's fun. STEPH: So, as I'm working on forming a board of directors, how do you separate? Is it based on one person is running work and one is finance? How does each person have a role? CHRIS: So there are no rules in this game. [laughs] This is a ridiculous thing that I do. But I find value in it's sort of vaguely the same collection of individuals. Some of them are truly archetypal, even fictitious characters. As long as I can have a picture in my head of them and say, "What would they say in a situation?" If you're considering, say, moving jobs? What would Arnold Schwarzenegger have to say about that? And you'd be surprised the minute you ask it in your head; your brain is surprisingly good at these things. And it's like, let me paint The Terminator yelling at you to get the new job. STEPH: [laughs] CHRIS: Not get to the chopper, but get the new job. And it's surprisingly effective. And so I don't have a compartmentalized like, this is my work crew, this is my life crew. It's a nonsense collection of fake people in my head that I get to talk to. I'm saying this on the internet; here we are. [laughs] STEPH: That makes sense to me, though, because as you're describing that situation, I do something similar, but I've just never thought about it in these concrete terms where I have someone in mind, and it's a real person in my life who are my confidence person. They're the one that I know they are very confident. They're going to push for the best deal for themselves. They're going to look out for themselves. They're going to look out for me. They're going to support me. I have that person. And so, even if I can't talk to them in reality, then I will still channel that energy. And then I have someone else who's like my kind filter, and they're the person that's going to be very supportive. And you make mistakes, and it's not a big deal, and you learn, and you move on. And so I have those different...and in my mind, I just saw them as coaches. Instead of board of directors, I just see them as different things that I don't see as strong in my character. And so I have these coaches in those particular areas that then I will pull energy from to then bolster myself in a particular way or skill. This was fun. I'm so glad we talked about this because that is very insightful to you, and for me as well, and to myself. CHRIS: Yeah, we went deep on this episode. STEPH: No tech but lots of deep personal insight. CHRIS: I talked a little bit about bcrypt. [laughs] You can't stop me from talking about tech for an entire episode. But then I also talked about my board of directors and the conversations I have with myself, so I feel like I rounded it out pretty good. STEPH: It's a very round episode. CHRIS: Yeah, I agree. And with that roundedness, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. 
STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph tells a cute story about escape artist huskies and, on a technical note, shares a journey regarding class variables and module inheritance. Chris talks about how he's starting to pursue analytics, and one of the things he's struggling with (and has always historically struggled with) is the idea of historical data. He's also noticed a lack of formalization of certain things and is working with his team to remedy that.

This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.

Mike Burns: How to Skim a Pull Request (https://thoughtbot.com/blog/a-smelly-list)
RSpec Documentation (https://rspec.info/documentation/)
Don't Let the Internet Dupe You, Event Sourcing is Hard (https://chriskiehl.com/article/event-sourcing-is-hard)
Datomic (https://www.datomic.com/)
time_for_a_boolean (https://github.com/calebhearth/time_for_a_boolean)
Sentry (https://sentry.io/)
Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!

Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, it's an entirely new year. What is new in your new year? STEPH: Well, the year is off to an interesting start because we helped rescue a husky. CHRIS: Rescue as in now this is your dog, or rescue as in the dog was trapped in a well, and another dog told you about the dog being trapped in a well, and then you helped the trapped dog? [laughs] Which of those situations are we working with? STEPH: [laughs] I'm really wishing it was the second version [laughs] where there's a dog that tells me about another dog trapped in a well. No, this is a third version where there was a husky that was wandering around the gym that we go to. And so Tim, my husband, called and said, "There's this husky, and he's super sweet, but he seems very lost." And our gym is located near a major road, and so we were worried that he was going to wander about and get hit. So I hopped into our car and took a crate and a leash, and he hopped right in. Clearly, he belonged to somebody; he'd just escaped. So he hops right in, and then we bring him home. But I put him in the backyard because I want to keep him separate from our dog, Utah, just because I don't know this dog, and I want to keep him safe. And I go back inside to grab a few things. I come back out, and the husky is gone. And I'm like, well, shit. [laughs] Now I'm starting to understand why this husky is missing or why this husky seemed lost. So then I started looking for the husky, and Tim comes home. He's helping me look for the husky. And it was one of those awful moments where we live near...it's not a major road, but people tend to speed on it. And the husky and I happen to see each other across the road. And so the husky was like, oh, human friend, and starts coming across the road towards me. And there's this large SUV that's also coming from the other direction. I'm like, oh, this is it. This is my nightmare. This is becoming real. This dog is about to get hit. Thankfully, the driver saw the husky and stopped in time, so everything was fine. And the husky just finished trotting across the road to me, and we brought him in and kept him in the kennel in the garage.
We didn't have any backyard adventures after that. The husky then thanked us by howling most of the night. [laughs] So this poor husky has had an adventure. We've had an adventure. And then, around 4:30 in the morning, I go out because I'm checking on the husky and going to let him out. And I'm scrolling on the app called Nextdoor. And I see that someone posted a picture of this exact husky that's like, "Please help me find my dog." And I was like, yes. Because we were going to have to take him to a county shelter or at least go see if he had a chip so then we could return him. But thankfully, we found the owner. I found out the husky's name is Sebastian. And then we had him for a few more hours, and then we had a wonderful husky and human reunion. CHRIS: That story had everything. It had ups; it had downs; it had huskies. It had escape artist huskies, in fact. I have...this is only through Reddit because that's how people learn about things in the world, but huskies are a rather vocal dog breed. So when you say the dog was howling, huskies have a particular way of almost singing, and it kind of sounds like yelling rather than more traditional dog sounds. Was that the experience you had? STEPH: Luckily, it wasn't too bad. His howling was more just; he didn't want to be in the crate. He seems like an indoor dog. So he's like, what am I doing outside in the garage? I should be indoors. And so he wasn't too loud. It was more he was just bemoaning his situation. But our dog Utah could hear him upset in the garage. And so that was also getting Utah upset because he didn't understand why there was a dog so close. And that was what led to the sleepless night because we couldn't get both of them to calm down. Because then, as soon as one of them calm down, the other one would get the other one riled. CHRIS: As it so often happens. STEPH: I'm so grateful that it turned out to be a happy story, though. That part was wonderful. And if we see the husky again, now we know his name is Sebastian and that he'll just come home with us. [chuckles] And we'll know how to return him since he seems to be an escape artist. CHRIS: And we were best friends forever. STEPH: On a more technical note, I have quite the journey to share in regards to class variables and modules inheritance. But before I dive in, I'm curious, what's new in your world? CHRIS: Oh. Well, I'm excited to dig into that story. But I've got two smaller things in my world this week that are top of mind. I don't really have answers on them. I have more questions. One is we're starting to pursue analytics. We want to try and understand our system a little bit better. What is the experience of our users? How are they coming into the system? What are they doing? How long does it take them to do the things that we want them to do? All those sorts of questions you want to be able to answer about your application. And one of the things that I'm struggling with that I've always historically struggled with is the idea of historical data. So data changes over time, and often we actually want to know about those transition points. We want to know about the different states that a user or any record in the system has been in. And I'm finding myself feeling the same pain that I felt many times and starting to think again about the relevant options out there in the world. To give a slightly more pointed example of what we're dealing with, users come in, and then there are a few steps for them to actually sign up for the application. 
And so their user record or their application, if you will, will go through a couple of different states. So they can be basically approved directly, and now they're an active user of the system, that's one option. But they can also end up in a state where they're pending review. And then eventually, depending on the outcome of that review, whether it's manual or someone intervenes or what have you, then eventually they can transition to either being denied or being accepted. And then they'll again be an active user. And so there's a question now of how many of the users that end up in that pending state end up transitioning into active. And as I looked at the database, I was like, I do not have this information right now. I know their current state. And the logs could tell me all of this. We don't have proper log archiving right now. And I also don't have a system for, like, let me pull down gigabytes of logs and try and sift through that to understand the answer, especially for something domain level like this. But this is one specific example that represents a category of things in my mind. The stuff that I've looked at in this space otherwise is Event Sourcing. So the idea that rather than having a discrete representation of the state of your application, you store every event as an individual log, essentially of like user did X, thing happened, Y occurred. And then, at any given point you need to know about the state of your system, you just reduce all of those events through some magical reducer that produces the current state. I also very recently read an article called Event Sourcing is Hard. So I have that in my head as a counterpoint. This seems like a thing that is non-trivial to do, makes sense for a certain scale. But of course, like anything else, it has its trade-offs. Another thing that I've looked at and never really pursued, mostly because it's in a different ecosystem, is Datomic, D-A-T-O-M-I-C, which I think I've mentioned before. But it's a database that actually stores data in this historical format. And so you can ask for the current value, but then you can also ask for what are all the states that this user has been in? And what are the timestamps of those changes? One small thing that we do have that I really like...so this is one example of us, I think, leaning into wanting to have more information, higher-fidelity information: often we want to know something like, was this ticket paid? Did someone pay for this ticket? And so paid is a Boolean property on this ticket record within our system. So the ticket can be held for a little while and eventually gets paid. And now, yes, it has been paid for. It is good. You can use it. But often, we want to know not just that it's paid but when it was paid. And so there's a gem that we are using on the project called time_for_a_boolean by former thoughtboter Caleb Hearth. And it does a wonderful job of basically instead of storing a Boolean value in the database, you store a timestamp. But then the Boolean can be inferred. If there is a value, if there's a timestamp for that record in the database, then there are a bunch of helper methods that get introduced that say, like, paid? That's now a method that I can ask, and it will tell us that. But we can also find the paid_at value. And so we have this higher-fidelity data when we need it, but we can also collapse it down to the simpler representation. Because most often, all we need to know is, have they paid for it? Cool, then they're good.
They can come into the concert, that sort of thing. But yeah, this is a broader question that I don't have a great answer to. I think Postgres and Rails and just the nature of how we approach these applications pushes us in a certain direction. Another thing I'm exploring is downstream analytic systems. What if I send a bunch of events to them, and they act as a half-event sourcing type thing? But yeah, this is going to be, I think, an open question for me for a while. STEPH: Yeah, you said a lot of really good options. When you're talking about in our ecosystem, we get pushed in one direction or the other that makes me think of the projects that I've been on. Typically, what they'll reach for first is something like a Papertrail. So then, that way, they can check for the historical versions of an object and how it was changed and see who changed it. That's one way to track the logs. I like the idea that if you can outsource it and send all of those events to a logging system and then essentially ask for that data back as you need it. You made me think of a recent project as well where we needed to track the state. So it was a patient matching system. And we really needed to know when a patient match was created or disconnected and then who did that and perhaps for what reason. And to ensure that we had as much information as possible, we took that opportunity to just create a record for it. So we had a patient match record or...I forgot the name of the other one where we created where a patient did not have a match. But we were creating a record every time someone did that. Granted, probably that's not going to happen nearly as often as someone paying for an event or the situations that you're describing. This was ideally infrequently that someone was going to unmatch a patient because it meant that our system had matched people that shouldn't be matched, and then a human had intervened. But yeah, it's interesting the space that you're in. And you listed all the good things that I would have thought of. CHRIS: I think you listed Papertrail, which is one that I hadn't actually thought of yet for this particular instance. This only came up earlier today also. So this is new in my head that I'm really being pushed in this direction. But I think Papertrail could be a good solution for where we're at. But it is one of those where you often don't know the thing you want to know. And I'm terrified of losing data of like; I had the data. I knew it at one point in time, but now I can't reknow it in the future because I didn't write it down. That's one of the things that I just don't want to happen in the world. And so finding those ways of like, how can we architect a system so that we can do the normal, straightforward, boring things most of the time but then when we need to expand out the analytics dimension of the system that we're working on...and trying to thread that needle and find the ideal optimization on both sides is a tricky one. But yeah, I'll definitely take another look at Papertrail and see if that...at a minimum, I think that's a good solution for where we're at now. And then this is going to be a thought that's going to roll around in the back of my head for a while. So if I come up with anything else, perhaps a grander solution, I'll certainly bring that back to The Bike Shed. But yeah, what else is up in your world? I want to hear the story of the class variables. STEPH: Well, it is quite a journey. So I hope you're ready. 
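Two sketches of the ideas just discussed. First, the timestamp-backed boolean: the time_for_a_boolean gem packages this up, but the underlying pattern is small enough to show by hand (the Ticket model and paid_at column here are made up for illustration, not the project's actual schema).

```ruby
# Hand-rolled version of the "store a timestamp, infer the boolean" idea.
# The time_for_a_boolean gem provides a more polished take on the same pattern.
class Ticket < ApplicationRecord
  # assumes a paid_at:datetime column instead of a paid:boolean column

  def paid?
    paid_at.present?
  end

  def mark_paid!(at: Time.current)
    update!(paid_at: at)
  end
end

ticket = Ticket.create!
ticket.mark_paid!
ticket.paid?    # => true
ticket.paid_at  # => the higher-fidelity "when was it paid"
```

And second, the paper_trail approach to keeping history: once a model is versioned, each state change is recorded as its own row, so questions like "how many applications moved from pending to active?" can be answered after the fact. The Application model and status column are assumptions for the sake of the example.

```ruby
class Application < ApplicationRecord
  has_paper_trail # one version row per create/update/destroy
end

application = Application.create!(status: "submitted")
application.update!(status: "pending_review")
application.update!(status: "active")

# Every transition is now queryable history:
application.versions.map { |v| [v.event, v.created_at, v.whodunnit] }
application.versions.last.reify # the record as it looked before the latest change
```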
Specifically, I was pairing with Joël, who was working on fixing a test that had been marked as being skipped for a while. We weren't really sure why. We figured maybe because it's flaky. But then, as Joël had restored that test, he realized it was actually failing consistently. So it was a test that was failing for a reason folks maybe didn't understand, but they decided to cancel or to skip that test. But they didn't actually want to get rid of it because it seemed like a pretty important test based on the description. So Joël saw it and got excited because it seemed very relevant to some of the work he was already doing. So then, he is now investigating why this test is failing consistently. So in this story, we have four main characters: we have a class, two modules, and a class variable. So enter the class stage left. All right, so this class defines a class variable which, I have to say, is not something I work with very much in Ruby. So class variables kind of felt a bit novel, and diving back into like, oh yeah, these are a thing. So the class defines a class variable that's called cache and assigns this variable to an instance of a cache. So then this class includes two modules who we'll call Module A and Module B. And we'll enter them stage right. And both of these modules look to see if cache is already set. And if it's not, they also set the cache class variable. So with that information, in our test, we don't want to exercise the real cache just because then if other tests are reading from that cache, which is proving to be a source of flakiness for these tests, then they are overriding each other's expectations, and it's causing some of the tests to flake. So instead, we want to use a fake cache, just like an in-memory cache. So the test and its setup is already overriding. It's setting that class variable to say, hey, I want you to be a fake cache, just be in-memory. However, while executing that test, one of the modules is checking to see if that cache is set, which is being set in our test setup. So test setup sets the value. We're running the test, but then the module checks to see if it's set, and it's suddenly nil instead of using the cache that we had set. So now it's defaulting back to say, "Oh, it's unset. So let me go back and set it to the real cache," which is exactly what we're trying to avoid. So then the question became, if we're setting the class variable in our class, why is it being populated in one of the modules but it's not being populated in the other module? So one of them has it set to the in-memory cache, but the other one does not. So I'm going to gloss over some of the details because this stuff is pretty tangled. But essentially, when the test is running, and it's loading the class, and we are overriding that class variable, it's getting shared with one of the modules because as soon as one of the modules does set that class variable, there's a bidirectional link that gets set between the parent class, which is the module in this case, and the class itself. And as soon as that module sets the class variable, then they're going to talk to each other, and they're going to reference the same value. However, this only seems to happen for one of the parents. You can't do this for both. So if you have two parents that are trying to share a class variable with the same class, that doesn't work. So that's a particular bug that we were running into.
I do have some good news because if anybody is very nervous about the situation that I'm describing, I feel you. The good news is that in Ruby 3, they actually warn when this is happening and have introduced an error. So you don't have this inheritance confusion that can come out of the fact that these parent classes are also trying to share a class variable with this child class. So in Ruby 3, if you are writing a class variable in that class but then you try to overwrite that class variable in the parent of that class or by the module that's being included, then an error is going to be raised. So it's going to warn you if you're creating this bidirectional link between those two class variables and that you shouldn't be overriding the child's ownership of that class variable. Instead, if you're going to use class variables, which, one, is not my cup of tea, but if you're going to use class variables, it should be defined in the parent class, and then it can be shared downstream in the inheritance versus trying to go upstream and then having your ancestors essentially override some of those class variables. So all of that is to say we were on a very interesting journey of understanding how class variables work, how the inheritance works, how that bidirectional link is getting established, and then how Ruby 3 comes in to warn us if something funky is happening. CHRIS: Oh, that is interesting. And I'm now going to catalog that as a piece of information that my brain will retain for roughly the amount of time that we are recording this podcast and then immediately forget. STEPH: As you should. [laughs] CHRIS: It's one of the reasons that I try to avoid inheritance. And I try to avoid class variables as much as possible because of this category of problem, a very subtle bug that you have to try and really hone in. And you have to be very smart to debug this sort of thing. I don't want to be that smart. I want to code in a way that I can be less smart on any given Thursday. That's my goal in life. I will ask one other question, though. So there's just a cache that this class and pair of modules are hanging around with, and then you want to swap it out for in-memory. This sounds remarkably like the Rails cache. Is this cache distinct special? Could it not just be backed by rails.cache, THE cache within the rails context, which can be backed by Memcached, or Redis, or in-memory when you're in tests, or the NullStore, which I think is the default in development is probably how that goes? Is there a particular reason? Is this a special cache? Is there additional behavior that this cache has beyond the normal thing? Or is it just like, at some point, someone's like, oh, I need a cache. I'm just going to use a class variable, that'll be easy, which it definitely is, but then you run into complexities. And caches are one of those hard things to get right. So it's one where I would immediately be like, whoa, whoa, I would love to not make up our own cache here. So I'm wondering, is there a distinct reason, or is it just this happened, and here we are? STEPH: So I think we are using a custom cache that we are pointing to. So it is another service. It's not a Rails cache or an abstraction that we can point to and use. It is a different cache that we are using. And I'm trying to think back to the exact code. But there is a method that essentially checks to say, hey, should I use the real cache? Should I use the in-memory cache? 
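A stripped-down reproduction of the behavior Steph is describing; the names here are invented stand-ins for the client's class, module, and cache, and the exact error wording can vary by Ruby version.

```ruby
module CacheSetter
  def self.prime!
    @@cache ||= :module_cache # defines @@cache on the module itself
  end
end

class Store
  include CacheSetter

  @@cache = :test_cache # the class's own @@cache (think: the in-memory fake)

  def self.cache
    @@cache
  end
end

CacheSetter.prime!

Store.cache
# Ruby 2.x: quietly resolves to one shared copy (a warning only under -w),
#           which is how a test's fake cache can silently "disappear."
# Ruby 3.x: raises RuntimeError complaining that @@cache has been "overtaken,"
#           surfacing the class/module conflict instead of hiding it.
```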
And that is something that we've explored to find a way to make this more global for the test suite because we really want to control this for all the tests. Because it's very easy to not realize in the test that you should avoid using that shared global cache. And so that way, the tests don't interact with each other but instead always use an individualized cache for each test to make sure that it is self-sufficient and independent. But we haven't gotten that far yet in figuring out how we can take a more global approach with this. CHRIS: Gotcha. So I don't know the details. I assume there are reasons here. But just to play this out, if we find ourselves saying we have a reason to have a distinct cache, to have a special cache over here, but it's a cache...and caches fundamentally, that word always will raise my attention. It will be like, okay, this is a place that bugs will come and aggregate. And we need a distinct one that has special behavior as an external service, or that is just something like in... There's a wonderful blog post that Mike Burns wrote at one point that was about...I think it was something like things that will make me look at your pull request in more detail. And I really loved it because it did capsulate all of these like, yeah, there are good reasons to do everything on this list. But if you do any of them, I will look at your pull requests and be like, oh, that's interesting. Why are we doing that, though? Do we have to do that? Are you sure? Are you triple sure we have to do that? And this is definitely one of those things where caches automatically catch my attention. Even if we're using the built-in cache, I'm like, do we need to? Is that a definite thing? And then all the more so when we're using a custom bespoke one. Again, I assume that there are reasons that there's something special that's going on here. Perhaps the caching behavior is distinct from just it's Redis, and we throw data. And if it falls out the backside, that's fine. Maybe you need entirely different behavior here. But it is something that I would poke at a bunch. STEPH: Yeah, you're asking a lot of good questions. I will have to go back and look at some of the code because we spent enough time in Ruby specifics that I didn't pay as much attention to the cache. Because right now, as we are working on these tests, we're trying to fix just the test without changing the application code, one, because that feels like a safer space. And if the test is flaky, we're just trying to change the test first. But some of these tests we're starting to realize I'm not sure we can fix the test without also changing some of the application code, or the way that we do have to fix the test is really an incentive to back up and say maybe now's the time that we look at some of the application code. Because another question that comes to mind is why use a class variable, and does this need to be shared by the class and the modules? And there's a part of me that suspects that maybe some of this logic was extracted to a module, but then it wasn't cleaned up in the other places. And so that's why we still have a reference. And it's essentially then being shared and set and unset and reset in those different places. So I think you ask some good questions, and I have some more questions of my own when we have time to revisit that portion of the test and application. 
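One common shape for the "more global approach" Steph mentions (sketched here with a made-up AppCache handle, since the client's actual cache interface isn't described) is to hand every example its own ActiveSupport::Cache::MemoryStore in a shared RSpec hook:

```ruby
# spec/support/fake_cache.rb (hypothetical file)
RSpec.configure do |config|
  config.before do
    # Give each example an isolated in-memory cache so tests can't
    # read or overwrite each other's cached values.
    allow(AppCache).to receive(:store)
      .and_return(ActiveSupport::Cache::MemoryStore.new)
  end
end
```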
As another example of some of the tests that I've been working on, one of the tests that I...because we have a list, we can usually tell some of the tests that are flaky. So one of the ones that I was investigating was a similar issue where there was a shared resource, and someone had tried to mock it out. So they had taken the time to say, hey, I don't actually want to use that real resource that's over there; instead, I want to just return the canned value. But instead, they'd accidentally stubbed out a class-level method instead of the instance-level method. And so it was running, but it wasn't actually stubbing anything since that's the method that's not getting called. So that was just an oversight for that test. So I fixed that test. But I noticed that we were using allow_any_instance_of, so then I did take the time to go through that file and change and move away from the use of allow_any_instance_of. And for folks that are less familiar with allow_any_instance_of, RSpec has some really great docs that talk about how it's very helpful for dealing with legacy code. But essentially, it is a code smell if you're using allow_any_instance_of, because you are saying that my test is or my code is so complex that I can't really mock out the specific instances that I want to and then return specific behavior. So instead, I'm having to use this more global approach to say, hey, for any instance of this class, I want you to mock out this method, versus this very specific instance that I know that I'm working with. But we can include a link in the show notes because there's a nice write-up that talks about some of the reasons that allow_any_instance_of is not recommended. So that's been kind of fun. There's been a little bit of joy to get to refactor away from that and actually stub out a specific instance. Part of the work, too, that I'm noticing as Joël and I are going through these tests is leaving breadcrumbs for other developers as well because they have a very large team. And they're very junior friendly, which is just incredible. I love that so much about this company. And because they do hire a lot of juniors, and it is a tough codebase...it's a fairly old codebase. So as these juniors are coming in, they're seeing a lot of these patterns. And they're propagating these old patterns that aren't necessarily the best patterns to propagate. But they're doing their best, and then they are reusing what they're seeing. So part of the work as we are revising these tests, my hope is that people will see some of these newer patterns and use those instead of following some of the older patterns.
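To make the two testing issues above concrete, here is a small spec excerpt; Weather, TripPlanner, and their methods are invented names, not the client's code.

```ruby
# 1. The oversight: stubbing the class-level method when the code under test
#    calls the instance-level one. The stub is registered, but nothing hits it,
#    so the real resource still gets used.
allow(Weather).to receive(:forecast).and_return("sunny")

# 2. The escape hatch: this works, but RSpec's docs describe it as a code smell
#    because it papers over code that's hard to inject into.
allow_any_instance_of(Weather).to receive(:forecast).and_return("sunny")

# 3. Preferred: stub the specific instance the code will actually use,
#    for example by passing it in.
weather = instance_double(Weather, forecast: "sunny")
planner = TripPlanner.new(weather: weather)
expect(planner.packing_list).to include("sunscreen")
```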
And yes, they have some shared team documentation that speaks to specifically flaky tests because they're aware that this is a problem. They are working together to address this. And they have documentation that states ways to avoid flaky tests. If you encounter a flaky test, here are some of the ways that you can triage to find out what's wrong. So as Joël and I have been finding good examples, then we've been contributing to that document. And they also have team meetings. So our plan is to attend some of those meetings and be like, "Hey, this is just some of the stuff that we've seen this week, some of the things that we improved and changed," and share the progress that we're making. Since everyone is aware that there are these developers that are working hard to improve the test suite, but then share that information with the rest of the team so they too can feel...one, they can just see the changes that are taking place. But they too can also benefit and apply those strategies themselves when they see a flaky test. Oh, but you did just remind me of a thing. So one of the tests that I was going through...I'm very intentionally going through and making the smallest change possible. So I will do the gross, ugly fix whatever it is to get something to pass, and then I will commit it. And then I'll think about okay, well, how can I make this better? So essentially, I have the fix, whether it's pretty or not. And then, after that, I start to have other commits that make it prettier. And so, I had a pull request that had four commits that told the story that I was very happy about and progressed along in a more positive direction. And I issued that, and I discovered that Gerrit, when it sees four commits, it split all of them into their own change request. And so, instead of having what I thought would be this nice story, now got split across these four change requests. And I thought, well, that's less helpful. So I ended up squashing two of them, but I still kept three of them because they stood alone, and each told a story. But that's something that I've learned about Gerrit. CHRIS: Always so interesting how our tools shape our work. STEPH: And it made me think back to the listener who asked the question about ensuring that CI runs for each commit. Well, here you go, Gerrit. [chuckles] Gerrit does it for you. It ensures that every commit gets split into its own change request. CHRIS: I mean, as you said earlier, not my cup of tea but... [laughs] STEPH: Yeah, I'm still lukewarm. I'm still discovering Gerrit and how we get along. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. 
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. What else is going on in your world? CHRIS: In my world, we keep adding new users to the system. We keep doing more stuff. These are all wonderful things, the direction you certainly want to be heading. But as we're doing that, I've recognized that we had a lack of process and a lack of formalization of certain things. And a lot of the noise of the work was just coming to me because I was the person that everybody knew. I can ask a question; Chris will know the answer, et cetera. And then there were things that we needed to keep an eye on. But because it was everyone's job, it was no one's job. So we've introduced the idea of a point person on the engineering team. So this is a role that will rotate each week. I think you and I have worked on a handful of projects that had something similar to this. There was a team that we worked with that had an ad hoc list, which were just little tasks that needed to be done by developers. So there was one person who would run with that. I've heard it called captain before, the sprint captain. We're not really doing sprints. So for various reasons, that title didn't work for me. But point person is what I went with here. And so the idea is rather than having product management or anyone else in the organization just individually reaching out to developers, we want to try and choke that off, have a single point of communication. And so just today, I introduced into Slack, a group, but it's a group of one person. So @pointdev is technically the handle for this person. It's a group in Slack. And each week, we'll rotate who the members of that team are. And technically, you could add multiple, but the idea is this is just one person. So we'll rotate the person. And what ends up happening is if anyone...say the product manager says, "@pointdev, what's the status on..." blah, blah, blah, that will notify the person who is the point person that week. So that's a nice feature in Slack so that we can condense it down and say rather than asking individuals, ask this alias. We're introducing one layer of abstraction in our communication tools, much like we do in our software. So I'm drafting now the list of like, here's all the stuff that I think this person...because we're trying to push all of the quote, unquote, "other work" the non-product feature development work into this person's purview for a given week. So it's monitor Sentry for any new errors as they come up, triage them, and figure out what we want to do. Ideally, and this is perhaps aspirational, I would like to keep inbox zero in Sentry. I know how you feel about that more generally and perhaps even more specifically within the world of errors, but that's my dream. We're going to see how it goes. STEPH: I don't know if people know I am the opposite of inbox zero. This is the life that I'm living. CHRIS: What about with errors, though? What about something like Sentry? STEPH: I want to say that I would be a better human with my email. But I'm going, to be honest [laughs] and say that I would probably have the same approach where I am not an inbox zero person. I've come to terms with it. I used to really strive and think I needed to change. 
But I have reached a point of comfort with this is who I am. There are many like us, so shout out to all y'all. CHRIS: Oh yeah, by far the more common approach, I think. So specifically with the errors, I struggle a bit with it because what ends up happening is we are implicitly ignoring the errors. And if we're doing that, I would rather just sit around and have a conversation and be like, let's just explicitly ignore them. There's a button in the UI. We can ignore them. If this is not a real error, we can add it to the list of things that we do not report on. We can ignore that error. We can ignore it for a week and add a card to Trello that has a due date that says, "Hey, we got to work on this." But let's take that implicit indifference to that particular error mode of our application and make it explicit. Let's draw that line in the sand such that when I see a new error pop up, I'm like, oh, that seems like something I should do something about. I really want high signal-to-noise when I'm seeing errors coming. And so I'm willing to work for that. But it is a trade-off, and it does take effort. And it's noisy, especially browser extensions, and whatnot, just fighting the page. Facebook showed up one day. I don't know how Facebook got in there. Someone was browsing our website from within Facebook's browser, which I didn't know was a thing, but they had their own thing. And it fires a bunch of events, and Sentry was just like, let me slurp all of those up. Those seem fun. That was noisy. So we had to turn those off, but we explicitly turned them off. STEPH: I do like the approach that you're taking where it's one person, and then it's a rotating shift because I think that makes it more reasonable for someone's who's like, hey, this is going to be noisy for a week. And then you're going to look through these emails and check all these errors, and then either silence them because you don't think that they're interesting or mute them for now. Or if you're going to convert it into a ticket, set a due date, whatever the triage approach is going to be. But that feels more achievable versus inbox zero for life is just exhausting. But I feel like if you're doing it rotating week by week, that seems like a nice approach and also easier to keep it at inbox zero because that way, you are keeping up with all the errors. Because I agree; otherwise, what's the point of tracking all the errors if you're just going to ignore them? CHRIS: Yeah, definitely the rotating, I think, is critical. I think the other thing that's been critical specifically on the error front is we've had now a handful of meetings where we triage the backlog together, the backlog of errors. So like, what all is coming into Sentry? What's going on? And we go through the process of determining is this a real thing? Should we fix this? Should we ignore it? And we do that together so that it becomes not just one person's intuition about whether or not this is important or not or what the source of it might be but a shared intuition such that now any one of us, when it's our week, can ideally represent the team in that way and be like, never mind, never tell us about this again because it's very easy to silence things in Sentry that you would actually like to know about when they become real. But right now, we have this edge case that is an ignorable version. So trying to get there that's been fun. But yeah, once again, Sentry, that's one of the things on this person's list. There are ad hoc support tickets for our operations team. 
So anything that needs to happen on a user's behalf that currently needs a developer to console, let's funnel all of those to this one individual, respond to any new questions. So this is where that Slack handle will be useful. Check for any stuck jobs in Sidekiq. So is there anything that's been retrying for a while? Because it probably shouldn't. Maybe one or two retries is cool, but past that, something has gone wrong. And we should either get in there and fix it or just kill that job because it's never going to succeed, which is quite often the case but go in there and keep an eye on those and then look for anything. We're starting to use due dates within Trello, which is currently our project management system. We'll see. Someday we're definitely going to grow out of that. But for now, it's good enough and checking for anything that's overdue or coming up in the next week in terms of due dates and just making sure that we're being responsive to that. And so, I really like the idea of having this be a named set of things and a singular focus for one individual. Because again, that idea of like, if it's everybody's job, it's nobody's job. Or if it's nobody's job, then it's my job, and I don't want it to exclusively be my job. [chuckles] So I'm trying to make it not exclusively my job and to share the knowledge about it and make sure that these are skills that we all have and ideas and et cetera. But also, I would be fine to answer fewer questions in Slack each day. STEPH: I have to admit, as soon as you were telling me that you had established this role, I was quietly congratulating you on helping delegate some of these responsibilities to the team. Because like you said, you are then the person that takes on all these tasks. CHRIS: There's a laziness to that. Like, it's easy for me to just answer the questions. It's harder for me to put up a wall and say, "No, no, we have a process for this." And quite possibly, what's going to happen behind the scenes is that questions are going to come in to whoever is this point person. They're not going to know the answer. They're going to reach out to me, and then that conversation is still going to happen. But even by doing that now, now that person will see that answer, will understand the thinking or the background, the context that I have. And so it's that weird thing of like, it would be so much easier for me to just answer one question. But to answer all the questions, well, I can't do that. And so I'm working to try and do more of the delegation to try and hand things off when they're in a known state and to identify this sort of stuff so that the team broadly can be stronger and better able to support everyone else in the organization. So that's the dream. We'll see how it goes. STEPH: Yeah, I love that approach. I'm also thinking how interesting this role is because I'm imagining a mix between someone who is like the front point person at like an ER. So like, things are coming in, and they're in a tragic state and need help and need to be diagnosed. But at the same time, you mentioned they're going around. They're checking Sidekiq. They're looking at some email errors. So they're also that night shift guard that's walking around with a flashlight just poking in each room. So it seems like a very stressful and low-key role all at the same time, all mixed up into one week. That person probably needs a beer at the end of the week. 
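Two of the point person's chores described above lend themselves to small, explicit bits of code. For the stuck-jobs check, Sidekiq's public API exposes the retry set; the threshold and the choice between retrying and deleting are illustrative.

```ruby
# Console sketch for "has anything been retrying for a while?"
require "sidekiq/api"

Sidekiq::RetrySet.new.each do |job|
  next unless job["retry_count"].to_i >= 3

  puts "#{job["class"]} (#{job.jid}) has retried #{job["retry_count"]} times"
  # job.retry   # push it through again once the underlying problem is fixed
  # job.delete  # or kill it if it's never going to succeed
end
```

And for the "explicitly ignore it" stance on Sentry noise, sentry-ruby lets those team decisions live in the initializer rather than as silenced items in the UI; the excluded exception and the NoisyEmbeddedBrowserError class below are examples only.

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  # Errors the team has decided, together, never to report:
  config.excluded_exceptions += ["ActionController::RoutingError"]

  config.before_send = lambda do |event, hint|
    # hint[:exception] is the original exception, when there is one.
    return nil if hint[:exception].is_a?(NoisyEmbeddedBrowserError) # hypothetical class
    event
  end
end
```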
CHRIS: There is a version of the story in my head that is...I wouldn't say this feels like a failure mode, but I would rather this not have to exist at all. I would rather things to be calmly humming along and not require a dedicated person each week to deal with the noise. I don't think that's realistic, certainly not as early on as we are in our organization. But I do wonder, is this a crutch? Is this something that we should be paying more attention to? And I know in teams that you and I have worked with in the past that has been a recognition of like, this is a crutch. But it's a costly crutch. Like, we're taking an entire...in our case, it's not requiring the entirety of a developer's week. They're able to do this pretty easily and then still get a bunch...like, 75% of their time is still feature work. But we're just choking down who's the person that will be responding to questions when they pop up so that fewer individuals are interrupted? But I have seen organizations where this definitely filled an entire week and spilled out more than. And then there was the recognition of that and the addition of another person that comes along and tries to fix stuff along the way as opposed to just responding. And so I want to make sure this isn't a band-aid but is, in fact, a necessary layer that we then try and shore up, you know, we should have fewer errors. That feels true. Okay, cool. Let's fix the bugs in the app. And these ad hoc things that an admin needs to have done can that be a button in the UI? Can they actually self-serve in those cases? And we're slowly moving towards those. Ideally, fewer jobs get stuck in Sidekiq. And so, my hope is that this isn't a job that gets harder and harder over time. It's a job that potentially, if we're being honest, probably stays about this hard. I don't think it's ever going to be just like, nope, nobody needs to do anything. The app just runs, and it's great. And it never has bugs. But that is a question in my mind as I start to embrace this thing of like one person is dedicated for a week to this. And if right now it's only 25% of their time, okay, that's probably fine. But if suddenly it's 50% of their time or 75% or 100% of their time for that whole week, that becomes too high of a bar in my mind. And I want to keep a close eye on it and make sure it's not trending in that direction. And I will be one of the people on the rotation. So I'll get to be in the trenches. STEPH: I appreciate all the thoughtfulness that you're putting into it. And I'm thinking back on a project where we had a similar rotation because we had an issue Slack channel. And so anytime there was an issue, then it would get posted in there. And before, it was going out to everyone, or there was one particular person that was always picking it up and then trying to delegate it to others as they needed to. But then we started a similar rotation. And one of the key benefits that I found from that is it signaled to the team, hey, this person might get pulled away. They can pick another ticket or two, but we need to give them lower priority tickets because there's a chance that they're going to get pulled away to work on something else. And that's okay, and we're going to plan for it. Versus without this role in mind, then you had people all taking on high priority tickets, but then someone had to be the one that's like, well, I'm going to punt on my high priority and feel stressed about the fact that I've got this other thing to deal with. 
But then, I didn't actually do the work that I planned for. So I feel like you're helping introduce calmness into the week, even if it is a stressful role. But then there's the goal that this becomes less of a stressful role, and if you see it trending in the opposite direction, then that's something to investigate. But I also feel like triage and communication is such an important part of being a developer that it also feels like very relevant upskilling for the whole team to go through. So there's also that benefit of where this approach also empowers the rest of the team to also experience it, build empathy, look for additional fixes, and then also build these important skills. Overall, I really applaud your thoughtfulness. And I think it's a really good idea. And it will be interesting to see which direction this role trends, if it gets easier or if it's getting harder over time. CHRIS: Well, thanks. I appreciate that. And I'll certainly report back as we develop this but hopefully, it stays about where it is. That feels right. And I think I'll probably...that's one of those things that I will monitor. And if I feel it moving in the wrong direction, then step in and try and get it back to this space because this feels like a maintainable, reasonable amount. And we shouldn't be fixing every bug and adding every button to the UI. That's just actually not how it works, unfortunately, would love to. That's not true. You shouldn't have every button in the UI. That's so many buttons. But broadly, I hope we can maintain roughly this, and I think identifying it and laying it out now, I'm feeling good about having that structure. So yeah, we'll see how it goes. Will report back. But again, thank you for the kind words. With that tour of a bunch of different things, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes as it really helps other people find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeee!!!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph and Chris recap their favorite things of 2019 and 2020 and share their 2021 list. Happy Holidays, y'all!
Steph:
* Feature flags and calm deploys
* Creating observable systems
* Debugging
* Working in seasons
* Don't forget the fun
“The longer I'm in the software game, the more I want things to be calm” - Steph
Chris:
* Pushing logic back to the server
* Svelte (https://svelte.dev/)
* Remote work (but maybe hybrid!)
* Vim
* Joining a startup as CTO
This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Listen to episodes from 2020 and 2019
This week's episode of the Zero to a Million Podcast with Zach Rego features Chris Meade, CEO and co-founder of CROSSNET. He recalls how they came up with the idea of a crossnet as a fun mashup of their favorite games and how they made up the rules as they played it.
Chris shares their go-to-market strategy of just playing the game and their marketing strategies which leaned heavily on Facebook ads. One of their first main obstacles was their messaging as different people with literally different voices would speak to the market on various platforms differently.
LinkedIn has also been massive in connecting Chris with people who he thinks are great fits for Crossnet. He shares the difficulties in finding a trusted and reliable PR agency to get eyeballs, and how they work today with Amazon agencies for PPC and others for Google and Facebook ads and email.
SHOW NOTES
HIGHLIGHTS
Inventing Crossnet: The cross between four square and volleyball
Going to market by demoing the game in Miami and word of mouth
Marketing: Paid ads on Facebook and unifying brand message across platforms
Finding true partnerships in PR, Amazon, Google ads, Facebook ads and email
Connecting on LinkedIn
Retail support and building communication with buyers
QUOTES
Chris: "Everyone talks about proof of concept and all this garbage. That was our proof of concept. We just played and we had fun and we're like, alright, let's get to work."
Chris: "It's all about the messaging, making sure they're playing correctly because nobody wants to buy a product that they're playing terribly. It looks awful. It's all about getting them outside and actually playing and getting them to be a part of our community."
Chris: "We never paid for press a day in our life. It was reaching out and being genuine. The best thing has been LinkedIn for us. I built up a very, very nice LinkedIn following. I love to read, I don't know about you, but I genuinely enjoy reading good entrepreneurial pieces."
You can connect with Chris in the links below:
LinkedIn - https://www.linkedin.com/in/ACoAAAlGNrQBqj1pCKbD1o4lNdalsiF6Arb5rJs/
Website - https://www.crossnetgame.com/
Steph started a new project and shares details about the new tools she's using, including working on a remote dev environment. Chris shares a journey with Lograge and Rails flash messages as he strives to capture user-facing errors. They also discuss "silencing" flaky tests, using Graphviz to visualize data dependencies, and porting Devise views to use Inertia and Svelte. It's also interesting how different their paths have been this year! This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Joel Quenneville (https://twitter.com/joelquen) GitHub - roidrage/lograge: An attempt to tame Rails' default policy to log everything (https://github.com/roidrage/lograge) Graphviz (https://graphviz.org/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Tech talk nonsense and songs, that's what people come to The Bike Shed for, variations on the Jurassic Park theme song, you know, normal stuff. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. Let's see. So I've started a new project. So frankly, there's a ton of new stuff in my world. And I've been on the project for about a week and a half now. I started over the holiday, and it's been going really well. Still in that whole early stage with getting to know the application, the codebase, the processes, the team, all the dynamics. It's a large company. So I'm working with a small group of individuals, but there are about over 100 developers that work at this company. And they do have a lot of documentation, which has been very helpful. But there's a lot to learn in terms of setup and processes, specifically. So they have provided a laptop that I'm using to access their codebase. So I'm using their laptop. And then, I am also using a dev machine, a remote dev machine, that they have set up for me. So I need to be on their VPN and SSH into that dev machine. So that's novel as well. CHRIS: Ooh, I'm very intrigued by that bit, not that they gave you a laptop bit but the dev machine. This is in the cloud sort of thing? What is this? I'm very intrigued. STEPH: I don't know if I have concrete answers for you. But yes, for me to be able to access their codebase, I have to go into the dev machine. And then that's where then I can do my normal development work. CHRIS: So is this like an EC2 instance or something like that that you're SSH-ing into, and then you can run processes on it? Or is it closer to the GitHub dev containers thing that they just released? Or are you running with your local Vim? Is it a remote Vim? Are you using Vim? Is it VS Code? I have so many questions. STEPH: [laughs] I think it's more like the first version, although I don't know the backbone of it. I don't know specifically if it's an EC2 instance or exactly how it's being hosted and how I have access to it. But I did have to set everything up on it. So they started the dev machine up for me. Their DevOps team started an environment where then I could access, and then I did need to cultivate it to my own habits. So I had to install several things. 
I had to install Brew and Vim and also the tmux and all those configurations that I'd really like to have. They do have a really nice Confluence document that walks you through how to set up a connection between VS Code and the remote environment. So then that way, you can really just hang out in VS Code all day. And initially, I was like, okay, I could do this. And immediately, I was like, no, I love Vim. I'm going back to it even if I have to spend the 20, 30 minutes setting it up. I'm so comfortable with Vim and tmux that I stuck to my roots, and I didn't branch out into VS Code. But I think VS Code is one of the more popular tools that they're using. So that way, it feels more local versus having to work in a remote machine. I think I answered some of your questions. I don't think I answered all of them. CHRIS: Yes. I think you did answer all the questions. But just for clarification, the Vim and tmux and whatnot setup is that you're running SSH, and then on the remote machine, you are using Vim and tmux? Or is it a local Vim that is doing…I think Vim has some remote editing capabilities but not anywhere near what VS Code can do. STEPH: It's the first setup. So I am SSH-ed in. And then I have Vim and tmux running on that remote machine. CHRIS: Gotcha. Novel. STEPH: Yeah, it's a thing. It's working. So that's good. And it feels cozy. I feel like I'm at home. I feel like I can be productive. So that's great as well. Some of the other tools that I'm also new to, so they use Zeus, which is used to then speed up the booting of your application. And you can also use it for speeding up test runs. So very similar to Spring, which I think we've had some discussions about Spring and who loves it and who doesn't. [laughs] CHRIS: I don't know. I'm not...[chuckles] I feel like I remember Zeus. But Zeus is like three iterations ago of this preloader thing. I'm intrigued by that. I thought Spring had fully supplanted it in the Rails ecosystem but maybe not. STEPH: So this company has been around for a very long time. So there are a number of tools that I think they're using because that was the tool to use the day when they got started. And then it just hasn't been a need to move on to one of the newer tools to use Spring. So at least that's my current explanation for why we're using Zeus. And also, Zeus works most of the time. I'm frankly still getting comfortable with it. [laughs] I still have gripes about Spring too. CHRIS: 60% of the time, they work most of the time. STEPH: [laughs] So, Zeus is another new tool that I'm adding to my tool belt during this engagement. Another new tool that I'm using is Gerrit. And so they use Gerrit…it is used for managing their Git repositories. It is used for code reviews. And being as accustomed and familiar with GitHub as I am, that one has been a little tricky to then navigate and change the whole UI that I'm used to when it comes to pushing up code, reviewing code, asking for feedback on changes. And at one point, I was reviewing a change request for someone else. And there's a button on there where I was adding comments, but they were in draft mode. And I'm trying to figure out how to get them out of draft mode so that they're actually submitted, and the other person could see it. And I saw a submit button. I was like, cool. So I hit the submit button. And then it said something in red text about ready to be merged into main. [laughs] I was like, oh, no, I mean, maybe, but that's not what I meant to do. 
So I had to reach out to that person and be like, "Hey, I'm new to Gerrit. I don't know what I did. I hit a button. I hope everything's fine. Here's my review. Best of luck. [laughs] I think everything is fine. Nothing dramatic came out of it. But I had my own little dramatic moment. CHRIS: Wow, that is a bunch of new stuff. It's interesting. On the one hand, I totally understand projects get started, and there's a certain set of tools that are current at that point, and so then you're using them. And then, over time, it takes a very active effort to try and keep up with the new current, that new-new as we call it. But the trade-off there is really interesting because, at any given time, it never feels like the right investment to pursue the new thing to just upgrade for upgrading sake. But then the counterpoint is the cost to someone like you coming onto the project. And it's like, it's a bunch of new stuff. It's kind of old stuff. It's new for me, but it is old, and less documented, and less familiar. And it's also certainly less compatible with other things that are going on, almost certainly. And so, how to stay on top of those updates is always the thing that's really intriguing to me. I say as someone who started a project recently, and I have not thought about upgrading anything at this point. And we have bundler-audit I want to say is the one thing that we have in there. So if there's a CVE for a gem, then security-wise, we will be upgrading those. But otherwise, I haven't thought about upgrading our Ruby version or anything. And I think we're on 2.6 or something like that, which is a couple back at this point. And so it's something that's in the back of my mind. I feel like I should have a formal answer to this. Like, company-wide, how do we think about the process of upgrading? And Dependabot and things like that answers some of it, but that doesn't tell me when to upgrade Ruby, I don't think. It could. That would be annoying. I don't want that. But it's one of those many things that depends and is subtle. And you have to decide where you put the trade-offs and whatnot. So just an interesting thing. And to observe you now going into this project building and being like, there's a bunch of new stuff. STEPH: I think it really takes passion or pain. Those are the two things that then prompt us to upgrade. Either it's pain, and you need to change it to get rid of that, or it's passion. So you're really excited about the next version of Ruby or the next version of Rails. And I think that's fine. I think that's fine that those are often our drivers. But yeah, that is interesting. I hadn't really thought about that in terms of there's often no real strict process around when we upgrade except those are then the natural human catalyst. CHRIS: I think you're right that those are the catalysts. But I think quite often those cannot be sufficient to push us to do the work. And so what do you do in the absence of that? It's not really painful. And I'm not really passionate about it. But I probably should do it is the 80% of the time middle space that we live in. And so yeah, I don't have an answer to it. I'm more observing the question. But like so many other things, I feel like often we just exist in that awkward middle and got to find a way through, so how like life. STEPH: I was having a conversation with someone earlier a bit about these life cycles that we live in. Specifically, we were talking about consulting and how changing from project to project is so daunting. 
Because you go from I'm accustomed to this project, I'm accustomed to the team. And then all of a sudden you jump into this new project and with all these new things it can be really interesting. But then there's also this feeling of like, wait, I used to be smart, and I knew everything that was going on. And the team knew me, and I knew all the team processes, and I felt good. And now I'm in this totally new space, and I have to relearn, and I have to reprove myself and relearn all the company politics. And there's always that initial jumping from a sure space over to a very new space that always makes me then question and be like, yeah, I can do this, right? I can do this. And then I have to keep letting that voice build until about two weeks in. And I'm like, oh okay, I'm back in a good spot. I said two weeks; it's probably more like four. But there's still that grace period of a new project where you're leveling up on all the things and learning the new team. And as daunting as it is; apparently, it's what I like. Apparently, I like that roller coaster ride that comes from jumping from one project to the next. So on that note of a bit of novel insight into myself, what's new in your world? CHRIS: What is new in my world? Let's see. I think I've got two updates, two anecdotes to share. One, I lost the battle, one I won the battle. So we'll go with the lost battle first because that seems fun. So we have Lograge on this application, which Lograge, for anyone that's not familiar, is a library that helps with producing more structured and more complete log lines from a Rails application. You can tell it to do JSON log lines, which is useful for many of the tools that will receive your logs. And then with it, you can say grab me the controller name and the params but sanitized and this and that. And so, you aggregate a bunch more data than would traditionally be in the logs. In general, I've just found it to be a much better foundation. I find the logs to be more readable, and more informative, more useful, all those lovely things. But slowly, I've been looking at what's the other stuff that I want to have in here? What else would be nice to know? So one example is we use Inertia on this project. And Inertia has a particular way in which errors get mapped back to the front end. And it's an interesting little trick that involves the session, but that's sort of an aside. Basically, this is something that the user will see that I would love to know about. So how many users are hitting their head against the wall? Because typically, whenever these errors happen, that means this is a flash message or something like that we're going to show to the user. So we were able to add that into our log lines. Now we can see those. We can aggregate on them. We can do counts. We can do alerting and monitoring, all those kinds of fun things. So cool. That was great. That worked well. I then specifically…I mentioned the flash a second ago, but that's actually not…the Inertia messages will not show up in the flash. They end up in forms inline on certain inputs or whatnot. But we do also use the flash message pretty regularly as a way to communicate to the user success or failure or what have you. And I really wanted to get those into the logs. And I tried very hard, and I failed. I gave up. I threw in the towel. I raised the white flag. 
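(For anyone following along at home, a Lograge setup in that spirit might look roughly like the sketch below. It's only a sketch: the extra payload keys shown here are assumptions for illustration, not what Chris's app actually logs.)

# config/initializers/lograge.rb
Rails.application.configure do
  config.lograge.enabled = true
  # One JSON object per request instead of the default multi-line Rails logging.
  config.lograge.formatter = Lograge::Formatters::Json.new

  # Merge extra request data into that single log line.
  config.lograge.custom_payload do |controller|
    {
      params: controller.request.filtered_parameters.except('controller', 'action'),
      user_id: controller.respond_to?(:current_user) ? controller.current_user&.id : nil
    }
  end
end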
So the nature of the flash, which is something that knew in the back of my mind but I had never really experienced as pointedly as this, is the flash is a magic value within the Rails ecosystem that can be written to and then once read clears itself. That's the nature of how the flash is supposed to work. And it persists across requests. So it's doing some fun stuff there, which I assume is tunneling through the session or maybe putting it into a cookie. I'm not actually sure. But there's some way that you post to an endpoint, and then you get redirected to the show page. And on the show page, we actually display that flash value. But the flash is set on the controller endpoint that is handling the POST request. So this value spans across two request-response life cycles, which is interesting. And so the manner in which that works is Rails is managing that on our behalf. We write to it on the one side. And then, when we do the subsequent requests, if there's a value in the flash, we show it to the user, which is why occasionally you'll see those weird things where that flash message shouldn't show up. But it's like a sticky value that was left in the system that didn't get cleared via one thing or another. But I really wanted to put those into the logs. Like, what are we saying to the user is the thing I want to know. This is that question of like, what's my system doing at runtime? I understand what it's doing. I can read the code and understand what should happen. But what actually happened? Are users seeing this flash message way more than they should? That's a question I want to be able to answer. And I have lost the battle. I cannot find a way to read the flash value, put it into my loglines, but then also have it persist through. The first attempt I did, I was able to get it into my loglines, but then it didn't show to the user, which is a bad outcome. Because now I've read the value, Rails clears it, cool, that's fine. There is a flash.keep method. And that I thought would do the thing I wanted, which is like, oh, I want to read this value. I want to tap this value, I want to observe it, I want to peek at it. And I thought this keep method would do the thing that I wanted. It did not. It just caused the flash to be persistent. So now, anywhere I went had the same flash message for forever, which was not the behavior that I was looking for. I then tried, like, all right, just for exploration purposes, what if I reach inside and read the instance variables of the flash objects? Also did not work. Everything I tried did not work. And it had these fun failure modes that just made me very sad. Thankfully, we had feature specs that told me about this failure mode because I would not have known about it otherwise. This was not obvious to me on first implementation. But yeah, I lost, and I feel sad. And then I did the thing that we do, which is I searched Google, and there's nothing. I cannot find…This is one of those cases where like, I can't be the first person who wants to know what's in the flash. I can't be breaking new ground here. And yet I couldn't find anything on the internet. So that's where I'm at. STEPH: That's interesting. Yeah, I'm trying to think…I think I'm one of those people. I don't think I've ever tried to peek into the flash and see what's there ahead of time. And it makes me wonder if it's partially…so we can't peek into the flash. You've exhausted several examples or tries there. 
When you're setting the value of the flash, it makes me wonder if there's an order of operations that you have to pursue. So before you set the flash, you know what messages that you're going to share. So you send that off to the logs, but then also share that to the flash. So instead of writing the message directly to the flash and then having to check the flash, if you just stored that value elsewhere and shared it to the logs first. Is that a reasonable approach? CHRIS: It definitely could work. But that was in the space of this is getting weird enough. I thought about things like that, but I didn't want to do anything weird. And part of the benefit that I get from using Lograge is rather than having multiple lines for each request…so a request came in and rendered this partial and did this thing. It gets constructed such that there's a single logline, which is one big JSON object that contains all of the data about that request. And I really liked that structure because then everything's correlated like, oh, did we 404, or did we 302? And what was the message that we said to the user? And what were the params? It's all there in one line. I found that to be really useful. So I wanted to do that. I could just separately log it. But then I'm also worried of there is a statefulness there. Because again, the flash is written on one side and read on the…it's like a Hail Mary to ourselves between requests. Look at me with a sports reference. And so, I didn't want to try anything out of the ordinary. I really just wanted to find a way to just like; I just want to read this value but not like Heisenberg uncertainty principle observing changes in the system. I found myself in that space, and I was like, can't there be a way that I can just flash.peek? And I just want to take a quick look. I don't want to mess with anything. You do your normal thing, flash. Just let me know. And I do not have an answer for it yet. And for now, this is one of those nice to have, not an absolute requirement. So I wasn't yet in the position of okay, fine, let's do some out-of-the-box ideas here. So I'm still in the in the box phase, I would say, but who knows? Maybe down the road, I'll be like; I would really love to know what the flash message was for that request because this user is seeing stuff that we do not understand. And that information would tell us the answer. So we're not there yet. But I was surprised by how thoroughly I was defeated by Rails and the flash message on this adventure. STEPH: I am equally surprised. I wouldn't have thought that particular achievement would have or is proving to be that hard or, frankly, not doable. So yeah, I'm intrigued to see if anybody has thoughts on it or if you do find a different solution because Lograge is one that I haven't used. But I would be surprised if other people haven't had a similar request of like; I want to be able to store what's in the flash message. Because like you said, that seems super helpful. CHRIS: Well, certainly, if I do figure anything out, then I will share that with the world. But yes, part of this is putting it out there into the universe. And if the universe happens to send me back an answer, I will happily accept that. But yeah, again, I had two stories, and that was the one where I lost. I'm going to send it back over to you because I'm interested in anything else that's up in your world. And later, I'll tell the story of a victory. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. 
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: I have a victory that I can share as well, and I'm excited to hear about yours. So to share a bit more context about the project that I'm on, we are focused very heavily on improving their test suite, not only the time that it takes to run the test suite but predominantly addressing a lot of the flaky tests that they have. Because that is a huge pain point for the team and often leads to the team having to rerun tests. And so, there are a couple of areas that we're very excited to make some contributions. The first part is that we are just looking at those flaky tests to figure out what is going on and how can we address these? And one of the nice things, one of the tools that they're using TeamCity is the tooling that they're using to run their automated test suite. And TeamCity will let you mute tests, so then that way, if you do encounter a flaky test, you can mute it. So then, at least it's not impacting other people. I say this with some asterisks that go along with it because, for people who can't see, Chris is making a very interesting face. I think you have thoughts on this. And the other thing that they will show is a flip rate for the flaky tests, which is really nice, too, because then you can see which tests are flaky the most. So then that helps us prioritize which ones we want to look into. All right, I'm going to pause so you can respond to that comment I made about muting tests. CHRIS: I'm intrigued. I talked in a recent episode about adding RSpec::Retry. So the idea of flakiness being a thing that exists and trying to decide how much engineering effort to apply to fixing it. But the idea of muting it and especially muting it in the UI, not in the test suite or not having that be something that's committed, there's something about that that caught my attention, and thus apparently, my eyebrows raised. You saw that. [laughs] But I don't actually know how I feel about it. This is such a complicated, murky area that I wish I had a stronger set of beliefs around. It was interesting when we talked about the RSpec::Retry thing. I think you rightly pushed back on me, and you were like, that's interesting, maybe don't do that. And I was like, that's a fair point. [laughs] And so now hearing you're in the quagmire of flaky tests, and yeah, it's an interesting space. STEPH: Well, I think my hard belief is that muting tests is a thing that we shouldn't do. 
It's going to lead to more problems, and you're not really addressing the issue that you have. It is a temporary solution to a much bigger problem that you have. And so it is a tool that you can use to then buy you some more time. And so that is the space that this team is in where they have used this particular tool to buy them more time and to be able to keep shipping changes while realizing that they do still need to address these underlying issues. So it is a tricky space to be in where essentially, you've gotten to the point that you do have these muted tests. It is a way to help you keep going forward, but you are going to have to come back to it at some point. And so that's the space that I'm in right now joining the team is that we have been brought in to help some of their engineers specifically address this issue while ideally letting the rest of the team continue to focus on shipping changes while we address the tests. Although I really think there's going to be two angles that we've talked about in how we're going to help this particular codebase. One of them is that we are going to address a flaky test. But the other one is empowering people so that they feel like they have the time and the knowledge to address a flaky test and also not contribute more flaky tests to the codebase. But I appreciate that you called me on that a bit because we've had those conversations around when we should actually address something versus mute it, and all the interesting trade-offs that come along with that conversation. So this particular flaky test that we addressed earlier this week is specific to hard coding primary IDs. The short version is that it's bad, don't do it. The longer version is that they were having a test that was failing intermittently because it would pass the first two runs, but then it would start to fail for all future runs. And the reason it would pass for the first two runs is because when they were setting the ID for a record that the test setup is creating, they were looking for existing records and saying, "Hey, what's your latest ID?" And then I'm going to guess the next ID. I'm going to add one to that to figure out what the next ID should be. Some additional context, when the tests boot up, there's some data that's being created before the test run. So then that's why they're checking to see, okay, what records already exist? And then let's add one to that. The reason that fails sometimes is because then once the tests have run, the Postgres ID sequences aren't being reset, because they're using a truncate approach. So then, when the test runs once or twice, that works. But then, at some point, there's a collision between those IDs where they tried to guess the next ID, but then Postgres is also on that same ID, and it ends up failing. There are also some callbacks. There's some trickery afoot. It took a little while [chuckles] to work through these tests to understand why they're failing. But the short version is that we thought we had to restructure the data in a way that no longer required us to guess what the next primary key should be for a record. We could actually use Factory Bot to generate that record, and then ask Postgres, okay, what ID did you assign? And we're going to pass that in. And that part was really challenging when you're in a new codebase, and you are learning the domain knowledge and exactly how data should be structured. So that was one challenge of it. The other part was that a lot of the data relies on each other.
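(To make that before-and-after concrete, the pattern being described looks roughly like this sketch; the model name is invented for illustration.)

# Before: guessing the next primary key. This passes until the Postgres
# sequence, which truncation doesn't reset, catches up and hands out the
# same ID, and then the insert collides.
let(:order) { create(:order, id: Order.maximum(:id).to_i + 1) }

# After: let Postgres assign the ID and read it back off the record.
let(:order) { create(:order) }
let(:order_id) { order.id }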
So then figuring out the right hierarchy in which we could create the data. So we didn't have a circular reference at some point. It took some time. And Joël Quenneville, who's on the project with me, used a tool that I found very helpful. It's called Dataviz. He went through and documented the let statements, the data that's being created, and then it generated a nice tree structure that shows you okay; these are your dependencies. This is the test setup that you're using. And then from there, just by changing a few lines in that particular file that used to generate that Dataviz tree, he would move it around. And we could simulate what we were already mentally trying to construct in our head. So as programmers, we're already thinking, okay, I know this record needs that data. And that data needs that data before I can build this. But this actually turned it into a concrete visualization where we could see it. And I was really struggling. And he was like, "Hey, I got it into a visual form that we can look at. And there's a circular reference. That's why this keeps happening and why we're not making progress." So then, using that, we were able to then reformat some of the dependencies, look at the graph, see that we didn't have that circular reference anymore. And then we could implement that in code. And it really helped me to be able to walk through that visual aspect because then I could say, okay, this is all the stuff that I'm trying to mentally hold on to, but instead, I can just look at this and know it's going to work. I don't have a circular reference. It also helped concretely show why the previous efforts were failing and why we kept running into some issues. So I'm really interested now in Dataviz because I found it very helpful in this particular case. And I'm very intrigued to see if I can apply this to more tests that I'm trying to fix and to see if I can start out with here's the current structure. Here's where I'm trying to go. And then essentially build that graph first before I start changing the code around. I would love to have that optimization. And I feel like it would speed up the process. CHRIS: It was funny as you started to say that I had observed some tweets going out into the world recently. And I was like, this is Joël. This is definitely Joël talking about these things. As an aside, for anyone who doesn't follow Joël Quenneville on Twitter, @joelquen, I would highly recommend it. We can include a link to Joël's Twitter in the show notes. Joël is one of the clearest thinkers and communicators about programming that I have ever worked with. And in particular, what you're describing of the data visualization is something that I think he does incredibly well. Often he'll make blog posts, but they'll include just simple little visualizations, little images, or diagrams, or flowcharts that just so concretely encapsulate an idea and express it so much better than text ever could. And so, in so many ways, I look to Joël's writing, both on Twitter, in the blog, in many places. And I just appreciate so much what he puts out there and the manner in which he does it. So I was by no means surprised when you said, "Oh, and I'm working with Joël on this project." I was like, yes, I bet you are. That sounds true, and in particular, some of the conversations about flaky tests and determinism and all of that. So yeah, the visualization stuff is also particularly interesting in taking a system that it's very hard to hold all of this in our heads. 
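(As a rough illustration of the idea, not Joël's actual tool, you can get that kind of picture by describing the test setup's dependencies in plain Ruby and emitting Graphviz DOT, then rendering it with the dot command. The record names here are made up.)

# Each key depends on the records listed in its value.
deps = {
  'account' => [],
  'user'    => ['account'],
  'order'   => ['user', 'account'],
  'invoice' => ['order', 'user']
}

File.open('deps.dot', 'w') do |f|
  f.puts 'digraph test_setup {'
  deps.each do |record, parents|
    parents.each { |parent| f.puts "  \"#{record}\" -> \"#{parent}\";" }
  end
  f.puts '}'
end
# Then: dot -Tpng deps.dot -o deps.png
# A cycle in the rendered graph is the "why does this keep failing?" culprit.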
But that visualization, the tree and/or graph thing at play, having that in a picture and being like, oh, look, there's a cycle now. There we go. Can't have those. That's not okay. That's a really interesting solution that's just very cool to hear about and presumably led to a good outcome where you were able to break that cycle. And now you're happy and deterministic in your tests. STEPH: Yeah, it's one of those approaches where I wonder if it was helpful afterwards and how can I make it helpful beforehand? Because it felt like a confirmation of the pain in the process that we had been through. And I'm eager to see if now I can apply it ahead of time and save myself some of that pain. That's where I get really excited. But yes, it was a successful outcome. And we have fixed that particular flaky test. But I'm very excited to hear about your victory from the week. CHRIS: It's a shared victory. It was a team victory, just to be clear. But we are working in a system that is using Inertia. Inertia.js is a project that I've talked about a number of times on the show. I'm a huge fan of it. It is the core architecture of how we're building our application. But as a very brief revisiting of what it is, on the server-side, we have Rails, and Rails is acting in a pretty traditional way. We do not have an API. And on the front end, we have Svelte, which is a JavaScript view layer framework. Inertia sits between them and binds the traditional Rails MVC architecture and the Svelte front end. So again, there's no API in the traditional sense of this is a REST endpoint, and we hit it, and we get some data, and then the front end holds on to that in a store. None of that is going on. Inertia does a wonderful job of marrying these two concepts and allowing us to use familiar programming techniques on the server-side but then also have a more future-friendly front end. Animations and transitions and things like that are now totally possible while not throwing away the entirety of our programming model that we've had in Rails server-side applications. That's all well and good. Almost all of the UI in our application is rendered via Inertia and Svelte. That's great. We love it. The one caveat is Devise. So we have Devise on this project, and Devise comes with a lot of views built-in. And we have both an admin and a user model. So we have sign in and sign up, and confirm registration, and forgot password and all of these different views and flows and things that Devise just gives you out of the box. And being an early-stage startup, it was not a good time to revisit any of that or to try and build it from scratch or any of that. We just wanted to build on the good known trusted foundation that Devise gives us. But the trade-off there is that now all of our Devise logic lives in this uncanny valley. It's the only stuff that is in ERB views. Our styling, thankfully, we're using Tailwind, and so we are able to have some consistency between the styling. But recently, we redesigned the flash messages on the client-side in our Svelte pages. But on the server-side, they are a little different on the Devise side because the Devise pages are the only pages that are being rendered truly server-side. They look a little different. And this is a pain that we felt, that inconsistency or that mismatch between the Devise views and the rest of the application, a pain that we felt but one that we consistently were like, I don't think it's worth the effort to try and change this.
Finally, this week, we've been doing a lot of work on our user onboarding funnel. So the initial signup flow going through it's a progressive form screen where you go in between different pages. And a majority of it is implemented in the Inertia and Svelte side of things. And it's very nice and very fun to work with. But the signup form, the user signup form, is in Devise, and it's a traditional Rails server-rendered post, and then all the normal stuff happens. We finally decided to bite the bullet this week and see how painful it would be to port that over to Inertia and Svelte. And spoiler, it was awesome. It was very straightforward, and coming out of it, immediately, the page was largely the same. The server-side code was largely the same. But now we had things like when you submit this form, if there's a validation error, we don't clear out your passwords because we're staying on that page on the client-side. We're taking advantage of the way Inertia's error flow works. That's a subtlety of how Inertia works. That's probably more detail than we want to get into here, but it's an awesome thing that works and is great. And so immediately, this page just got better. We got inline errors for each of the fields. We were able to very easily add a library called Mailcheck, which I've talked about on an episode a while back. But this is a thing where if you have a typo in your email address, we can say, "Hey, you have a typo in your email address. And if you click this link where we suggest the alternative, we'll just replace it inline." That would have been really awkward to wire up in our Devise view. It would have been some jQuery-esque script tag at the bottom of the view page that doesn't stop…We don't have jQuery actually at this point. We wouldn't have jQuery. And we could certainly, but it would only be for that view. And it would be weird and different in a fundamentally different programming model. It was trivial to do in the Inertia and Svelte world once we had made that port over. This was always my hope. This was the dream that I had in mind. And it speaks to the architecture of Inertia. And Inertia is a really great abstraction that is very minimally leaky. I won't say it has zero leaks because no abstraction does. But this was my hope is I think the server-side should mostly stay the same. And I think the client-side, we just take an ERB template, turn it into a Svelte template, and we're good to go. And that has largely been the case. But suddenly, this page is so much more. There are subtle animations as things come in. And there are just lots of nice features that were trivial to add now and that fit with the rest of the programming model that we have throughout it. So that was awesome. STEPH: That is awesome. I love these styles of updates where there's like, oh, I had a loss this week. But I also had this really great win because that feels just so representative of a typical week. So I love this back and forth. CHRIS: It's also that sequence is how the week went. So the loss happened earlier in the week, and then the win happened later in the week, which is how I would prefer it because now I'm going into the weekend with a win. Like, cool, I'll take it. Had it gone in the other direction, I would have been like, oh man, Rails beat me. But I guess it's the weekend now. I'll forget about it for a little while. STEPH: Yeah, that definitely helps to end on a positive note. 
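(For the curious, the server-side shape of that kind of port might look something like the sketch below, assuming the inertia_rails gem's render and redirect options. The controller and component names are guesses for illustration, not the app's real code.)

# app/controllers/users/registrations_controller.rb
class Users::RegistrationsController < Devise::RegistrationsController
  def new
    # Render a Svelte page through Inertia instead of the stock ERB view.
    render inertia: 'Auth/SignUp'
  end

  def create
    build_resource(sign_up_params)
    if resource.save
      sign_up(resource_name, resource)
      redirect_to after_sign_up_path_for(resource)
    else
      # Inertia tunnels these errors back through the session, so the client
      # stays on the form and can show inline errors without clearing fields.
      redirect_to new_user_registration_path,
                  inertia: { errors: resource.errors.to_hash(true) }
    end
  end
end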
CHRIS: But yeah, I don't think too much more to say about that beyond it was both really nice to get the added functionality to get the better, more user-friendly behavior in this view that naturally falls out of this programming model. But also to have that reinforcement of my belief in Inertia as a good architecture. Not only did we get some really nice stuff out of doing this port, but it was also pretty straightforward because Inertia sits so comfortably between the pieces. And that's a story that I really like. I want more of that in my programming world, where to change this thing requires changing everything in our app. Oh no, this is sad. No, this was a great example of we were able to very minimally change things and get a much better experience out of it. So once again, I am very pro Inertia.js STEPH: It's interesting to me how different our paths have been this year where I have been working on applications that are brought on thoughtbot to then help out with some of the concerns that they have, either their application is going down, or they have a test suite that they need to improve, or there's a lot of triage that's involved. And so it makes me very excited to hear that, when you are building stuff, and it's going really well and how awesome that is. Because then I feel like most of my world has definitely been more in the triage space, which is a very interesting and fun space to be. But it brings me a lot of joy to hear about wins from let's build new stuff and hearing it be built from the ground up and how well that's going. CHRIS: Well, I'm definitely happy to provide that. But also, I want to be realistic and be like, I'm just writing next year's legacy code right now, let's be honest. I'm very happy with where we're at in this moment. But I also know how early I am in the project that I'm working on. And I'm burdened with the knowledge that I'm certain one decision that I'm making of the many that are being made I will deeply regret a year from now. I just know that that's true, and I can't let it slow me down. I got to just keep making decisions and do stuff. But I know that there's going to be one. I know that a year from now, I'm going to be like, why did we choose that option? But it's sort of the game. STEPH: [singing] We'll just know that there's something strange and your code won't change. Who are you gonna call? thoughtboters! CHRIS: Well, yes. I will definitely be calling you when I find myself in the uncertain times of legacy code of my own creation. So I look forward to that, frankly. But that's a problem for a year; I don't know, maybe two years from now. Who knows? But for now, what do you think? Let's wrap up. STEPH: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. CHRIS: This show is produced and edited by Mandy Moore. STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it really helps other people find the show. CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter, and I'm @christoomey. STEPH: And I'm @SViccari. CHRIS: Or you can email us at hosts@bikeshed.fm STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris finally got his new computer!
About Chris
Chris Williams is an Enterprise Architect for World Wide Technology — a technology solution and service provider. There he helps customers design the next generation of public, private, and hybrid cloud solutions, specializing in AWS and VMware. His first computer was a Commodore 64, and he's been playing video games ever since.
Chris blogs about virtualization, technology, and design at Mistwire. He is an active community leader, co-organizing the AWS Portsmouth User Group, and both hosts and presents on vBrownBag. He is also an active mentor, helping students at the University of New Hampshire through Diversify Thinking—an initiative focused on empowering girls and women to pursue education and careers in STEM.
Chris is a certified AWS Hero as well as a VMware vExpert. Fun fact that Chris doesn't want you to know: he has a degree in psychology so you can totally talk to him about your feelings.
Links:
WWT: https://www.wwt.com/
Twitter: https://twitter.com/mistwire
Personal site: https://mistwire.com
vBrownBag: https://vbrownbag.com/team/chris-williams/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate: is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards, while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other, which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at Honeycomb.io/screaminginthecloud. Observability, it's more than just hipster monitoring.
Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want.
Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the things I miss the most from the pre-pandemic times is meeting people at conferences or at various business meetings, not because I like people—far from it—but because we go through a ritual that I am a huge fan of, which is the exchange of business cards. Now, it's not because I'm a collector or anything here, but because I like seeing what people's actual titles are instead of diving into the morass of what we call ourselves on Twitter and whatnot. Today, I have just one of those folks with me. My guest is Chris Williams, who works at WWT, and his business card title is Enterprise Architect, comma AWS Cloud. Chris, welcome.
Chris: Hi. Thanks for having me on the show, Corey.
Corey: No, thank you for taking the time to speak with me. I have to imagine that the next line in your business card is, "No, I don't work for AWS," because you know a company has succeeded when they get their name into people's job titles who don't work there.
Chris: So, I have a running joke where the next line should actually be cloud therapist. And my degree is actually in psychology, so I was striving to get cloud therapist in there, but they still don't want to let me have it.
Corey: Former guest Bobby Allen is now a cloud therapist over at Google Cloud, which is just phenomenal. I don't know what they're doing in a marketing context over there; I just know that they're just blasting them out of the park on a consistent, ongoing basis. It's really nice to see. It's forcing me to up my game a little bit. So, one of the challenges I've always had is, I don't like putting other companies' names into the title. Now, I run the Last Week in AWS newsletter, so yeah, okay, great, there's a little bit of ‘do as I say, not as I do' going on here. Because it feels, on some level, like doing unpaid volunteer work for a $2 trillion company. Speaking of, you are an AWS Community Hero, where you do volunteer work for a $2 trillion company. How'd that come about? What did you do that made you rise to their notice?
Chris: That was a brilliant segue. Um—[laugh]—
Corey: I do my best.
Chris: So I, actually prior to becoming an AWS Community Hero, I do a lot of community work. So, I have run and helped to run four different community-led organizations: the Virtualization Technology User Group of New England; the AWS Portsmouth User Group, now the AWS Boston User Group; I'm a co-host and presenter for vBrownBag; I also do the New England AWS Community Day, which is a conglomeration of all the different user groups in one setting; and various and sundry other things, as well, along the way. Having done all of that, and having had a lot of the SAs and team members come and do speaking presentations for these various and sundry things, I was nominated internally by AWS to become one of their Community Heroes. Like you said, it's basically unpaid volunteer work where I go out and tout the services. I love talking about nerd stuff, so when I started working on AWS technologies, I really enjoyed it, and I just, kind of like, glommed on with other people that did it as well.
I'm also a VMware vExpert, which is basically the exact same accolade for VMware. I have not been doing as much VMware stuff in the recent past, but that's kind of how I got into this gig.
Corey: One of the things that strikes me as being the right move with respect to these, effectively, community voice accolades is Microsoft got something very right—they've been doing this a long time—they have their MVP program, but they have to re-invite people who have to requalify for it by whatever the criteria are, every year. AWS does not do this with their Heroes program. If you look at their Heroes page, there's a number of folks up there who were doing interesting things in the cloud years ago, but then fell off the radar for a variety of reasons. In fact, the only way that I'm aware that you can lose Hero status is via getting a job at AWS or one of AWS's competitors. Now, the hard part, of course, is well, who are Amazon's competitors? Basically everyone, but it mostly distills down to Microsoft, Google, and Oracle, as best I can tell, for Hero status. How does VMware fall on that spectrum? To be more specific, how does VMware fall on the spectrum of their community engagement program and having to renew, not, "Are they AWS's competitor?" To which the answer is, "Of course."
Chris: So, the renewal process for the VMware vExpert program is an annual re-up process where you fill out the form, list your contribution of the year, what you've done over the previous year, and then put it in for submission to the board of VMware vExperts who then give you the thumbs up or thumbs down. Much like Nero, you know, pass or fail, live or die. And I've been fortunate enough, so my vBrownBag contributions are every week; we have a show that happens every week. It can be either VMware stuff, or cloud in general stuff, or developer-related stuff. We cover the gamut; you know, people that want to come on and talk about whatever they want to talk about, they come on. And by virtue of that, we've had a lot of VMware speakers, we've had a lot of AWS speakers, we've had a lot of Azure speakers. So, I've been fortunate enough to be able to qualify each year with those contributions.
Corey: I think that's the right way to go, from my perspective at least. But I want to get into this a little bit because you are an enterprise architect, which is always one of those terms that is super easy to make fun of in a variety of different ways. Your IDE is probably a whiteboard, and at some point when you have to write code, I thought you had a team of people who would be able to do that all for you because your job is to cogitate, and your artifacts are documentation, and the entire value of what you do can only be measured in the grand sweep of time, et cetera, et cetera, et cetera.
Chris: [laugh].
Corey: But you don't generally get to be a Community Hero for stuff like that, and you don't usually get to be a vExpert on the VMware side, by not having at least technical chops that make people take a second look. What is it you'd say it is you do here, for lack of a better term?
Chris: “What would you say ya do here, Bob?” So, I'm not being facetious when I say cloud therapist. There is a lot of working at the eighth layer of the OSI model, the political layer. There's a lot of taking the requirements from the customer and sending them to the engineer.
I'm a people person.The easy answer is to say, I do all the things from the TOGAF certification manual: the requirements, risks, assumptions, and constraints; the logical, conceptual, and physical diagrams; the harder answer is the soft skill side of that, is actually being able to communicate with the various levels of the industry, figuring out what the business really wants to do and how to technically solution that and figure out how to talk to the engineers to make that happen. You're right EAs get made fun of all the time, almost as much as consultants get made fun of. And it's a very squishy layer that, you know, depending upon your personality and the personality of the customer that you're dealing with, it can work wonderfully well or it can crash and burn immediately. I know from personal experience that I don't mesh well with financials, but I'm really, really good with, like, medical industry stuff, just the way that the brain works. But ironically, right now I'm working with a financial and we're getting along like a house on fire.Corey: Oh, yeah. I've been saying for a while now that when it comes to cloud, cost and architecture are the same things, and I think that ties back to a lot of different areas. But I want to be very clear here that we talk about, I'm not super deep into the financials, that does not mean you're bad at architecture because working on finance means different things to different folks. I don't think that it is possibly a good architect in the cloud environment and not have a conception of, “Huh, that thing seems really expensive if I do it that way.” That is very different than having the skill of reading a profit and loss statement or understanding various implications of the time value of money calculation that a company uses, or how things get amortized.There are nuances piled on top of nuances in finance, and it's easy to sit here and think that oh, I'm not great at finance means I don't know how money works. That is very rarely true. If you really don't know how money works, you'll go start a cryptocurrency startup.Chris: [laugh]. So, I plugged back to you; I was listening to one of your old shows and I cribbed one of your ideas and totally went with it. So, I just said that there's the logical, conceptual, and physical diagrams of an environment; on one of your shows, you had mentioned a financial diagram for an environment, and I was like, “That's brilliant.” So, now when I go into a customer, I actually do that, too. I take my physical diagram, I strip out all of the IP addresses, and our names, and everything like that, and I plot down how much it's going to cost, like, “This is the value of the EC2 instance,” or, “This is how much this pipe is going to cost if you run this over it.” And they go bananas over it. So, thanks for providing that idea that I mercilessly stole.Corey: Kind of fun on a lot of levels. Part of the challenge is as things get cloudier and it moves away from EC2 instances, ideally the lie we would like to tell ourselves that everything's in an auto-scaling group. Great—Chris: Right.Corey: —stepping beyond that when you start getting into something that's even more intricately tied to a specific user, we're talking about effectively trying to get unit economic measures of every user, every thousand users is going to cost me X dollars to service them on average, on top of a baseline of steady-state spend that is going to increase differently. 
At that point, talking to finance about predictive models turn into, “Well, this comes down to a question of business modeling.” But conversely, for engineering minds that is exactly what finance is used to figuring out. The problem they have is, “Well, every time we hire a new engineer, we wind up seeing our AWS bill increase.” Funny how that works. Yeah, how do you map that to something that the business understands? That is part of what they do. But it does, I admit, make it much more challenging from a financial map of an environment.Chris: Yeah, especially when the customer or the company is—you know, they've been around for a while, and they're used to just like that large bolus of money at the very beginning of a data center, and they buy the switches, and they buy the servers, and they virtualize them, and they have that set cost that they knew that they had to plunk down at the beginning. And it's a mindset shift. And they're coming around to it, some faster than others. Oddly enough, the startups nowadays are catching on very quickly. I don't deal with a lot of startups, so it takes some finesse.Corey: An interesting inflection that I've seen is that there's an awful lot of enterprises out there that say, “Oh, we're like a startup.” Great. You mean with weird cultural inflections that often distill down to cult of personality, the constant worry about whether you're going to wind up running out of runway before finding product-market fit? And the rooms filled with—Chris: The eighty-hour work weeks? The—[laugh]—Corey: And they're like, “No, no, no, it's like the good parts.” “Oh, so you mean out the upside.” But you don't hear it the other way around where you have a startup that you're interviewing with, “Ha-ha, we're like an enterprise. We have a six-month interview process that takes 18 different stages,” and so on and so forth. However, we do see startups having to mature rapidly, and move up the compliance path as they're dealing with regulated entities and the rest, and wanting to deal with serious customers who have no sense of humor about, “Yeah, we'll figure that part out later as part of an audit document.”So, what we also see, though, is that enterprises are doing things that look a lot more startup-y. If I take a look at the common development environments and tools and techniques that big enterprises use, it looks an awful lot like how startups were doing it five or ten years ago. That is the slow and steady evolution of time. And what startups are doing today becomes enterprise tomorrow, and I can't shake the feeling that there's a sea of vendors out there who, in the event that winds up happening are eventually going to find themselves without a market at all. My model has been that if I go and found a Twitter for Pets style startup tomorrow and in ten years, it has grown to become an S&P 500 component—which is still easier to take seriously than most of what Tesla says—great.During that journey, at what point do I become a given company's customer because if there is no onboarding story for me to become your customer, you're in a long-tail decline phase. That's been my philosophy, but you are a—trademarked term—Enterprise Architect, so please feel free to tell me if I'm missing any of the nuances there, which I'm sure I am because let's face it, nuance is hard; sweeping statements are easy.Chris: As an architect, [laugh] it would be a disservice to not say my favorite catchphrase, it depends. There are so many dependencies to those kinds of sweeping statements. 
I mean, there's a lot of enterprises that have good process; there are a lot of enterprises that have bad process. And going back to your previous statement of the startup inside the enterprise, I'm hearing a lot of companies nowadays saying, “Oh, well, we've now got this brand new incubator system that we're currently running our little startup inside of. It's got the best of both worlds.”And I'm not going to go through the litany of bad things that you just said about startups, but they'll try to encapsulate that shift that you're talking about where the cheese is moving so quickly now that it's very hard for these companies to know the customer well enough to continue to stay salient and continue to be able to look into that crystal ball to stay relevant in the future. My job as an EA is to try to capture that point in time where what are the requirements today and what are the known detriments that you're going to see in your future that you need to protect against? So, that's kind of my job—other than being a cloud therapist—in a nutshell.Corey: I love the approach. My line has been that I do a lot of marriage counseling between engineering and finance, which is a fun term that also just so happens to be completely accurate.Chris: Absolutely. [laugh]. I'm currently being a marriage counselor right now.Corey: It's an interesting time. So, you had a viral tweet recently that honestly, I'm a bit jealous about. I have had a lot of tweets that have done reasonably well, but I haven't ever had anything go super-viral, where it was just a screenshot of a conversation you had with an AWS recruiter. Now, before we go into this, I want to make a couple of disclaimers here. Before I entered tech myself, I was a technical recruiter, and I can say that these people have hard jobs.There is a constant pressure to perform, it is a sales job that is unlike most others. If you sell someone a pen, great, you can wrap your head around what that's like. But you don't have to worry about the pen deciding it doesn't want to go home with the buyer. So, it becomes a double sale in a lot of weird ways, and there's a constant race to the bottom and there's a lot of competition in the space. It's a numbers game and a lot of folks get in and wash out who have terrible behaviors and terrible patterns, so the whole industry gets tainted—in some respects—like that. A great example of someone who historically has been a terrific example of recruiting done right has been Jill Wohlner. And she's one of the shining beacons of the industry as far as how to do these things in the right way—Chris: Yes.Corey: —but the fact that she is as exceptional as she is is in no small part because there's a lot of random folks coming by. All which is to say that our conversation going forward is not and should not be aimed at smacking around individual recruiters or recruiting as a whole because that is unfair. Now, that disclaimer has been given. Great, what happened?Chris: So, first off, shout out to Jill; she actually used to be a host on vBrownBag. So, hey girl. [laugh]. What happened was—and I have the utmost empathy and sympathy for recruiting; I actually used to have a side gig where I would go around to the local recruiting places around my area here and teach them how to read a cloud resume and how to read a req and try to separate the wheat from the chaff, and to actually have good conversations. 
This was back when cloud wasn't—this was, like, three or four years ago.And I would go in there and say, “This is how you recruit a cloud person nowadays.” So, I love good recruiters. This one was a weird experience in that—so when a recruiter reaches out to me, what I do is I take an assessment of my current situation: “Am I happy where I'm at right now?” The answer is, “Yes.” And if they ping me, I'll say, “Hey, I'm happy right now, but if you have something that is, you know, a million dollars an hour, taste-testing margaritas on St. John island in the sand, I'm all ears. I'm listening. Conversely, I also am a Community Hero, so I know a ton of people out in the industry. Maybe I can help you out with landing that next person.”Corey: I just want to say for the record, that is absolutely the right answer. And something like that is exactly what I would give, historically. I can't do it now because let's be clear here. I have a number of employees and, “Hey, Corey's out there doing job interviews,” sends a message that isn't good when it comes to how is that company doing anyway. I miss it because I enjoyed the process and I enjoyed the fun, but even when I was perfectly happy, it's, “Well, I'm not actively on the market, but I am interested to have a conversation if you've got something interesting.”Because let's face it, I want to hear what's going on in the market, and if I'm starting to hear a lot of questions about a technology I have been dismissive of, okay, maybe it's time to pay more attention. I have repeatedly been able to hire the people interviewing me in some cases, and sometimes I've gone on interviews just to keep my interview skills sharp and then wound up accepting the job because it turned out they did have something interesting that was compelling to me even though I was reasonably happy at the time. I will always take the meeting; I will always at least have a chat about what they're doing, and I think that doing otherwise is doing yourself a disservice in the long arc of your career.Chris: Right. And that's basically the approach that I take, too. I want to hear what's out there. I am very happy at World Wide right now, so I'm not interested, interested. But again, if they come up with an amazing opportunity, things could happen. So, I implied that in my response to him.I said, “I'm happy right now, thanks for asking, but let's set up the meeting and we can have a chat.” The response was unexpected. [laugh]. The response was basically, “If you're not ready to leave right now, it makes no sense for me to talk to you.” And it was a funny… interaction.I was like, “Huh. That's funny.” I'm going to tweet about that because I thought it was funny—I'm not a jerk, so I'm going to block out all of the names and all of the identifying information and everything—and I threw it up. And the commiseration was so impressive. Not impressive in a good way; impressive in a bad way.Every person that responded was like, “Yes. This has happened to me. Yes, this is”—and honestly, I got a lot of directors from AWS reaching out to me trying to figure out who that person was, apologizing saying that's not our way. And I responded to each and every single one of them. And I was like, “Somebody has already found that person; somebody has already spoken to that person. That being said, look at all of the responses in the timeline. 
When you tell me personally, that's not the way you do things, I believe that you believe that." Corey: Yeah, I believe you're being sincere when you say this, however, the reality of what the data shows and people's lived experience in the form of anecdotes are worlds apart. Chris: Yeah. And I'm an AWS Hero. [laugh]. That's how I got treated. Not to blow my own horn or anything like that, but if that's happening to me, either A, he didn't look me up and just cold-called me—which is probably the case—and B, if he treats me like that, imagine how he's treating everybody else? Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals: having the highest quality content in tech and cloud skills, and building a good community that is rich and full of IT and engineering professionals. You wouldn't think those things go together, but sometimes they do. It's both useful for individuals and large enterprises, but here's what makes it new. I don't use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks, you'll have a chance to prove yourself. Compete in four unique lab challenges, where they'll be awarding more than $2000 in cash and prizes. I'm not kidding, first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That's cloudacademy.com/corey. We're gonna have some fun with this one! Corey: Every once in a while I get some of their sourcers doing outreach to see folks who are somewhat aligned on them via LinkedIn or other things, and, "Oh, okay, yeah; if you look at the things I talked about in various places, I can understand how I might look like a potentially interesting hire." And they send outreach emails to me, they're always formulaic, and once in a while, I'll tweet a screenshot of them where I redact the person's name, and it was—and there's a comment, like, "Should I tell them?" Because it's fun; it's hilarious. But I want to be clear because that often gets misconstrued; they have done absolutely nothing wrong. You've got to cast a wide net to find talent. I'm surprised I get as few incidents of recruiter outreach as I do. I am not hireable and that's okay, but I don't begrudge people reaching out. I either respond with a, "No thanks," if it's a particularly good email, or I just hit the archive button and never think about it again. And that's fine, too. But I don't make people feel like a jerk for asking, and that is an engineering behavioral pattern that drives me up a wall. It's, "So, I'm thinking about a job here and I'm wondering if you might be a fit," and your response is just to set them on fire? Well, guess what, an awful lot of those people sending out those emails in the sourcing phase of recruiting are early career, and guess what, they tend to get promoted in the fullness of time. Sometimes they're no longer recruiting at all; sometimes they wind up being hiring managers in different ways or trying to figure out what offer they're going to extend to someone. And if you don't think that people in those roles remember when they're treated poorly as a response to their outreach, I have news for you. Don't do it. Your reputation lingers long after you no longer work there. Chris: Just exactly so. And I feel really bad for that guy. Corey: I do hope that he was not reprimanded because he should not be.
It is clearly a systemic problem, and the fact that one person happened to do this in a situation where it went viral does not mean that they are any worse than other folks doing it. It is a teachable opportunity. It is, “I know that you have incredible numbers of roles to hire for, all made all the more urgent by the fact that you're having some significant numbers of departures—clearly—in the industry right now.” So, I get it; you have a hard job. I'm not going to waste your time because I don't even respond to them just because, at AWS particularly, they have hard work to do, and just jawboning with me is not going to be useful for them.Chris: [laugh].Corey: I get it.Chris: And you're trying to hire the same talent too. So.Corey: Exactly. One of the most egregious things I've seen in the course of my career was when that whole multiple accounts opened for Wells Fargo's customers and they wound up firing 3500 people. Yeah, that's not individual tellers doing something unethical. That is a systemic problem, and you clean house at the top because you're not going to convince me that you're hiring that many people who are unethical and setting out to do these things as a matter of course. It means that the incentives are wrong, it means that the way you're measuring things are wrong, and people tend to do things out of fear or because there's now a culture of it. And if you fire individuals for that, you're wrong.Chris: And that was the message that I conveyed to the people that reached out to me and spoke to me. I was like, there is a misaligned KPI, or OKR, or whatever acronym you want to use, that is forcing them to do this churn-and-burn mentality instead of active, compassionate recruiting. I don't know what that term is; I'm very far removed from the recruiting world. But that person isn't doing that because they're a jerk. They're doing that because they have numbers to hit and they've got to grind out as many as humanly possible. And you're going to get bad employees when you do that. That's not a long-term sustainable path. So, that was the conversation that I had with them. Hopefully, it resonated and hits home.Corey: I still remember from ten years ago—and I don't always tell the story, but I absolutely will now—I went up to San Francisco when I lived in Los Angeles; I interviewed with Yammer. I went through the entire process—this was not too long before they got acquired by Microsoft so that gives you some time basis—and I got a job offer. And it was a not ridiculous offer. I was going to think about it, and I [unintelligible 00:24:19], “Great. Thank you. Let me sleep on this for a day or two and I'll get back to you definitely before the end of the week.”Within an hour, I got a response rescinding the offer claiming it had been sent by mistake. Now, I believe that that is true and that they are being sincere with this. I don't know that if it was the wrong person; I don't know if that suddenly they didn't have the req or they had another candidate that suddenly liked better that said no and then came back and said yes, but it's been over a decade now and every time I talk to someone who's considering something in that group, I tell this story. That's the sort of thing that leaves a mark because I have a certain philosophy of I don't ever resign from a job before I wind up making sure everything is solid—things are signed, good to go, the background check clears, et cetera—because I don't want to find myself suddenly without income or employment, especially in that era. 
And that was fine, but a lot of people don't do that.As soon as the offer comes in, they're like, “I'm going to go take a crap on my boss's desk,” which, let's be clear, I don't recommend. You should write a polite and formulaic resignation letter and then you should email it to your boss, you should not carve it into their door. Do this in a responsible way, and remember that you're going to encounter these people again throughout your career. But if I had done that, I would have had serious problems. And so that points to something systemically awful at a company.I have never in my career as a hiring manager extended an offer and then rescinded it for anything other than we can't come to an agreement on this. To be clear, this is also something I wonder about in the space, when people tell stories about how they get a job offer, they attempt to negotiate the offer, and then it gets withdrawn. There are two ways that goes. One is, “Well if you're not happy with this offer, get out of here.” Yeah, that is a crappy company, but there's also the story of people who don't know how to negotiate effectively, and in turn, they come back with indications that you do not know how to write a business email, you do not know how negotiations work, and suddenly, you're giving them a last-minute opportunity to get out before they hire someone who is going to be something of a wrecking ball in the company, and, “Whew, dodged a bullet on that.”I haven't encountered that scenario myself, but I've seen it from other folks and emails that have been passed around in various channels. So, my position on this is everyone should negotiate offers, but visit fearlesssalarynegotiation.com, it's run by my friend, Josh; he has a whole bunch of free content on his site. Look at it. Read it. It is how to handle this stuff effectively and why things are the way that they are. Follow his advice, and you won't go too far wrong. Again, I have no financial relationship, I just like what he's done a lot and I've been talking to him for years.Chris: Nice. I'll definitely check that out. [laugh].Corey: Another example is developher—that's develop H-E-R dot com. Someone else I've been speaking to who's great at this takes a different perspective on it, and that's fine. There's a lot of advice out there. Just make sure that whoever it is you're talking to about this is in a position to know what they're talking about because there's crap advice that's free. Yeah. How do you figure out the good advice and the bad advice? I'm worried someone out there is actually running Route 53 is a database for God's sake.Chris: That's crazy talk. Who would do that? That's madness.Corey: I can't imagine it.Chris: We're actually in the process of trying to figure out how to do a panel chat on exactly that, like, do a vBrownBag on salary negotiations, get some really good people in the room that can have a conversation around some of the tough questions that come around salary negotiation, what's too much to ask for? What kind of attitude should you go into it with? What kind of process should you have mentally? Is it scrawling in crayon, “No. More money,” and then hitting send? 
Or is it something a little bit more advanced?Corey: I also want to be clear that as you're building panels and stuff like that—because I got this wrong early on in my public speaking career, to be clear—I built talks aligned with this based on what worked for me—make sure that there are folks on the panel who are not painfully over-represented as you and I are because what works for us and we're considered oh, savvy business people who are great negotiators comes across as entitled, or demanding, or ooh, maybe we shouldn't hire her—and yes, I'm talking about her in a lot of these scenarios—make sure you have a diverse group of folks who can share lived experience and strategies that work because what works for you and me is not universal, I promise.Chris: So, the only requirement to set this panel is that you have to be a not-white guy; not-old-white guy. That's literally the one rule. [laugh].Corey: I like the approach. It's a good way to do it. I don't do manels.Chris: Yes. And it's tough because I'm not going to get into it, but the mental space that you have to be in to be a woman in tech, it's a delicate balance because when I'm approaching somebody, I don't want to slide into their DMs. It's like this, “Hey, I know this other person and they recommended you and I am not a weirdo.” [laugh]. As an old white guy, I have to be very not a weirdo when I'm talking to folks that I'm desperate to get on the show.Because I love having that diverse aspect, just different people from different backgrounds. Which is why we did the entire career series on vBrownBag. We did data science with Ayodele; we did how to get into cybersecurity with Christoph. It was a fantastic series of how to get into IT. This was at the beginning of the pandemic.We wanted to do a series on, okay, there's a lot of people out there that are furloughed right now. How do we get some people on the show that can talk to how to get into a part of IT that they're passionate about? We did a triple series on how to get into game development with Dennis Diack, the founder of Apocalypse Studios. We had a bunch of the other AWS Heroes from serverless, and Lambda, and AI on the show to talk, and it was really fantastic and I think it resonated well with the community.Corey: It takes work to have a group of guests on things like podcasts like this. You've been running vBrownBag for longer than I've been running this, and—Chris: 13 years now.Corey: Yeah. This is I think, coming up on what, four years-ish, maybe three, in that range? The passing of time, especially in a pandemic era, is challenging. And there's always a difference. If I invite a white dude to come on the podcast, the answer is yes before I get the word podcast fully out of my mouth, whereas folks who are not over-represented, they're a little more cautious. First, there's the question of, “Am I a trash bag?” And the answer is, “No.” Well, no, not in the way that you're concerned about other ways—Chris: [laugh]. That you're aware of. [laugh].Corey: Oh, God, yes, but—yeah. And then—and that's part of it, and then very often, there's a second one of, “Well, I don't think I have anything, really, to talk about,” is often a common objection here. And it's, yeah, if I'm inviting you on this show, I promise that's not true. Don't worry about that piece of it. And then it's the standard stuff that just comes with being me, of, “Yeah, I've read your Twitter feed; you got to insult me here?” It's, “No, no, not really the same tone. 
But great question; throw the”—it goes down to process. But it takes constant work, you can't just put an open call out for guest nominations, and expect that to wind up being representative of our industry. It is representative of our biases, in many respects.Chris: It's a tough needle to thread. Because the show has been around for a long time, it's easier for me now, because the show has been around for 13 years. We actually just recorded our two thousandth and sixtieth episode the other night. And even with that, getting that kind of outreach, [#techtwitter 00:31:32] is wonderful for making new recommendations of people. So, that's been really fun. The rest of Twitter is a hot trash fire, but that's beside the point. So yeah, I don't have a good solution for it. There's no easy answer for it other than to just be empathic, and communicative, and reach people on their level, and have a good show.Corey: And sometimes that's all it takes. The idea behind doing a podcast—despite my constant jokes—it's not out of a love affair of the sound of my own voice. It's about for better or worse, for reasons I don't fully understand, I have a platform. People listen to the show and they care what people have to say. So, my question is, how can I wind up using that platform to tell stories that lift up narratives that are helpful for folks that they can use as inspiration—in my case, as critical warnings of what to avoid—and effectively showcasing some of the best our industry has to offer, in many respects.So, if the guest has a good time and the audience can learn something, and I'm not accidentally perpetuating horrifying things, that's really more than I have any right to ask from a show like this. The fact that it's succeeded is due in no small part to not just an amazing audience, but also guests like you. So, thank you.Chris: Oh no, Thank you. And it is. It's… these kinds of shows are super fun. If it wasn't fun, I wouldn't have done it for as long as I have. I still enjoy chatting with folks and getting new voices.I love that first-time presenter who was, like, super nervous and I spend 15 minutes with them ahead of the show, I say, “Okay, relax. It's just going to be me and you facing each other. We're going to have a good time. You're going to talk about something that you love talking about, and we're going to be nerds and do nerd stuff. This is me and you in front of a water cooler with a whiteboard just being geeks and talking about cool stuff. We're also going to record it and some amount of people is going to see it afterwards.” [laugh].And yeah, that's the part that I love. And then watching somebody like that turn into the keynote speaker at a conference ten years down the road. And I get to say, “Oh, I knew that person when.”Corey: I just want to be remembered by folks who look back fondly at some of the things that we talk about here. I don't even need credit, just yeah. People who see that they've learned things and carry them forward and spread to others, there's so many favors that people have done for us that we can only ever pay forward.Chris: Yeah, exactly. So—and that's actually how I got into vBrownBag. I came to them saying, “Hey, I love the things that you guys have done. I actually passed my VCIX because of watching vBrownBags. What can I do to help contribute back to the community?” And Alistair said, “Funny you should mention that.” [laugh]. 
And here we are seven years later.Corey: Well, to that end, if people are inspired by what you're saying and they want to hear more about what you have to say or, heaven forbid, follow in your footsteps, where can they find you?Chris: So, you can find me on Twitter; I am at mistwire.com—M-I-S-T-W-I-R-E; if you Google ‘mistwire,' I am the first three pages of hits; so I have a blog; you can find me on vBrownBag. I'm hard to miss on Twitter [laugh] I discourage you from following me there. But yeah, you can hit me up on all of the formats. And if you want to present, I'd love to get you on the show. If you want to learn more about what it takes to become an AWS Hero or if you want to get into that line of work, I highly discourage it. It's a long slog but it's a—yeah, I'd love to talk to you.Corey: And we of course put links to that in the [show notes 00:35:01]. Thank you so much for taking the time to speak with me, Chris. I really appreciate it.Chris: Thank you, Corey. Thanks for having me on.Corey: Chris Williams, Enterprise Architect, comma AWS Cloud at WWT. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me that while you didn't actively enjoy this episode, you are at least open to enjoying future episodes if I have one that might potentially be exciting.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Chris regains several of his developer merit badges and embarks on a perilous CSRF (Cross-Site Request Forgery) adventure. Steph shares highlights from Plucky, a management training course, including ways we can "click" and "break apart" from our current role, and how to have hard conversations. They also discuss how software development processes change at different team sizes, processes that break down as teams grow, and processes that are resilient at any team size.

This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.

The Nightmare Before Christmas - What's This (https://youtu.be/QLvvkTbHjHI)
Giant Robots Smashing into other Giant Robots - Plucky with Jen Dary (https://www.giantrobots.fm/search?utf8=%E2%9C%93&term=plucky)
Plucky (https://www.beplucky.com/)
Services are Not a Silver Bullet (https://thoughtbot.com/blog/services-are-not-a-silver-bullet)
Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed!

Transcript:

STEPH: Boom. I'm recording. Magic is happening. [singing] What's this? What's this? It's a Bike Shed episode. What's this? What's this? CHRIS: You did that on the mic. [laughter] So you just started recording too, so it's not like you're like, "Oh, I forgot I was recording." STEPH: Oh, I didn't have a finishing line that rhymes with shed. CHRIS: Head, dead, bread, spread. STEPH: [singing] Is TDD dead? I don't know. [laughs] CHRIS: Cool. I liked it. STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world? CHRIS: What's new? I had a fun experience over the past week or two of regaining some of my developer merit badges, which is always enjoyable. So one was I had to configure AWS, specifically S3 and IAM, such that I could upload files to an S3 bucket, which seems like one of those things that a developer should be able to do, and it's just not that hard. And, man, I failed so many times, and I stared at the screen. And the ARNs, I think that's another acronym that I had to try and figure out what it means and fight against. Anyway, I got there. So that's one merit badge earned. I really hope [laughs] I correctly and securely configured access to an S3 bucket such that we could upload files in our Rails app. Cool, neat. Moving on, the next merit badge that I went for was restoring the sea of green dots. Our RSpec output had gathered some noise. There was a whole bunch of noise across a variety of things. There were some dev tools that were dumping some stuff in there. And there was something related to apparition, which is the...I want to say it's the Capybara feature spec driver that we're using now, which sits on top of ChromeDriver or something like that. I don't really understand the details, but it was complaining about something. And I found a fix, and then I fixed it and whatnot. But it was one of those. I did this on a Saturday because I was just like, you know what? This will be cathartic and healing. And then I got to the sea of green dots, and I was so happy to get to it. STEPH: This is me...I'm giving you a round of applause. CHRIS: Well, thank you.
Arguable whether it delivered any real value to users, but again, this was Saturday effort, so I was allowed to indulge my fastidious caretaker of the code role. STEPH: Sorry, before we move on to more serious, can we pause to talk about developer merit badges? I really, really want cute felt badges that we can...I mean, I can't design them. I don't have the talent. But I think between us and other folks, we could design amazing merit badges, and then people could collect those. I'm very much in love with that idea. CHRIS: I love the idea. I am now certain that if we were to really pursue this, that we would fall into the deepest of bike sheds as we try and define well; what are all the merit badges? And what are the different levels? STEPH: [laughs] CHRIS: And how many do you need to collect before you can get to what are the different...There are just so many different taxonomies that we could introduce, and, oh man, I could spend a couple of weeks on that. STEPH: [laughs] It has a very strong Pokémon vibe too of you got to catch them all. CHRIS: Absolutely. STEPH: Okay. All right. We won't digress into bikeshedding merit badges, but I'm still very, very interested in that idea. CHRIS: Indeed. If anyone out there in the listener space wants to just make these, that would be great. This is the way that I avoid bikeshedding now is I just say I'm not allowed to make these decisions or even think about it. But if these happened into the world, I would be happy about that. STEPH: Oh, I just remembered we do have something similar at thoughtbot. They're not physical where you can hold them, but I think we've talked about turning them into physical badges. But we have our internal tool hub that we used to track our schedules. And one of the fun Ralphapalooza events that we had, a team came up with the idea of introducing badges in the tool hub, so then you could award people badges. You could give people badges. And it's very cute. So they could probably help us with the taxonomy. They've probably already figured out a number of badges we could get started with. CHRIS: And of course, this is where my brain went initially to like, oh, what would the taxonomy be? But I think that's how this goes bad. And if we just keep it in the this is cute and fun, and what are all the possible merit badges, but they're all equal, and the points are made up anyway, and then it's just a fun thing, then I'm like, I'm super into this. Let's do that. Have you used a regular expression to parse HTML? Congratulations, you get a merit badge. Have you not used regular expressions to parse HTML? You get a different merit badge. [chuckles] STEPH: [laughs] I feel very positive that I could be chief of cute and fun. I could manage that department. CHRIS: Yes, that feels like definitely a role that you could really excel at. But shifting around ever so slightly, I did run into a fun bug this week. And it was a mystery tour of, I'm going to say, sadness and then eventual learning and understanding, and I think we've come to a better place. But I want to tell a story, take us on a quick tour of the adventure that I went through. So we recently saw a handful of exceptions come through in our exception monitoring service and then piped into Slack, where we see those around CSRF token expiry. So this occasionally happens in a Rails app. The CSRF token that was on the page gets rotated. 
And therefore, when someone...if they have an older version of the page open and they try and submit a form or something like that, then CSRF protection is going to kick in. And you do get some false negatives there or some cases where like, nope, this is actually a fine user, this is not hacking, this is nothing bad. It's just that that user had a tab open or something like that. I'll be honest; I want to understand better the timeline of expiry and how Rails expires those and whatnot. But it's one of those things; it's deep enough in Rails that I trust that they are doing a very reasonable thing. And I think the failures that we're seeing that's part of the game. And so, mostly, we wanted to add a nicer handling around that. So thankfully, Inertia actually has a really wonderful page in their docs about handling Cross-Site Request Forgery expiration token, this whole thing. This is a particular failure mode that your app might have. And so it's nice to be able to provide a nicer user experience. And so what we ended up doing is if we catch that exception, we have a rescue_from in our application controller that will instead of having this be a 500 and just a full, like, something went wrong error page, we instead respond in an Inertia-like way to basically show a flash message that says, "This page has expired. Please refresh the page to continue." And if the user just refreshes the page, then they will get a new CSRF token. And from there, everything is going to be fine. So it's not ideal. But it is, I think, both secure and now a nicer user experience. STEPH: Yeah, that sounds really nice. When they refresh the page, do they lose all that form data? I'm curious how painful of a flow that is for the user. CHRIS: Currently, yes. Inertia actually has a really nice feature for remembering form data. If you've ever been on GitHub and you're filling in a box, and then you go away to a different tab, and you come back, and it's still there, and you're happy about that, it's that sort of thing. So we could configure that. At this point, we don't have...most of our forms are pretty small. So this is not something that we opted to do proactive management around. But that is definitely something that we could add but not something that's default or anything like that. STEPH: Cool. Yeah, that makes sense. I was just curious because yeah, either small form doesn't really matter, or also, this may be just a small enough error that only a handful of people are experiencing it that it's also just not that big of a deal. CHRIS: Yes, this definitely should be an edge case. And we've also recently been working on functionality to log folks out after a period of inactivity, which would also, I think, obviate this in a different way. So all total, this shouldn't be a big deal. And this was basically a quick, little snippet of code that we thought we could just drop in, and everything would be great because it shouldn't happen much. But then I was testing out a different feature on staging, and everything I tried to do was popping up this little alert flash message that was like, "Hey, your page is expired." And I was like, that seems bad. And then I realized literally every action, any non-GET request, was getting this response that the CSRF token didn't match. And I was like, well, this seems bad. Luckily, it was only on staging and hadn't made it to production. 
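To make the handler Chris just described concrete before the story continues: a minimal sketch of what that kind of rescue_from might look like in a Rails app using Inertia. The method name, fallback path, and flash wording are illustrative assumptions, not the project's actual code.

```ruby
# Minimal sketch, assuming a Rails + Inertia app; names and copy are illustrative.
class ApplicationController < ActionController::Base
  rescue_from ActionController::InvalidAuthenticityToken, with: :handle_expired_csrf_token

  private

  # Instead of a 500 error page, send the user back where they came from with a
  # flash message; reloading the page hands them a fresh CSRF token.
  def handle_expired_csrf_token
    redirect_back fallback_location: root_path,
                  alert: "This page has expired. Please refresh the page to continue."
  end
end
```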
But it had made it to staging, which meant it had gotten through CI, which was very concerning because we have a pretty robust set of feature specs at this point. We built up a bunch of fakes for all of the external data systems that we're interacting with. And we're really putting the app through its paces and trying to do so in a very production-like way. And so I was like, this is such a deep fundamental breakage. I don't know what's going on here. And so I started to investigate. And it turns out that in a recent commit, I had started using Axios, which is a little wrapper around the Fetch API. They may not actually use the Fetch API under the hood, but it allows you to have a nicer interface to make XHRs. And we implicitly had that in our package already by virtue of Inertia. Inertia uses it under the hood, but I wanted to make it explicit because now I was using it directly. So I figured that's cool. I will yarn add Axios, and then I will continue on with my day. And I worked on my feature and everything was great. And then I pushed it up into a pull request, and everything was great, and CI passed. And I got it onto staging, and everything was very sad. So then I started on the adventure of like, what is going on here? It turns out that somewhere between version 0.21.1 of Axios and 0.23.0, which there's a bunch of things about those version numbers that make me uncomfortable but here we are, somehow the behavior where you can configure the XSRF header name, which is what they're calling it on their side, the configuration stopped working. And so our override that says this is what our CSRF or XSRF token should be called when it's sent back up to the server in a header that was getting lost. And so they were falling back to their default name, Axios was. And, therefore, Rails was like, "There's no CSRF token here. So this is going to be a no for me. I'm going to reject all of the requests." So the fix was relatively easy to roll back and to pin the version of Axios to the previous version that we had been using. I didn't actually intend to upgrade it. I just intended to make it an explicit dependency. But by doing that, I accidentally upgraded it. I don't love that there was this pretty deep breakage in that. I haven't done the good work of trying to open an issue. I still want to scan through and see if there is an open issue or a conversation around this before I start making any noise. But I think if I don't find anything, this is the sort of thing that should be reported because I can't imagine I'm the only one running into this. Likewise, I was very sad that my test suite did not find this. Turns out in Rails, CSRF protection is just turned off in test mode, which may be overall makes sense. But for feature specs, in particular, I definitely want to have it. And so, it was nice that I was able to find the relevant configuration. And we introduced an RSpec configuration that says, "If it's a feature spec, save off the existing configuration and enable CSRF. And then after the spec, go back to whatever the previous was." So now all feature specs run with CSRF. And I did make sure to push up that as a singular change to CI, and CI was very unhappy with me. Many, many features-specs failed, which was good. That was what we were going for. They failed for the right reason because things were fundamentally broken. And then, I was able to update the package-lock or the package.json on the yarn lock, pin the version, fix everything. 
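As a rough sketch of the RSpec change being described, the configuration might look something like the following: turn forgery protection on for feature specs only, then restore the previous value afterwards. The exact hook and naming here are assumptions rather than the project's code.

```ruby
# Minimal sketch: Rails turns CSRF protection off in the test environment, so
# re-enable it just for feature specs and restore the prior setting afterwards.
RSpec.configure do |config|
  config.around(:each, type: :feature) do |example|
    original = ActionController::Base.allow_forgery_protection
    begin
      ActionController::Base.allow_forgery_protection = true
      example.run
    ensure
      ActionController::Base.allow_forgery_protection = original
    end
  end
end
```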
But man, there was this period of like, oh man, the app is broken in such a fundamental way. Users just can't do stuff anymore. They can view anything, but they couldn't change any data. And it just snuck through CI. And that feeling is the worst feeling. We had, at this point, built up a lot of trust in our test suite. It was really telling us when stuff was wrong, and if it was green, I felt very good merging. And suddenly, this just really shook me to my core on that front. STEPH: I love these journeys that you take us on. I mean, they're painful for you, and I am sorry to hear that. But I love these journeys that you take us on. [chuckles] CHRIS: I usually only take us on them when I've figured out the answer. And I'm like, all right, here's where we're at. It was rough for a little while, but now we are happy. And thankfully, the one configuration of saying, hey, Rails, also, please include this as part of our production like, configuration for test mode. So I feel better that moving forward, this breakage won't happen again. STEPH: We should add that as another merit badge for telling a bug story. All right, I'm taking off my hat of chief of fun and cuteness. So this may not be terribly relevant to all the things that you just shared. But I am curious where you mentioned that with Axios because you'd specified the name of the token, and then that overriding behavior is what then broke. And so then that's what led to this whole adventure that you went on. I'm curious, why did y'all customize the name of that token? CHRIS: A, this is a great question. B, I'm not super sure. C, I think the reason is because we were trying to align to Rails. So we have a little middleware on the Rails side that will serialize the CSRF token into a cookie. And then that cookie value gets read by Axios and sent back up as a header on the request. So this is the way that with Inertia CSRF just kind of works and is good. And it's different than Rails' normal. We put a hidden input into any form. And so Rails holistically knows about both sides of that, and everything works fine. But now I have to manually round trip the CSRF token. And Axio's default configuration is a header name X-XSRF-TOKEN, and we needed X-CSRF-TOKEN because that's what Rails is looking for. I probably could have configured it the other way on the Rails side. But one way or another, I had to get Rails and Axios to come to an agreement, to meet at a table, and to agree to collectively protect the app. And so I had to mediate that discussion, and that's what ended us here. STEPH: A meeting of the minds. [chuckles] Cool, cool, cool. Yeah, that makes sense. I was just curious because then that would have changed the whole journey. But yeah, that is super interesting. And I definitely resonate with the idea of when you've really invested in your test suite, and you trust it that then when it doesn't catch something that obviously breaks the application, then that feels like something worth prioritizing and digging into and then figuring out how to bring back that parity. I don't know that I've turned on enable CSRF for feature spec. So I'm also very interested in looking at that configuration and considering if I need that for any of my future client projects if that's something that I need to remember for the future because that's very niche but good to know about. CHRIS: I feel like this only really comes up if you're working in the...it's called the odd middle ground that Inertia ends up occupying. 
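For readers who want the Rails half of that agreement spelled out, a sketch of the kind of cookie-serializing piece Chris mentions is below; the cookie name and options are assumptions. On the client side, the matching knobs are Axios's xsrfCookieName and xsrfHeaderName settings, which have to line up with the cookie written here and with the X-CSRF-Token header that Rails checks.

```ruby
# Minimal sketch: expose the CSRF token in a cookie the front end can read and
# echo back as the X-CSRF-Token header on non-GET requests. Cookie name and
# options are illustrative assumptions.
class ApplicationController < ActionController::Base
  after_action :set_csrf_cookie

  private

  def set_csrf_cookie
    # Deliberately readable by JavaScript (not httponly) so a client like Axios
    # can copy the value into the request header Rails validates.
    cookies["XSRF-TOKEN"] = { value: form_authenticity_token, same_site: :strict }
  end
end
```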
If you're in a traditional Rails app that is generating HTML server-side, forms are generated. They got the CSRF token inlined there in a hidden input. And then when you post that form, it's coming back up. The names automatically are going to match. You don't need to worry about it. And it's probably fine to not have it included in test mode. And if you're at the other end of the spectrum and you've got API interaction, and that's the way you're doing everything, then you have a different auth mechanism and cookies, and whatnot just don't apply in the same way. And so it won't really matter on that side but for a different reason. And it's only because we're in this interesting middle ground, which, again, I really love. And it's the thing that I love about Inertia. But this is a rare case where it's like, oh, we do have to bring the two sides to meet in the middle. And this is a case where, unfortunately, due to a very subtle breakage on a minor release of...a package that we're using silently broke so, yeah. But yeah, thankfully, everything is back to working. And again, we've been able to enhance the test suite in that little way that I feel confident again because this won't sneak in another time. We have coverage around this. We're good to go. So while I was very scared when this initially happened, I feel better now. I'm happy to go into the weekend feeling better about this. But that's my story. What's new in your world? STEPH: So I feel like I've been having one of those weeks where I have less code adventures. In fact, it's one of those days where I went to thoughtbot's daily sync...because we often have our client daily syncs, but then we still have a thoughtbot sync as well. And I went to the group, and I was like, I get to write code today. It's going to be a great day. All the other things I'm doing are also interesting, but I get particularly excited when I get some maker's time and get to write some code. So I feel like I've had less coding adventures recently and more hiring and process-related adventures. And specifically, I just completed the Plucky Manager Training, which is a program that's founded and led by Jen Dary, who was recently on thoughbot's podcast, The Giant Robots Smashing Into Other Giant Robots. I'll be sure to include a link in the show notes for anyone that's interested. CHRIS: I believe this was the third time she was on. It's at least the second, possibly the third. And all of them are great listens, just as an aside, so we should include links to all of them. STEPH: Yes, I think she's one of the rare guests that has been on the show three times. And I think I've only listened to the first couple minutes of that episode. But I think they talk about the fact that this is her third episode, which is really, really cool. And I'm still frankly synthesizing all the information and the ideas that I've collected from the course. But I do have a few quick takes that I'm interested in sharing with you. So the first one is my cohort...we were the Panda Cohort, so go, Pandas. And some of the things that we talked about were…, and I think that this may have been the first day. So it was three days, and it was three hours for those three days. And they're spread out over a couple of weeks, which is really nice because then you show up for those three hours of the class, but then you leave with some ideas and some things to experiment with. 
You get a week to then try out an experiment and then come back to class next time and talk about this is how it went; it went to wonderful, or it went terrible. And you get to share that with others and work through it. And in the first class, we talked about coaching versus managing, which I found just a helpful definition to review. So managing is more direct, and telling someone what to do while coaching is encouraging someone to determine their own path and find their own solution. And I find that as a team lead at thoughtbot, I'm very often more in that coaching space than I am in that managing space. I think it's frankly pretty rare that I actually need to put on a manager's hat. And I often feel like I'm wearing my coaching hat instead. And some of the other things we talked about one of them is what is work? Which is a fun question to ask. And Jen had an analogy for this speaking about imagine that you have a plastic Easter egg. So it's got two sides, and side one is all the skills and desires and things that you're fulfilled by. And side two is a company that needs those skills. And it's great when those line up and click together, like when you take a job or get a promotion. Have you ever played...do you know what I'm talking about? Those little plastic Easter eggs. Have you ever played with those as a kid? CHRIS: Yes, certainly. STEPH: [laughs] I realize I just launched into that analogy. [chuckles] And then Jen goes on to say that's totally normal for then those sides to unclick. And Jen continues to say that it's totally normal for them to unclick. So maybe the company changes direction, the company is acquired. You've fallen out of love with something that you do about your job, or you have kids, and that has changed the things that you are fulfilled by and what you're looking for. And that's not necessarily bad. So it can be like, hey, you are working on x now, and you're not fulfilled by that anymore. But then another company comes along and says, "Hey, we're working on this, and you are fulfilled by that." So then another click happens. And essentially, it's a nice analogy to represent someone's career path and the ways that we are going to shift and re-prioritize what we're interested in. But it's also a really nice way to help it feel less personal because both sides are allowed to change. The company can change. You, as an employee, can change. And then you can look for that next click that is going to match up with a company that meets your skills and things that help you feel fulfilled. One of the other topics that we talked about are hard conversations, which I love that we dug into this one because that's certainly one that I struggle with or...I mean, we all get that feeling if you have to confront someone if you have to have that uncomfortable discussion with someone. It is a very hard thing to do. And so we had some very honest conversations around what is a hard conversation? What does that represent? And essentially, they represent that there is stalled progress and something can be improved. So Jen likens a hard conversation to a tool. It's something that you can use to then help something move forward again if something feels stalled or if there's something that needs to change. And during those hard conversations, you may not get to the resolution that you're looking for. So you may be looking for a specific outcome. But you also have another person that needs time to respond and to take in everything that you have said and process that information. 
So when you have a hard conversation, you may actually only move forward an inch. So if you had a lofty goal of we're going to talk and then we're going to have this hard conversation, and we're going to get to this space... But instead, you actually just make incremental progress. Like, okay, at least this person is now aware of this concern. That might be your win for the hard conversation versus actually tackling, how are we going to address it? I just want them to be aware of this concern. And it's a very vulnerable conversation, and they often take time before you can get to that ideal resolution. But essentially, the idea is get in the game, start the conversation, and then have follow-up conversations for that hard conversation. And I really appreciated that framing because I often will think of hard conversations of oh, we have to have this hard conversation and get to this specific outcome. But if you shift the goal line to be like, no, I really just need to at least make this person aware of a concern, that makes it a lot more approachable. And then also probably yields more fruitful outcomes because that gives the other person time to think about what you've shared, to also come to the table with their own ideas, and then work together to then get to that ideal resolution. CHRIS: I like that framing a lot. I can definitely see the case where you, as someone who has recognized something that needs to change (perhaps you're a manager), have now thought about that a good bit; you've observed it, but for the individual that you're bringing that to, this may be novel. This may be a surprise for them. And so if you come into that interaction both about to share this information but then also trying to resolve it and trying to get to I need you to internalize it, and I need you to fundamentally change your behavior as a result of this conversation we're going to have, that's quite possibly not a realistic outcome. And if you're trying for that, it might inherently lead to just a bad outcome because that individual is not in a position to do that. But they are potentially ready to hear it. And so you can just achieve step one and then later have step two. So I like that a lot. STEPH: Yeah, in general, I found the course incredibly helpful, very insightful. It was also really nice to hear from other managers that are facing similar problems or perhaps novel problems and then getting to weigh in and help each other. So it's a wonderful course. I'll be sure to include a link in the show notes for anyone that is interested. And I'll probably come back with some more insights from the class because it's really...we just wrapped up. So I'm sure I still have some ideas that will percolate over time, and I want to come back and share those with the group. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them.
Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Pivoting just a bit, we have a listener question that I'm excited to dive into. This question comes from the one and only, the Edward Loveall, fellow thoughtboter. And Edward wrote in, "How does the process of software development change at different team sizes? What's a process that breaks down soon after the team starts growing? What's a process that is resilient at all sizes? And by process, I mean anything that involves other people including organizing tasks, code review, deployment, or anything else that isn't you alone writing code in a vacuum." I'm really excited about this question because I think there's a lot here. And there's actually one part that I'm struggling with a bit, so I'm curious to see what you think, Chris, about it. But I'm going to start off with saying that I think there are a number of management processes that definitely break down as a team grows. But in the spirit of Edward's question, I'm going to focus more on the software development process and how those might need to change and what starts to break as your team grows. So starting off with processes that break after the team starts growing, this one, frankly, what really starts to break is not a process specifically, but it's the lack of process that really starts to become visible and painful. So, how do we track work? Before, maybe the product manager or someone would just send you a message and say, "Hey, can you work on this?" or "Hey, can you fix this thing?" And how does code need to be reviewed before being merged? Does it need to be reviewed? Are people just merging as they get stuff done? How are deploys performed? Oh, we have a super urgent production fix that needs to go out, and the only person that knows how to deploy is out sick today? Cool. That's the type of process that I think that really breaks down, or at least you start to notice when the team starts to grow. What are your thoughts? CHRIS: I definitely feel that first one very strongly. We're feeling it right now on the team, which is still very small. There are only three developers working on the project, and then we have a product manager. And each week, we're slowly iterating, and tweaking, and honing, and trying to introduce just enough process in terms of how we define the work to be done, communicate the status of it, all of that fun stuff. We started with Trello. And we just had a board with some columns, and then we had more columns, and then we got rid of a few of them. And then we recently added a Power-Up to the Trello board, which allows for epics. So there are cards which are epics which tie to sub cards. And I'm staring at it, and I'm like, how long until we're Jira? How long can I hold out here and not be Jira? But it does feel like we're slowly iterating towards a more useful process for this team rather than process for process' sake, which I feel like is a really useful distinction. There's also a question of like, what can be known or what can be adequately measured and whatnot versus what can't be? 
So we've talked many a time on the show about estimation and velocity and trying to track that and the pitfalls inherent with that. And so there's, in my mind, two different camps. There's the process we want to avoid. And again, to reference German Velasco's wonderful blog post, Say No To More Process. And I really feel like there is a tendency often when things go wrong to then try and paper over that with process. Oh, this team didn't use the design system. So we need to write ESLint rules to make sure you can't import from the directories that aren't the thing. And it's like, we can do that, and I've definitely done that. And I will do that again in the future. But I always have the lens of do we need this? Is it worth the trade-off, the cost, the overhead, the complexity that it's bringing in? But definitely, organizing and communicating tasks is one of the ones that becomes really difficult. The more people that are working on something, the more you need probably more than one person staying out in front of them and trying to define the next bit of work that needs to be done after that. Code review feels like it probably should stay similar, with the exception that I lose the ability to review all code at some point. Right now, I'm trying to review every single PR that goes through or close to it. At some point, I'm just going to have to give up on that. But for now, that's my goal. But fundamentally, code review, I think, will hopefully take the same shape. Deployment, similarly, like, I've talked about the merge queue thing. I want to get a little bit of process in there but not too much. There is definitely some necessity for change. But I definitely want to resist the urge to change everything and to just say, like, slowly over time; we're going to have to be a big Byzantine organization with lots of rules and standard operating procedures and all of that. I've heard anecdotally, and I don't know if this is true, so maybe someone out there on the internet can correct me if I'm wrong, but my understanding is that at Google, they're pretty tight in terms of what languages and frameworks can be used and what processes, and workflows, and build tools and all of that whereas Facebook, as a counterpoint, is relatively lax. Obviously, React is used very heavily on the core web application. But there's some flexibility in terms of different languages and frameworks and things for sub-projects or small individual teams having a little bit more autonomy. And I think that's a really interesting thing of are you one large, cohesive, organized company or do you try to act like a bunch of small disparate but roughly connected teams that share good ideas but can work independently? And that changes how I would think about this question. STEPH: I really like how you're describing the addition of process. It sounds like a just-in-time process. So as you're learning that something needs to be added, then that's when you look for answers. And then you sprinkle on a bit of process that everyone agrees that feels very helpful within also the right to review and see if that still makes sense for the team. There's one additional area where I think the lack of process really shines through in addition to the number of ways that you've mentioned is also onboarding. 
So if you have a very small team and you are onboarding, it's likely that...Chris, you can let me know if I'm wrong, but when someone's joining the team, there's probably a good chance that they get to pair with you at some point, or they even get welcomed by you to the team. And then, they get an overview of the product and the codebase. And there's probably this really nice session where they get to ask you questions, and then they have that onboarding session. Does that sound about right? CHRIS: Yes. But I would go so far as to say it's not just a day or a session, but it's probably a couple of days. So yes, and. STEPH: That's even better. And with some of the smaller teams that I've seen, that onboarding process is where they are pairing with that lead person on the team. And that's going well until suddenly that lead person can't pair with everybody. And nobody has really thought about how to streamline that onboarding or how to coach or teach someone else to be a really good onboarding pair. And I have strong feelings about this area because we often focus so much on hiring, but then we drop the ball when it comes to onboarding that new, wonderful colleague that we've worked so hard to recruit. And at the end of that day, someone's going to reach out to them and say, "Hey, how was your first day?" And it makes a big difference for that person's retention as to how those first couple of days go. So I think onboarding is another really important part where, when you're a smaller team, you probably don't need much process because you have more of that personable onboarding experience. But as the team grows, there needs to be more of a process to help other teammates join the team. CHRIS: It's interesting. I think I totally agree with you that over time, there is a necessity to be more intentional and to have a little bit more structure in the process. And I don't think you're saying this, but I just want to make sure we are saying the thing that I think we believe, which is that it shouldn't replace the human that helps you onboard. Like, I still like the idea that everybody gets a pair for some amount of time when they start at a new company. And you're working together on a feature, or you're working together on bug fixes. You're shipping to production as soon as possible. But you're not doing that based on some guides in a wiki. You're doing that with another human that's helping you. There should also be guides, and a wiki, and documentation, and formalization as the organization grows but not in place of having another person that you get to talk to. STEPH: We're just going to send you a little yellow rubber duck with a little Post-It note that says, "Good luck [laughs] with your onboarding process." Definitely. I agree with everything you said. It does not replace that human element where there's someone that's helping you onboard. I just see that onboarding is one of those things that gets forgotten, or we often point someone to a README which I do think is great because then it is battle-testing our README. But then there still needs to be someone that is readily there to say, "Hey, how's it going? What are you struggling with? Can I pair with you?" There still has to be that human element that is helping guide you through the process. And I think smaller teams may forget that they actually need to assign somebody to you to make sure that you have someone that you know. Like, hey, this is who I can reach out to with all my questions.
Because they're probably not going to be comfortable posting in the company channel at that point or a larger communication to say, "Hey, I'm stuck on something." CHRIS: There's one other area that comes to mind, or I guess it's more of an anecdote that I have heard, but it speaks back to GitHub's early, early days. And they were somewhat famous for being very flat in terms of the organization and very self-organized, and everybody's figuring it out, and you're working on the thing that's most important in your mind. And for a long time, this was a celebrated facet of the company and a thing that they talked about rather publicly. And then I think there was this collective recognition, and maybe they reached a tipping point where that just didn't work anymore. Or maybe it actually hadn't been working for a bit, and there was just the collective realization of that. But it was interesting to watch from the outside as GitHub added more formalization, more structure, more managers, and hierarchy, and career ladders, and things of that nature. And I think there's a way to do all of those things in a complicated, overloaded, heavy way. But I think a different version of it is...like, you were using the word coaching earlier. Having formal structures within your organization to encourage people on their career path, to help them grow, to have structure around that, I think is a really difficult thing to get right. But I think it is critical, and I think just not having it can't be the answer past a certain probably pretty small size. So that is an interesting one where I think you do need to introduce some process and formalization around how you think about the group of people and how they work together within your organization. STEPH: I agree. I think where some folks may see a lack of hierarchy; others feel a lack of support. And adding levels of management should really be focused on the outcome is that we're helping people feel supported. So even getting feedback as you're adding those different levels of management, like, hey, did we make your life better? Did we make your life worse? I think that's a great question for management to ask as they're exploring a less flat structure. CHRIS: So, Steph, I have a question for you now on a variant of this topic. In general, we seem to be fans of having a codebase. Probably a Rails app that's got a database behind it, and that's where you put the data. Everybody commits to that same repository. It's all kind of one collected thing. And often, organizations grow to a certain size, and they're like, this is untenable. We cannot have this many people working on this same codebase. So we shall do the logical thing, which is we will break it up into small pieces. And those pieces will communicate over HTTP, and it will be great because then our teams can be separate from each other and can manage their little piece of the world. What do you think about that? Is there truth there? Is it not true at all? What do you think? STEPH: All right, so your team is getting too big, and to the point that you feel like you need to split it out so then you can have small teams, and they can all work independently on different parts and services of the codebase. I don't love the idea. I'm trying to think through because I feel like there's a lot of nuance here. But I don't love the idea that that's the driving force as to why are we making the change? 
And that is often a question that comes to mind whenever we are making a big change, either architecture or process-related is like, what's driving this? And then how are we going to measure it? And if we are driving it just because we have a large team, let's talk more. Why are people blocked? Why can't people work together? What's preventing people from being able to contribute to the same codebase? Are people blocked for a long time because they're having to wait on someone else to complete that work? I have a lot of questions that I don't know if I can fully answer your question. But my instinct is to say let's not break up the architecture just because our team grew in size. CHRIS: Yeah, I think I definitely agree with that. There's probably a breaking point where it's just too many individuals, and there'll be too much contention. But I think resisting that or at least naming that as like, okay, that's what we're saying but is that really what's true? Or are we actually feeling that this system is so deeply coupled that there's no way to change some small piece of the code without impacting other parts of it? Like, is the CSS completely untenable because we're just using global class names, and it's leaking everywhere? Okay, do we need a different solution there? And then it's actually fine. We don't need to have different services that have their own different style sheets. We just need a different approach to CSS. That's a particularly easy one to go for because there's inherently a global namespace there. But the same thing is true in a lot of different contexts. So services are a way to break things apart and enforce those boundaries. But if inherently coupling is your problem, then you're just going to be coupled over HTTP, and I think it's going to be difficult. There's a wonderful blog post by Josh Clayton, which I think does a better job than I'm doing in this moment of highlighting some of the questions I would want to ask. The blog post is titled Services are Not a Silver Bullet. And so Josh goes through and enumerates a bunch of the different versions of the story that he's heard throughout the years of well, we need to go to services because x, because our test suite is slow because pull requests are constantly having merge conflicts and whatnot, because the code is very deeply coupled and any change here affects everything else. And a fix over here broke something over there. This is no good. And so he does a really good job of presenting alternatives or at least questions that you can ask to say, like, is this the problem, or is this a symptom? And we need to address the more underlying cause. And so I think there is a point where you just can't have 1,000 people trying to commit to the same Rails codebase. That feels like it's maybe too big. But it takes a while to get to 1,000 people. And there will be times where extracting a service makes sense or integrating with an external service that exists. Like, I've talked about Stripe before as my canonical like, yeah, it's actually deeply intertwined with the data model, but they're just dealing with such a distinct complexity set over there. And they have such expertise on that that I'm happy to accept the overhead of the fact that that service lives outside of my core application, and I need to deal with synchronizing state and all of that. I will take on that complexity, but it's not worth it for everything, and it's not a silver bullet. 
Again, to reference the name of Josh's blog post there, Services are Not a Silver Bullet. And so, coming back to Edward's original question, I would say that having a monolithic codebase works for a really long time, but there is probably a breaking point somewhere well along, but fight it for as long as you can. I think. STEPH: I really like how you touched on coupling because it really helps ask those questions to get to the heart of what are the pain points that you are feeling? And it is less of a decision that is based on people and process but more if you're going to split out a portion of your architecture. It is in response to an actual business need and a business value versus some other pain points that you're trying to fix. A particular example might be like maybe you have a portion of your application that really just needs to spend a lot of time crunching data. And it's really not as specific to your application; it's something that can happen on its own. And then it's beneficial to move that outside so it can scale and relate it to the work that it needs to perform versus keeping it in-house with the application. I do want to circle back to another question that Edward included which is what's a process that is resilient at all sizes? And the ones that really come to mind for me...and these are a bit amorphous intentionally because it will look different for each company. But three areas that are very resilient at all sizes, whether you are 1 to 2 employees versus you've got hundreds or thousands it's communication, testing, and accountability. So communication, where are we headed, and how do we know what we're working on? For testing, it's how do we test our changes? Do we write tests? Do we use QA? Do we have a staging environment? What does that look like? What's our parity between staging and production? And then how do we know what's in progress, and how do we know when it's done? Those are three core areas that, regardless of your team size,,I think are very crucial to the team success. What do you think? What are some of the processes that are resilient at all sizes? CHRIS: I actually really like the list that you just provided. That is a wonderful trifecta, and I think it will take you very far, so probably not much to add from me. But I guess on that note, should we wrap up? STEPH: Let's wrap up. CHRIS: The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeeee! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris evaluates the pros and cons of using Sidekiq directly versus Active Job with Sidekiq. He sees exceptions everywhere. Steph talks about an SSL error that she encountered recently. It's officially spooky season, y'all! sidekiq-symbols (https://github.com/aprescott/sidekiq-symbols) Transcript: CHRIS: Additional radiation just makes Spider-Man more powerful. STEPH: [laughs] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world? CHRIS: Fall is in the air. It's one of those, like, came out of nowhere. I knew it was coming. I knew it was going to happen. But now it's time for pumpkin beer and pumpkin spice lattes, and exclusively watching the movie Hocus Pocus for the next month or so or some variation of those themes. But unrelated to that, I did a thing that I do once, let's call it every year or so, where I had to make the evaluation between Sidekiq or Active Job with Sidekiq as the actual implementation, as the background job engine that is running. And I just keep running through this same cycle. To highlight it, Active Job is the background job system within Rails. It is a nice abstraction that allows you to connect to any of a number of them, so I think Delayed Job is one. Sidekiq is one. Resque is probably another. I'm sure there's a bunch of others. But historically, I've almost always used Sidekiq. Every project I've worked on has used Sidekiq. But the question is do you use Active Job with the adapter set to Sidekiq and then you're sort of living in both worlds, or do you lean in entirely and you use Sidekiq? And so that would mean that your jobs are defined to include Sidekiq::Worker because that's the actual thing that provides the magic as opposed to inheriting from ApplicationJob. And then do you accept all of the trade-offs therein? And every time I go back and forth. And I'm like, well, but I want this feature, but I don't want that feature. But I want these things. So I've made a decision, but I want to talk ever so briefly through the decision points that were part of it. Have you done this back and forth? Are you familiar with the annoying choice that exists here? STEPH: It's been a while since I've had the opportunity to make that choice. I'm usually joining projects where that decision has already been made. So I can't think of a recent time that I've thought through it. And my current project is using that combination of where we are using Active Job and Sidekiq. CHRIS: So I think there's even a middle ground there where that was the configuration that I'd set up on the project that I'm working on. But you can exist in both worlds. And you can selectively opt for certain background jobs to be fully Sidekiq. And if you do that, then instead of saying, "perform_later," you say, "perform_async." And there are a couple of other configurations. It gives you access to the full Sidekiq API. And you can do things like hey, Sidekiq, here's the maximum number of retries or a handful of other things. But then you have to trade away a bunch of the niceties that Active Job gives. So as an example, one thing that Active Job provides that's really nice is the use of GlobalID. So GlobalID is a feature that they added to Rails a while back.
And it's a way to uniquely identify a given record within your system such that when you say perform_later, you can say InvitationMailer.perform_later and then pass it a user record so like an instance of a user model. And what will happen in the background is that gets serialized, but instead of serializing the whole user object because we don't actually want that, it will do the GlobalID magic. And so it'll turn into, I think it's gid:// so almost like a URL. But then it'll be, I think, your application name/model name down the road. And the perform method actually gets invoked via the background system. Then you will just get handed that user record back, but it's not the same instance of the user record. It sort of freezes and thaws it. It's really nice. It's a wonderful little feature. Sidekiq wants nothing to do with that. STEPH: I'm so glad that you highlighted that feature because that was on my mind, I think, this week when I was reviewing...somebody had made the comment where they were concerned about passing a record to a job and saying how that wouldn't play nicely with Sidekiq. And in the back of my mind, I'm like, yeah, that's right. But then I was also like, I'm pretty sure this got addressed, though. And I couldn't recall specifically if it was a Sidekiq enhancement or if it was a Rails enhancement. So you just cleared something up for me that I had not had time to confirm myself. So thanks. CHRIS: Well, to be clear, this works if you are using Active Job with Sidekiq as the adapter, but not if you are using a true Sidekiq worker. So if you opt out of the Active Job flow, then you have to say, "perform_async," and if you pass it a record, that's not going to work out particularly nicely. The other similar thing is that Sidekiq does not allow the use of keyword args, which, I'm going to be honest, I really like keyword arguments, especially for background jobs or shuttling data through your system. And there's almost a lazy evaluation. I want some nicety to make sure that when I am putting something into a background job that I'm actually using the correct call signature, essentially passing the correct data in the correct shape. Am I passing a record, or am I passing the ID? Am I passing a list of options or a single option? Those sort of trade-offs that are really easy to subtly get wrong. I came around on this one because I realized although Active Job does support keyword arguments, the way it does that is it just has a JSON serialization format for them. So a keyword argument turns into a positional array with an associated hash that allows for the lookup or whatever. Basically, again, they handle the details. You get to use keyword args, which is great, with the exception that when you're actually calling perform_later, that method perform_later is a method_missing-type magic method. So it does not actually check the keyword arguments at that point. You're basically just passing an options hash as opposed to true keyword arguments that would error because they don't match up. And so when I figured that out, I was like, oh, never mind. This doesn't actually do the thing that I care about. It's a little bit nicer in terms of the signature of the method when you're defining your background job itself, but it doesn't actually do any logical checking. It doesn't give me any safety or robustness within my system. So I don't care about that.
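For readers following along, here is a minimal sketch of the two flavors being discussed: an Active Job job enqueued with perform_later versus a native Sidekiq worker enqueued with perform_async. The class names (InvitationJob, InvitationWorker) and the welcome mailer action are hypothetical stand-ins, not code from the hosts' project; the APIs shown (perform_later, perform_async, Sidekiq::Worker, sidekiq_options) are the ones under discussion.

```ruby
# Active Job flavor: inherit from ApplicationJob, enqueue with perform_later.
# Passing the record works because Active Job serializes it as a GlobalID
# (a gid://your-app/User/123 URI) and re-locates the record when the job runs.
class InvitationJob < ApplicationJob
  queue_as :default

  def perform(user)
    InvitationMailer.welcome(user).deliver_now
  end
end

InvitationJob.perform_later(User.first)

# Native Sidekiq flavor: include Sidekiq::Worker, enqueue with perform_async.
# Sidekiq serializes arguments as plain JSON, so we pass the id and reload
# the record ourselves inside the job.
class InvitationWorker
  include Sidekiq::Worker
  sidekiq_options queue: "default", retry: 10

  def perform(user_id)
    user = User.find(user_id)
    InvitationMailer.welcome(user).deliver_now
  end
end

InvitationWorker.perform_async(User.first.id)
```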
I did find a project called sidekiq-symbols, which does some things under the hood to how Sidekiq serializes and deserializes jobs, which I think gives largely the same behavior as Active Job. So I can now define my Sidekiq jobs with keyword arguments. Things will work. I can't use GlobalID. That's still out. But that's fine. I can do a little helper method that basically does the same thing as GlobalID or at least close approximation. But sidekiq-symbols lets me have keyword arg-like signatures in my methods; basically, it is. But again, it doesn't actually do any check-in when I'm enqueueing a job, and I am sad about that. STEPH: Yeah, that's another interesting distinction. And I'm unsurprisingly with you that I would favor having keyword args and having that additional safety in place. Okay, so I've been keeping track. And so far, it sounds like we have two points because I'm doing a little scorecard here between Active Job and Sidekiq. And we have two points in favor of Active Job because they offer a GlobalID, which then allows us to pass in a record, and then it takes care of the serialization for us. And then also, keyword args, which I agree with you that's a really nice feature to have in place as well. So I'm curious, so it sounded like you're leaning towards Active Job, but I don't want to spoil the ending. CHRIS: Yes, I could see why that's what you would be taking away from the conversation thus far. So again, just to reiterate, Active Job and Sidekiq with this sidekiq-symbols extension they both support keyword args, kind of. They support defining your job with keyword args and then enqueueing a job passing something that looks like keyword args. But it ends up...nobody's actually checking anything, so it's mostly like a syntactic nicety as opposed to any sort of correctness, which is still nicer, but it's not the thing that I actually want. Either way, nobody supports it, so it is not available to me. Therefore, it is not a consideration point. The GlobalID thing is nice, but it is really, again, it's a nicety more than anything. I have gone, and I'm leaning in the direction of full Sidekiq and Sidekiq everywhere as opposed to Active Job in most cases, but then Sidekiq when we need it. And that's because Sidekiq just has a lot more power and a lot more functionality. So, in particular, Sidekiq has a feature which allows you to say...it's a block that you put at the top of your Sidekiq job that says retries exhausted or something. I think Sidekiq retries exhausted is the actual full name of that at that point, which is really unfortunate in my mind, but anyway, I'll deal. At that point, you know that Sidekiq has exhausted all of the retries, and you can treat it as failed. I'm going, to be honest, I went on a quest to find a way to say, hey, I'm going to put some work into the background. It's really important for me to know if this work succeeds or if it fails. It's very easy to know if it succeeds because that just happens in-line in the method. But we can have an exception raised at basically any point; Sidekiq does a great job of catching those, of retrying, of having fundamental mechanisms there. But this is the best that I can get for this job failed. And so Active Job, as far as I can tell, does not have anything for this in order to say, yep, we are done. We are not going to keep working on this. This work has failed. It is dead. 
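One way to approximate the GlobalID nicety in a plain Sidekiq worker, along the lines of the "little helper" Chris mentions, is to lean on Rails' GlobalID API directly. A sketch, as a variation on the hypothetical worker above:

```ruby
class InvitationWorker
  include Sidekiq::Worker

  # Sidekiq only ever sees a plain string here, but GlobalID can turn it back
  # into the original record, which is roughly what Active Job does for us.
  def perform(user_gid)
    user = GlobalID::Locator.locate(user_gid)
    InvitationMailer.welcome(user).deliver_now
  end
end

# Enqueue with the record's GlobalID string rather than the record itself.
InvitationWorker.perform_async(user.to_global_id.to_s)
```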
Dead is actually, I think, the more correct term for where we're at because failed is a temporary state, and then you retry after a failure. Whereas dead is, this has gone through all of its retries, and it will never be run again. Therefore, we should treat this as not having run. And in my case, the thing that I want to do is inform the user that this operation that we were trying to do on their behalf has not succeeded, will not succeed. And please reach out or otherwise deal with the fact that we were unable to do the thing that they asked us to do. That feels like a really important thing for me to be able to do, to be able to communicate back to my users. This is one of those situations where I'm looking at the available options, and I'm like, I feel like I can't be the only one who wants to know when something goes wrong. This feels like a thing that's important. But this is the best example that I've found, the Sidekiq retries exhausted block. And unfortunately, when I'm using it, it gets yielded the Sidekiq JSON blob deserialized, so it's like a Ruby hash. But it's still like this blob of data. It's not the same data that gets passed into perform. And so, as a result, when I want to look up the record that was associated with it, I have to do this nested dig into the available hash of data. And it just feels like this is not a well-paved path. This is not something that is a deeply thought about or recommended use case. But again, I don't feel like I'm doing something weird here. Am I doing something weird, Steph, wanting to tell my users when I was unable to do the thing they asked me to do? [chuckles] STEPH: That feels like a very rhetorical question. [laughs] CHRIS: It does. I apologize. I'm leading the witness. But in your sincere heart of hearts, what do you think? STEPH: No, that certainly doesn't sound weird. I'm actually thinking back to some of the jobs that cause me stress in regards to knowing when they failed and then having that communication of knowing that we've exhausted all the retries. And, of course, knowing when those retries are exhausted is incredibly helpful. I am intrigued, though, because you're highlighting that Active Job doesn't have the same option around setting the retry. And I'm trying to recall exactly how it's set. But I feel like I have set the retry count for Active Job. And maybe, as you mentioned before, that's because it's an abstraction, or I'm not sure if Active Job actually has that native support. So I feel a little confused there where I think my default instinct would have been Active Job does have that retry capability. But it sounds like you've discovered otherwise. CHRIS: I'm not actually sure what Active Job's core retry logic or option looks like. So fundamentally, as far as I understand it, Active Job is an abstraction. And under the hood, you're always connecting an adapter. So it's either going to be Sidekiq, or Resque, or Delayed Job, or other. And each of those systems, whichever system you have as the adapter, is the one that's actually going to be managing retries. And so I know Sidekiq happens to have as a default 25 retries. And that spans, I think it's a two-week exponential backoff. And Sidekiq has some very robust logic that they have implemented as the way retries exist within Sidekiq. I'm not sure what that would look like if you're trying to express it abstractly because it is slightly different.
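A rough sketch of the retries-exhausted hook and the nested dig Chris describes; the worker, the Invoice model, and the ExternalBilling client are hypothetical:

```ruby
class SyncInvoiceWorker
  include Sidekiq::Worker
  sidekiq_options retry: 10

  # Called once Sidekiq has given up on the job entirely (the "dead" state).
  # Note that `msg` is the raw job hash, not the arguments perform receives,
  # so the original arguments have to be dug out of msg["args"].
  sidekiq_retries_exhausted do |msg, exception|
    invoice_id = msg["args"].first
    invoice = Invoice.find_by(id: invoice_id)
    invoice&.update!(sync_status: :failed)
  end

  def perform(invoice_id)
    ExternalBilling.sync!(Invoice.find(invoice_id))
  end
end
```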
I know there was some good work that was done on Sidekiq to allow the Sidekiq options that's a method at the top level of the job, even if it's an Active Job job to express the retries. So that may be what you've seen, or there may be truly an abstraction that exists within Active Job, and then each adapter needs to know how to handle retries. But frankly, the what can Sidekiq do that Active Job can't? There's a whole bunch of stuff around limiting when you would retry limiting, enqueuing a job if there already exists one, when and how do those records get locked. There's a whole bunch of stuff. Sidekiq has a lot of power under the hood. And so if we want to be leaning into that, that's why I'm leaning towards let's just be Sidekiq all the time. Let's become Sidekiq experts. Let's accept that as a deep architectural decision within the app as opposed to just relying on the abstraction. Because fundamentally, if we're just using Active Job, we're not going to have access to the full power of Sidekiq or whatever the underlying system is, so sort of that decision that I'm making, but I don't know specifically around the retries. STEPH: Okay, thanks. That's really helpful. It's been a while since I've had to make this decision. I'm really enjoying you sharing your adventure because I'm trying to think what's the risk? If you don't use Active Job, what are the trade-offs? And you'd mentioned some of them around the GlobalID and keyword args, which are some niceties. But overall, if you don't go with the abstraction, if you lean into Sidekiq, the risk is then you want to migrate to a different enqueuing service. And something that we talk about is mitigating that risk, so then you can swap it out. That's also something I have never done or encountered where we've had to make that change. And it feels like a very low risk in my mind. CHRIS: Sidekiq feels like the thing you would migrate to, not a thing you would migrate from. It feels like it is the most powerful. And if anything, I expect at some point we'll be upgrading to Sidekiq pro or enterprise or whatever the higher versions that you pay for, but you get more features there. So in that sense, that is the calculation. That's the risk trade-off in my mind is that we're leaning into this technology and coupling ourselves more closely to it. But I don't see that as one that will reassess in the same way that people talk about Active Record and it being an ORM. And it's like, oh, we're abstracting the database underneath, and I'm like, no, I'm not. I'm always using Postgres. Please do not take Postgres. I'm not going to switch over to MySQL next week. That's totally fine if you start on MySQL. It's unlikely you're going to port over to Postgres. We may port to an entirely…like it's a Cassandra column store with a Kafka queue, I don't know, something weird down the road. But it's not going to be swapping out Postgres for MySQL or vice versa. Like you said, that's probably not a change that's going to happen. But that I think is the consideration. The other consideration I have in my mind is Active Job is the abstraction that exists within Rails. And so I can treat it as the lowest common denominator, and folks joining the project, it's nice to have that familiarity. So perform_later is the method on the Active Job jobs, and it has a certain shape to it. People may be familiar with that. Mailers will automatically use Active Job just implicitly under the hood. And so there's a familiarity, a discoverability. 
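For the retry question Steph raises: Active Job does ship its own retry hook (retry_on), while a native Sidekiq worker configures retries through sidekiq_options and leans on Sidekiq's own scheduler (25 attempts with exponential backoff by default). A sketch, again with hypothetical class and error names:

```ruby
# Active Job: retries are opted into per exception class and re-enqueued
# through whatever adapter is configured.
class SyncInvoiceJob < ApplicationJob
  retry_on ExternalBilling::TimeoutError, wait: :exponentially_longer, attempts: 5

  def perform(invoice)
    ExternalBilling.sync!(invoice)
  end
end

# Native Sidekiq: any raised exception triggers Sidekiq's retry scheduler;
# the count is capped here via sidekiq_options.
class SyncInvoiceWorker
  include Sidekiq::Worker
  sidekiq_options retry: 5

  def perform(invoice_id)
    ExternalBilling.sync!(Invoice.find(invoice_id))
  end
end
```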
It's just kind of up the middle choice. And so if I can stick with that, I think there's a nicety there. But in this case, I think I'm choosing I would like the power and consistency on the Sidekiq side, and so I'm leaning into that. STEPH: Yeah, that makes a lot of sense to me. And I liked the other example you provided around things that were not likely to swap out and Postgres, MySQL, your database being one of them. And in favor of an example that I do have for something that...I do enjoy wrapping. It's not something that I adhere to strictly, but I do enjoy it when I have the space to make this choice. So I do enjoy wrapping HTTPClients, not just because then I can swap it out for a different HTTPClient, which frankly, that's also rare that I do that. Once I choose an HTTPClient, I'm probably pretty happy, and I don't need to swap it out. But I really like being able to extend to the API specifically if they don't handle error responses in a way that I would like to or if they raise, and then I want to change the API to have a more thoughtful interface and where I don't have to rescue those errors. But instead, I can interact with this object that then represents an error state. So that was just one example that came to mind for things that I do enjoy having an abstraction around and not just so I can swap it out because that feels like a very low risk, but more frankly, so I can extend the API. CHRIS: I definitely share the I almost always wrap APIs, or I try and hide whatever the implementation detail whether it be HTTPParty, or Faraday or whatever it is that I'm using and trying to hide that deeply within the system. And then I have whatever API client that we define. And that's what we're interacting with. It's interesting that you bring up errors and exceptions there because that's the one other thing that has caused me this...what I'm describing now seems perhaps like, oh, here's just a list of pros and cons, a simple decision was made, and there we are. This represents some real soul searching on my part, if we will. And one of the last things that I ran into that was just so frustrating is that Sidekiq is explicitly built around the idea of exceptions; Sidekiq retries if there is an exception raised in the job, otherwise, it treats it as success, and that's it. That is the entirety of it. That is the story. But if you raise an exception in a job, then you can't test that job because now it's raising an exception. You can't test retries or this retry exhausted block that I'm trying to lean into. I'm like, I want to put that in a feature spec and say, oh, this job goes in the background, but it's in a failure state, and therefore, the user sees the failure message. Sorry, I can't do that because the only way to actually fail a job is via an exception. And I've actually gone to some links in this application to try to introduce more structured data flow. I've talked a bunch about the command objects and the dry-monads and all those things. And I've really loved them where I've gotten to use them. But then I run into one of these edge cases where Sidekiq is like, no, no, no, you can't do that. And so now I have parts of my system that very purposefully return data as opposed to raising an exception. And I just have to turn around and directly raise that failure as an exception, and it just feels less expressive. I actually just ran into the identical thing with Pundit. They have a little bit better control over it; I can choose whether or not I want the raising version or not. 
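A sketch of the kind of wrapper Steph describes: the HTTP library (Faraday here, purely as an example) is hidden behind a small client that returns a result object instead of raising, so callers branch on data rather than rescuing exceptions. All names are hypothetical:

```ruby
require "faraday"
require "json"

class BillingClient
  class Result
    attr_reader :body, :error

    def initialize(success:, body: nil, error: nil)
      @success = success
      @body = body
      @error = error
    end

    def success?
      @success
    end
  end

  def initialize(connection: Faraday.new(url: "https://billing.example.com"))
    @connection = connection
  end

  # Callers branch on the returned Result instead of rescuing HTTP exceptions.
  def sync_invoice(invoice_id)
    response = @connection.post("/invoices/#{invoice_id}/sync")

    if response.success?
      Result.new(success: true, body: JSON.parse(response.body))
    else
      Result.new(success: false, error: "HTTP #{response.status}")
    end
  rescue Faraday::ConnectionFailed, Faraday::TimeoutError => e
    Result.new(success: false, error: e.message)
  end
end

# result = BillingClient.new.sync_invoice(42)
# result.success? ? handle(result.body) : report_failure(result.error)
```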
But I see exceptions everywhere, and I want a little more discrete data flow. [chuckles] That is my dream. So anyway, I chose Sidekiq is the summary here. And slowly, we're going to migrate entirely to Sidekiq. And I'm going to be totally fine with it. And I'm done griping now. STEPH: This is your own little October Halloween movie, that I see exceptions everywhere. CHRIS: They're so spooky. STEPH: [laughs] That's cool about Pundit. I'm not sure I knew that, that you get to essentially turn on or off that exception flow behavior. On one hand, I'm like, that's nice. You get the option. On the other hand, I'm like, well, let's just not do it. Let's just never raise on people. But at least they give people options; that seems really cool. CHRIS: They do give the option. I think you can choose different strategies there. And also, if we're being honest, I'm newer to Pundit. And I used a different thing, which was to get the Policy Object and ask it a question. I wanted to ask, is this enabled or not? Can a user do this or not? That should not raise an exception. I'm just asking a question. We're just being real chill about this. I just want to know some information. Let's flow some data through our system. We don't need exceptions for that. STEPH: Why are you yelling at me? I just have a question. [laughs] CHRIS: Yeah. I figured out how to be easy on that front. Sidekiq apparently has no be easy mode, but that's fine. You know what? We're going to make it work, and it's going to be fine. But it is interesting deciding which of these facets of the system that I'm building do I really care about? Which are the ones where I'm like, whatever, just pick something, and we'll move forward, it's not a big deal? Versus, we're actually going to be doing a lot of work in the background. This is the thing that I care about deeply. I want to know about failure and success. I want to really understand that and have a robust answer to what our architecture looks like there. Similarly, Pundit for authorization. I believe that authorization will be a critical aspect of our system. It's typically a pretty important thing. But for us, I think we're going to have different types of users who can log in and see different subsets of data and having a consistent and concrete way that we have chosen to implement that we are able to test, that we're able to verify. I think that's another core competency within the app. But you only get to have so many of those. You can only be really good at a couple of things. And so I'm in that place where I'm like, which are our top five when I say are the things that I care a lot about? And then which are the things where I'm like, I don't know, whatever, just run with it? STEPH: Just a little bit ago, I came so close to singing because you said the I want to know phrase again. And that, I'm realizing, [laughs] is a trigger for me and a song where I want to sing. I held it back this time. CHRIS: It's smart. You got to learn anytime you sing on mic that is part of the permanent record. STEPH: Edward Loveall at thoughtbot, since I sang in a recent episode, did the delightful thing where then he grabbed that clip of where you talk a little bit, and then I sing and then encouraged everyone to go listen to it. And in which I responded, like, I would highly recommend that you save your ears and don't listen to it. But yes, singing on the mic is a thing. I do it from time to time. I can't hold it back. CHRIS: We all do. 
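For the Pundit point a moment earlier, asking the policy a question versus using the raising authorize helper, a quick sketch using Pundit's standard controller helpers; the Report model, its download? permission, and to_pdf are hypothetical:

```ruby
class ReportsController < ApplicationController
  include Pundit::Authorization

  # Non-raising: ask the policy a question and branch on the boolean answer.
  def show
    @report = Report.find(params[:id])
    @show_download_link = policy(@report).download?
  end

  # Raising: authorize raises Pundit::NotAuthorizedError when the check fails,
  # which is typically rescued centrally and turned into a 403 response.
  def download
    report = Report.find(params[:id])
    authorize report, :download?
    send_data report.to_pdf
  end
end
```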
But since it doesn't seem that you're going to sing in this moment, I think I can probably wrap up my Odyssey of choosing between Sidekiq and Active Job. I hope those details were useful to anyone other than me. It was an adventure, so I figured I'd share it. But yeah, that about wraps it up on my side. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: So, I would love to talk about an SSL error that I encountered recently. So one of the important processes in our application is sending data to another system. And while sending data to that other system, we started seeing the following error that read "Certificate verify failed." And then in parens, it states, "Unable to get local issuer certificate." So upon seeing that error, I initially thought, okay, something is wrong with their SSL certificate or their SSL configuration. And that's not something that I have control over and can fix. So we should reach out and let them know to take a look at their SSL config. But it turns out that their team already knew about the issue. They had recently updated or renewed their SSL cert, and they saw our messages were no longer being processed, and they were reaching out to us for help. So at that point, I'm still pretty sure that it's related to something on their end, and it's not something that I can really fix on our end. But we can help them troubleshoot. Maybe there's a workaround that we can add to still get messages processing while they're looking into their SSL config. It seemed like they still just needed help. So it was something that was still worth diving into. So going back to the first error, I want to talk a little bit about it because I realized that I understand SSL just enough, just the surface to get by as a developer. But then, every time that I run into a specific error with it, then I really have to refresh my understanding as to what could be wrong, so then I can troubleshoot more effectively. So for anyone that could use a refresher on that certificate verification process, when your browser or your server is connecting to a site that uses SSL, then your browser or server, whichever one you're using, is going to download that site certificate and verify a couple of things. So it's going to check does the certificate contain the domain name of the website? So essentially, you gave us a certificate. Is this your certificate?
Does it match the site that we're connecting to? Is this cert issued by a trusted certificate authority? So did someone that we trust give you this certificate? And is the cert still valid, or has it expired? So that part is pretty straightforward. The second part, "Unable to get local issuer certificate," so that's the part I was less certain about. And I took this to mean that they had passed two of those three checks that their cert included the site's name, and it had not expired. But for some reason, we aren't able to determine if their cert was issued by someone that we should trust. So following that journey, my next question was, so what are they giving us? So this is a tool that I don't get to use very often, but I reached for OpenSSL and, specifically, the s_client command, which connects to a specified domain and prints all certificates in the certificate chain. You may already know this, but the certificate chain is basically a fancy way of saying, show me all the certificates necessary to prove your site certificate was authorized by a trusted certificate authority. CHRIS: I did not know that. STEPH: Okay, I honestly didn't either. [laughs] CHRIS: I liked that you thought I would, though. So thank you, but no. [chuckles] STEPH: Yeah, it's one of those areas of SSL where I know just enough. But that was something that was new to me. I thought there was a site certificate, and I didn't realize that there is this chain of certificates that has to be honored. So going back and looking through that output of the certificate chain, that's what highlighted to me that their server was giving us their certificate and saying, hey, you should trust our site certificate. It's legit because it was authorized by, let's say, XYZ certificate. And so if it were a proper certificate chain, then they would give us that XYZ cert. And essentially, we can use this chain of certificates to get back to a trusted authority that then everybody knows that we can trust. However, they weren't actually giving us a reference certificate; they were giving us something else. So essentially, they were saying, "Hey, look at our certificate and look at this very trustworthy reference that we have." But they're actually failing to give us that reference. So to bring it all home, we can download that intermediate certificate that they reference; that is something that is publicly accessible. That's why we're able to then verify each certificate that's provided in that chain. We could go and download that intermediate certificate from that certificate authority. We could combine that with their site-specific certificate, include that in our request to their system, and then complete the certificate chain. And boom, we're back in business. But it was quite a journey. CHRIS: That is quite the journey. And yeah, I definitely knew very little of that, although everything you're saying makes sense. And I have a bunch of cubbyholes in my brain for SSL knowledge. And the words you said all fit into the spaces that I have in my brain, but I didn't know a bunch of those pieces. So thank you for sharing that. SSL and cryptography, more generally or password hashing or things like that, occupy this special place in my brain where I'm both really interested in them. And I will occasionally research them. If I see a blog article, I'll be like, oh yeah, I want to read more about this password hashing. And what's a Salt? And what's a Pepper? And what are we doing there? And what is BCrypt versus SCrypt? What are all these things? 
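A sketch of the shape of the workaround Steph describes: verification stays on, but the downloaded intermediate certificate is added to the trust store used for the request. The partner URL and certificate path are hypothetical; the chain itself can be inspected with the s_client command she mentions.

```ruby
require "net/http"
require "openssl"
require "json"

# (Inspect the chain the server actually sends with:
#   openssl s_client -connect partner.example.com:443 -showcerts)

uri = URI("https://partner.example.com/messages")

# Trust store = the usual system roots plus the intermediate certificate the
# partner's server should have been sending but wasn't.
store = OpenSSL::X509::Store.new
store.set_default_paths
store.add_file("config/certs/partner_intermediate.pem")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER # verification stays on
http.cert_store = store

payload = { message: "hello" }
response = http.post(uri.path, payload.to_json, "Content-Type" => "application/json")
puts response.code
```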
This is cool. And almost the arms race on the two sides of how do we demonstrate trust in a secure manner on the internet? But at the same time, I am not allowed to do anything with this information. I outsource this as much as humanly possible because it's one of those things that you just should not do yourself and SSL perhaps even more so. So I have configured aspects of my password hashing. But I 100% just lean on the fact that Let's Encrypt exists in the world. And prior to that, it was a little more work. But frankly, earlier on in my career, I wasn't dealing with the SSL parts of things. But I'm so grateful to Let's Encrypt as a project that exists. And now, on almost every platform that I work with, there's just a checkbox for please do the SSL work for me, make it good, make it work, and then I will be happy. And I'm so glad that that organization exists and really pushed the envelope also. I forget what it was, but it was only like three years ago where SSL was not actually nearly as common as it is now. And now it is pervasive and everywhere. And all of the sites have it, and so that is a wonderful thing. But I don't actually know much. I know that I should have it. I must have it. I should force it. That's true. So I push that out… STEPH: Hello. CHRIS: Are you trying to get me to sing? [chuckles] STEPH: [laughs] No, but I did want to know if you get the reference, the Salt-N-Pepa. CHRIS: Push It Real Good the song? Yeah, okay. STEPH: Yeah, you got it. [chuckles] CHRIS: I will just say the lyrics. I shall not sing the lyrics. I would say that, though, that yes, yes, they do that. STEPH: Thank you for acknowledging my very terrible reference. Circling back just a little bit too in regards to...I'm with you; this is a world that is not one that I am very deeply technical in and something that I learned a fair amount while troubleshooting this particular SSL error. And it was very interesting. But there's also that concern where it's like, that was interesting. And we worked around the issue, but this also feels very fragile. So we still haven't fixed it on their end where they are sending the wrong certificate. So then that's why we had to do more investigative work, and then download the certificate that they meant to send us, and then send back a complete certificate chain so that we don't have this error anymore. But should they change anything about their certificate, should they renew anything like that, then suddenly, we're going to break again. And then, the next developer is going to have to go through the same journey. And this wasn't a light journey. This was a good half-day journey to figure out what was going on and to spend the time, and then to also get that fix out to production. So it's a meaningful task that I don't want anyone else to have to go through. But we are relying on someone else updating their configuration. So, on one hand, we're in a good spot until they are able to update. But on the other hand, I wrote a heck of a commit message for the next person just describing like, friend, just grab some coffee if we're going to chat. It's a very small code change, but you need to know the scoop. So should you need to replicate this because they've changed something, or if this happens…because we work with a number of systems that we send data to. 
So if someone else should run into a similar issue, they will understand some of the troubleshooting techniques that I used and be able to look up that chain and find out if there's a missing cert or something else they need to provide. So it feels like a win, but I'm also nervous for future selves, future developers. So there's another approach that I haven't mentioned yet, but it was often a top recommendation for when dealing with SSL errors. And specifically, it was turning off SSL verification. And I saw that, and I was like, well, that won't work. I'm definitely sending sensitive, important data. And I need to verify that who I'm sending this to is really the person that I want to send this data to. So that was not an option for me. But it made me very nervous how often that was an approach that people would recommend and be like, oh, it's okay, just turn off SSL. You'll be fine. Like, don't worry about it. CHRIS: I feel like this so perfectly fits into the...some of our work is finding the information and connecting the pieces together and making it work. But some of it is that heuristic sense, that voice in the back of your head that is like, wait, I'm sorry, what? You want me to just turn off the security perimeter and hope that the velociraptors won't come in? That doesn't seem like it's going to end well. I get that that's an easy option that we have available to us right now and will solve the immediate problem but then let's play this out. There are four or five Jurassic Park movies now that tell the story of that. So let's be careful. STEPH: It always ends super well, though, right? Like, it's totally fine. [laughs] CHRIS: [laughs] Exclusively. Although it's funny that you mentioned OpenSSL no verify because just this past week, I used that very same configuration. I think it was okay in my case; I'm pretty sure. But it is interesting because when I saw it, I was like, oh no, can't do that. Certainly not that. Don't turn off the security feature. That's the wrong way to deal with the issue. But in the particular case that I'm working with, I'm using Redis, Heroku Redis, in particular, in a Heroku configuration. And the nature of how Heroku configures the Redis instances and the connectivity to our app into our dyno...I forget why. I read an article. They wrote it; Heroku wrote it. I trust them; they're good. I've outsourced my trust to people that I do trust. The trust chain actually maps really well to the certificate trust chain. I trust that Heroku has taken security deeply seriously. And for some reason, their configuration of Redis requires that I turn on OpenSSL no verify mode. So I'm using this now both in Sidekiq, and then we're using our Redis instance for our Rails cache as well. So in both cases, I said, "It's fine. Don't worry about it." I used the Don't worry about it configuration. And I didn't love it but I think it's okay. And partly, I'm trying to say this into the internet radio right now just in case anyone's listening who's like, no, no, no, you can't do that. That's bad. So I'm willing to be deeply wrong on the internet in favor of someone telling me and then I get to get out in front of it. But I think it's fine. Pretty sure it's fine. It should be fine. STEPH: I love love love that you gave a very visual example of velociraptors, and then you're like, oh, but I turned it off. [laughs] So I'm going to start sending you a velociraptor gif each day. CHRIS: I hope you do. I hope the internet holds you accountable to that. 
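For reference, the Heroku Redis configuration Chris is describing usually looks something like the sketch below: verification is disabled because the add-on presents a self-signed certificate, but the connection itself still happens over TLS. The file locations and environment variable are assumptions; Heroku's own documentation is the source of truth here.

```ruby
# config/initializers/sidekiq.rb
redis_config = {
  url: ENV["REDIS_URL"],
  # Heroku Redis presents a self-signed certificate, so the chain can't be
  # verified against a public CA; the traffic is still encrypted over TLS.
  ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE }
}

Sidekiq.configure_server { |config| config.redis = redis_config }
Sidekiq.configure_client { |config| config.redis = redis_config }

# config/environments/production.rb -- same idea for the Rails cache
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_URL"],
  ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE }
}
```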
STEPH: [laughs] CHRIS: And I really look forward to [laughs] moving forward because that's a great way to start the day. Well, it doesn't need to start the day, but I look forward to them. STEPH: [laughs] I am really intrigued because I'm with you. Like you said, there are certain entities that are in our trust chain where it's like, hey, you are running this for us, and so I do have faith and trust in you that you wouldn't steer me wrong and provide a bad recommendation. Someone on Stack Overflow telling me to turn off SSL verify? Uh, that's not my trust chain. Heroku or someone else telling me, I'm going to take it a little more seriously. And so I'm also interested in hearing from...what'd you say? You're speaking into the internet phone. [laughs] What'd you say? CHRIS: I think I said internet radio. But yeah, in a way. I mean, we're recording over Skype right now. So in a manner of speaking, we're on the internet phone to make our internet radio show. STEPH: [laughs] Oh goodness, the internet radio. I'm also intrigued to hear if other people are like, oh, no, no, no. Yeah, that sounds like an interesting scenario. Because I would think you'd still want your connection to...you said it's for Redis. So you still want that connection to be verified. But then if Redis itself can't have a specific...yeah, we're testing the boundaries of my SSL knowledge here as to how the heck you would even establish that SSL connection or the verification process. CHRIS: Me too. And it also exists in an interesting space where Heroku is rather clear in their documentation about this. And it was a surprising claim when I saw it. And so, I don't expect them to be flippant about a thing that is important. Like, if they're like, "No, no, no, it is okay. You can turn off the security thing, don't worry." I trust that they're not just like, oh, we didn't think about it too much. But we figured why not? It's not a big deal. I'm sure that they have thought about it deeply because it is an important thing. And so in a weird way, my trust of them and the severity of what this thing represents, I'm like, oh yeah, I super trust that because you're not going to get a major thing wrong. You might get a minor, small, subtle thing wrong. But this is a pretty major configuration change. As I say it, I'm now getting more worried. I'm now like, I feel fine about this. This doesn't seem like a problem at all. But then I keep saying stuff, and I'm like, oh no. That's why I love having a podcast; I find out things about myself as I talk into a microphone to you. STEPH: We come here to share our deep, dark developer secrets. CHRIS: Spooky developer therapy. STEPH: But just to clarify, even though you've turned off the SSL verify, you're still connecting over SSL. CHRIS: Yes, I believe that's the case. And if I'm remembering, I think the nature of how this works is they're using a self-signed certificate because of shared infrastructure or something, something that made sense when I read it. But it was the idea that they are doing a self-signed certificate. Therefore, to what you were talking about earlier, there isn't the certificate authority in the chain of those because it's self-signed. And so, they are not a trusted certificate authority. Therefore, that certificate that they have generated would not be trusted. But it does still allow for the SSL handshake and then communication to happen over SSL. It's just that fundamental question of trust. I'm saying, in this case, for reasons, it's okay. Trust me that I trust them.
We're good. Which, again, I don't feel great about, but I think yes, it is still SSL, but it is a self-signed certificate. So we have to make this configuration change. STEPH: Yeah, all of that makes sense. And it certainly sounds like you have been very thoughtful about that change and put in some investigative work. So on that note, I have a very unrelated bad joke for you. CHRIS: I'm very excited. STEPH: All right, here we go. All right, so what do you call an alligator wearing a vest? CHRIS: I don't know. What do you call an alligator wearing a vest? STEPH: An investigator. [laughter] On that note, shall we wrap up? CHRIS: Oh, let's wrap up. We should also include a link in the show notes to the episode where you told the joke about the elephant hiding in the trees because that's one of my favorite jokes. You slayed me with that one. [laughs] But on that note, yes, let us wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Longtime listener and friend of the show, Gio Lodi, released a book y'all should check out and Chris and Steph ruminate on a listener question about tension around marketing in open-source. Say No To More Process, Say Yes To Trust by German Velasco (https://thoughtbot.com/blog/say-no-to-more-process-say-yes-to-trust) Test-Driven Development in Swift with SwiftUI and Combine by Gio Lodi (https://tddinswift.com/) Transcript: CHRIS: Our golden roads. STEPH: All right. I am also golden. CHRIS: [vocalization] STEPH: Oh, I haven't listened to that episode where I just broke out in song in the middle. Oh, you're about to add the [vocalization] [chuckles]. CHRIS: I don't know why, though. Oh, golden roads, Golden Arches. STEPH: Golden Arches, yeah. CHRIS: Man, I did not know that my brain was doing that, but my brain definitely connected those without telling me about it. STEPH: [laughs] CHRIS: It's weird. People talk often about the theory that phones are listening, and then you get targeted ads based on what you said. But I'm almost certain it's actually the algorithms have figured out how to do the same intuitive leaps that your brain does. And so you'll smell something and not make the nine steps in between, but your brain will start singing a song from your childhood. And you're like, what is going on? Oh, right, because when I was watching Jurassic Park that one time, we were eating this type of chicken, and therefore when I smell paprika, Jurassic Park theme song. I got it, of course. STEPH: [laughs] CHRIS: And I think that's actually what's happening with the phones. That's my guess is that you went to a site, and the phones are like, cool, I got it, adjacent to that is this other thing, totally. Because I don't think the phones are listening. Occasionally, I think the phones are listening, but mostly, I don't think the phones are listening. STEPH: I definitely think the phones are listening. CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey. So we have a bit of exciting news where we received an email from Gio Lodi, who is a listener of The Bike Shed. And Gio sent an email sharing with us some really exciting news that they have published a book on Test-Driven Development in Swift. And they acknowledge us in the acknowledgments of the book. Specifically, the acknowledgment says, "I also want to thank Chris Toomey and Steph Viccari, who keep sharing ideas on testing week after week on The Bike Shed Podcast." And that's just incredible. I'm so blown away, and I feel officially very famous. CHRIS: This is how you know you're famous when you're in the acknowledgments of a book. But yeah, Gio is a longtime listener and friend of the show. He's written in many times and given us great tips, and pointers, and questions, and things. And I've so appreciated Gio's voice in the community. And it's so wonderful, frankly, to hear that he has gotten value out of the show and us talking about testing. Because I always feel like I'm just regurgitating things that I've heard other people saying about testing and maybe one or two hard-learned truths that I've found. But it's really wonderful. And thank you so much, Gio. 
And best of luck for anyone out there who is doing Swift development and cares about testing or test-driven development, which I really think everybody should. Go check out that book. STEPH: I must admit my Swift skills are incredibly rusty, really non-existent at this point. It's been so long since I've been in that world. But I went ahead and purchased a copy just because I think it's really cool. And I suspect there are a lot of testing conversations that, regardless of the specific code examples, still translate. At least, that's the goal that you and I have when we're having these testing conversations. Even if they're not specific to a language, we can still talk about testing paradigms and strategies. So I purchased a copy. I'm really looking forward to reading it. And just to change things up a bit, we're going to start off with a listener question today. So this listener question comes from someone very close to the show. It comes from Thom Obarski. Hi, Thom. And Thom wrote in, "So I heard on a recent podcast I was editing some tension around marketing and open source. Specifically, a little perturbed at ReactJS that not only were people still dependent on a handful of big companies for their frameworks, but they also seem to be implying that the cachet of Facebook and having developer mindshare was not allowing smaller but potentially better solutions to shine through. In your opinion, how much does marketing play in the success of an open-source project framework rather than actually being the best tool for the job?" So a really thoughtful question. Thanks, Thom. Chris, I'm going to kick it over to you. What are your thoughts about this question? CHRIS: Yeah, this is a super interesting one. And thank you so much, Thom, although I'm not sure that you're listening at this point. But we'll send you a note that we are replying to your question. And when I saw this one come through, it was interesting. I really love the kernel of the discussion here, but it is, again, very difficult to tease apart the bits. I think that the way the question was framed is like, oh, there's this bad thing that it's this big company that has this big name, and they're getting by on that. But really, there are these other great frameworks that exist, and they should get more of the mindshare. And honestly, I'm not sure. I think marketing is a critically important aspect of the work that we do both in open source and, frankly, everywhere. And I'm going to clarify what I mean by that because I think it can take different shapes. But in terms of open-source, Facebook has poured a ton of energy and effort and, frankly, work into React as a framework. And they're also battle testing it on facebook.com, a giant website that gets tons of traffic, that sees various use cases, that has all permissions in there. They're really putting it through the wringer in that way. And so there is a ton of value just in terms of this large organization working on and using this framework in the same way that GitHub and using Rails is a thing that is deeply valuable to us as a community. So I think having a large organization associated with something can actually be deeply valuable in terms of what it produces as an outcome for us as consumers of that open-source framework. I think the other idea of sort of the meritocracy of the better framework should win out is, I don't know, it's like a Field of Dreams. Like, if you build it, they will come. It turns out I don't believe that that's actually true. 
And I think selling is a critical part of everything. And so if I think back to DHH's original video from so many years ago of like, I'm going to make a blog in 15 minutes; look at how much I'm not doing. That was a fantastic sales pitch for this new framework. And he was able to gain a ton of attention by virtue of making this really great sales pitch that sold on the merits of it. But that was marketing. He did the work of marketing there. And I actually think about it in terms of a pull request. So I'm in a small organization. We're in a private repo. There's still marketing. There's still sales to be done there. I have to communicate to someone else the changes that I'm making, why it's valuable to the system, why they should support this change, this code coming into the codebase. And so I think that sort of communication is critical to the whole conversation. And so the same thing happens at the level of open source. I would love for the best framework to always win, but we also need large communities with Stack Overflow answers and community-supported plugins and things like that. And so it's a really difficult thing to treat marketing as just other, this different, separate thing when, in fact, I think they're all intertwined. And marketing is critically important, and having a giant organization behind something can actually have negative aspects. But I think overall, it really is useful in a lot of cases. Those are some initial thoughts. What do you think, Steph? STEPH: Yeah, those are some great initial thoughts. I really agree with what you said. And I also like how you brought in the comparison of pull requests and how sales is still part of our job as developers, maybe not in the more traditional sense but in the way that we are marketing and communicating with the team. And circling back a bit to what you were saying earlier about how this is phrased, I think I typically agree that there's nothing nefarious that's afoot in regards to just because a larger company is sponsoring an open-source project or they are the ones responsible for it, I don't think there's anything necessarily bad about that. And I agree with the other points that you made where it is helpful that these teams have essentially cultivated a framework or a project that is working for their team, that is helping their company, and then they have decided to open source it. And then, they have the time and energy that they can continue to invest in that project. And it is battle-tested because they are using it for their own projects as well. So it seems pretty natural that a lot of us then would gravitate towards these larger, more heavily supported projects and frameworks. Because then that's going to make our job easier and also give us more trust that we can turn to them when we do need help or have issues. Or, like you mentioned, when we need to look up documentation, we know that that's going to be there versus some of the other smaller projects. They may also be wonderful projects. But if they are someone that's doing this in their spare time just on the weekends and yet I'm looking for something that I need to be incredibly reliable, then it probably makes sense for me to go with something that is supported by a team that's getting essentially paid to work on that project, at least that they're backed by a larger company. Versus if I'm going with a smaller project where someone is doing some wonderful work, but realistically, they're also doing it more on the weekends or in their spare time.
So boiling it down, it's similar to what you just said where marketing plays a very big part in open source, and the projects and frameworks that we adopt, and the things that we use. And I don't think that's necessarily a bad thing. CHRIS: Yeah. I think, if anything, it's possibly a double-edged sword. Part of the question was around does React get to benefit just by the cachet of Facebook? But Facebook, as a larger organization sometimes that's a positive thing. Sometimes there's ire that is directed at Facebook as an organization. And as a similar example, my experience with Google and Microsoft as large organizations, particularly backing open-source efforts, has almost sort of swapped over time, where originally, Microsoft there was almost nothing of Microsoft's open-source efforts that I was using. And I saw them as this very different shape of a company that I probably wouldn't be that interested in. And then they have deeply invested in things like GitHub, and VS Code, and TypeScript, and tons of projects that suddenly I'm like, oh, actually, a lot of what I use in the world is coming from Microsoft. That's really interesting. And at the same time, Google has kind of gone in the opposite direction for me. And I've seen some of their movements switch from like, oh Google the underdog to now they're such a large company. And so the idea that the cachet, as the question phrase, of a company is just this uniformly positive thing and that it's perhaps an unfair benefit I don't see that as actually true. But actually, as a more pointed example of this, I recently chose Svelte over React, and that was a conscious choice. And I went back and forth on it a few times, if we're being honest, because Svelte is a much smaller community. It does not have the large organizational backing that React or other frameworks do. And there was a certain marketing effort that was necessary to raise it into my visibility and then for me to be convinced that there is enough there, that there is a team that will maintain it, and that there are reasons to choose that and continue with it. And I've been very happy with it as a choice. But I was very conscious in that choice that I'm choosing something that doesn't have that large organizational backing. Because there's a nicety there of like, I trust that Facebook will probably keep investing in React because it is the fundamental technology of the front end of their platform. So yeah, it's not going to go anywhere. But I made the choice of going with Svelte. So it's an example of where the large organization didn't win out in my particular case. So I think marketing is a part of the work, a part of the conversation. It's part of communication. And so I am less negative on it, I think, than the question perhaps was framed, but as always, it depends. STEPH: Yeah, I'm trying to think of a scenario where I would be concerned about the fact that I'm using open source that's backed by a specific large company or corporation. And the main scenario I can think of is what happens when you conflict or if you have values that conflict with a company that is sponsoring that project? So if you are using an open-source project, but then the main community or the company that then works on that project does something that you really disagree with, then what do you do? How do you feel about that situation? Do you continue to use that open-source project? Do you try to use a different open-source project? 
And I had that conversation frankly with myself recently, thinking through what to do in that situation and how to view it. And I realize this may not be how everybody views it, and it's not appropriate for all situations. But I do typically look at open-source projects as more than who they are backed by, but the community that's actively working on that project and who it benefits. So even if there is one particular group that is doing something that I don't agree with, that doesn't necessarily mean that wholesale I no longer want to be a part of this community. It just means that I still want to be a part, but I still want to share my concerns that I think a part of our community is going in a direction that I don't agree with or I don't think is a good direction. That's, I guess, how I reason with myself; even if an open-source project is backed by someone that I don't agree with, either one, you can walk away. That seems very complicated, depending on your dependencies. Or two, you find ways to then push back on those values if you feel that the community is headed in a direction that you don't agree with. And that all depends on how comfortable you are and how much power you feel like you have in that situation to express your opinion. So it's a complicated space. CHRIS: Yeah, that is a super subtle edge case of all of this. And I think I aligned with what you said of trying to view an open-source project as more generally the community that's behind it as opposed to even if there's a strong, singular organization behind it. But that said, that's definitely a part of it. And again, it's a double-edged sword. It's not just, oh, giant company; this is great. That giant company now has to consider this. And I think in the case of Facebook and React, that is a wonderful hiring channel for them. Now all the people that use React anywhere are like, "Oh man, I could go work at Facebook on React? That's exciting." That's a thing that's a marketing tool from a hiring perspective for them. But it cuts both ways because suddenly, if the mindshare moves in a different direction, or if Facebook as an organization does something complicated, then React as a community can start to shift away. Maybe you don't move the current project off of it, but perhaps you don't start the next one with it. And so, there are trade-offs and considerations in all directions. And again, it depends. STEPH: Yeah. I think overall, the thing that doesn't depend is marketing matters. It is a real part of the ecosystem, and it will influence our decisions. And so, just circling back to Thom's question, I think it does play a vital role in the choices that we make. CHRIS: Way to stick the landing. STEPH: Thanks. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. 
Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: Changing topics just a bit, what's new in your world? CHRIS: Well, we had what I would call a mini perfect storm this week. We broke the build but in a pretty solid way. And it was a little bit difficult to get it back under control. And it has pushed me ever so slightly forward in my desire to have a fully optimized CI and deploy pipeline. Mostly, I mean that in terms of ratcheting. I'm not actually going to do anything beyond a very small set of configurations. But to describe the context, we use pull requests because that's the way that we communicate. We do code reviews, all that fun stuff. And so there was a particular branch that had a good amount of changes, and then something got merged. And this other pull request was approved. And that person then clicked the rebase and merge button, which I have configured the repository, so that merge commits are not allowed because I'm not interested in that malarkey in our history. But merge commits or rebase and merge. I like that that makes sense. In this particular case, we ran into the very small, subtle edge case of if you click the rebase and merge button, GitHub is now producing a new commit that did not exist before, a new version of the code. So they're taking your changes, and they are rebasing them onto the current main branch. And then they're attempting to merge that in. And A, that was allowed. B, the CI configuration did not require that to be in a passing state. And so basically, in doing that rebase and merge, it produced an artifact in the build that made it fail. And then attempting to unwind that was very complicated. So basically, the rebase produced...there were duplicate changes within a given file. So Git didn't see it as a conflict because the change was made in two different parts of the file, but those were conflicting changes. So Git was like, this seems like it's fine. I can merge this, no problem. But it turns out from a functional perspective; it did not work. The build failed. And so now our main branch was failing and then trying to unwind that it just was surprisingly difficult to unwind that. And it really highlighted the importance of keeping the main branch green, keeping the build always passing. And so, I configured a few things in response to this. There is a branch protection rule that you can enable. And let me actually pull up the specific configuration that I set up. So I now have enabled require status checks to pass before merging, which, if we're being honest, I thought that was the default. It turns out it was not the default. So we are now requiring status checks to pass before merging. I'm fully aware of the awkward, painful like, oh no, the build is failing but also, we have a bug. We need to deploy this. We must get something merged in. So hopefully, if and when that situation presents itself, I will turn this off or somehow otherwise work around it. But for now, I would prefer to have this as a yeah; this is definitely a configuration we want. So require status checks to pass before merging and then require branches to be up to date before merging. 
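Those two settings live under the repository's branch protection rules in the GitHub UI, which is where Chris set them. For the curious, a rough equivalent through the API might look like this with the Octokit gem; the repository name, token, and status check context are all assumptions, and the exact options Octokit expects may differ by version:

```ruby
require "octokit"

client = Octokit::Client.new(access_token: ENV["GITHUB_TOKEN"])

# Hypothetical repository; the context string must match what CircleCI reports.
client.protect_branch(
  "example-org/example-app",
  "main",
  required_status_checks: {
    strict: true,                     # "Require branches to be up to date before merging"
    contexts: ["ci/circleci: build"]  # "Require status checks to pass before merging"
  },
  enforce_admins: false,
  required_pull_request_reviews: nil,
  restrictions: nil
)
```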
So the button that does the rebase and merge, I don't want that to actually do a rebase on GitHub. I want the branch to already be up to date. Basically, I only ever want fast-forward merges on our main branch. So all code should be ahead of main, and we are simply updating what main points at. We are not creating new code. That code has run on CI, that version of the code specifically. We are fully rebased and up to date on top of main, and that's how we're going. STEPH: All of that is super interesting. I have a question about the workflow. I want to make sure I'm understanding it correctly. So let's say that I have issued a PR, and then someone else has merged into the main branch. So now my PR is behind main, and I don't have that latest commit. With the new configuration, can I still use the rebase and merge, or will I need to rebase locally and then push up my branch before I can merge into main, at least using the GitHub UI? CHRIS: I believe that you would be forced to rebase locally, force push, and then CI would rebuild, and that's what it is. So I think that's what require branches to be up to date before merging means. So that's my hope. That is the intention here. I do realize that's complicated. So this requirement, which I like, because again, I really want the idea that no, no, no, we, the developers, are in charge of that final state. That final state should always run as part of a build of CI on our pull request/branch before going into main. So no code should be new. There should be no new commits that have never been tested before going into main. That's my strong belief. I want that world. I realize that's...I don't know. Maybe I'm getting pedantic, or I'm a micromanager of the Git history or whatever. I'm fine with any of those insults that people want to lob at me. That's fine. But that's what I feel. That said, this is a nuisance. I'm fully aware of that. And so imagine the situation where we got a couple of different things that have been in flight. People have been working on different...say there are three pull requests that are all coming to completion at the same time. Then you start to go to merge something, and you realize, oh no, somebody else just merged. So you rebase, and then you wait for CI to build. And just as the CI is completing, somebody else merges something, and you're like, ah, come on. And so then you have to one more time rebase, push, wait for the build to be green. So I get that that is not an ideal situation. Right now, our team is three developers. So there are few enough of us that I feel like this is okay. We can manage this via human intervention and just deal with the occasional wait. But in the back of my mind, of course, I want to find a better solution to this. So what I've been exploring…there's a handful of different utilities that I'm looking at, but they are basically merge queues as an idea. So there are three that I'm looking at, or maybe just two, but there's mergify.io, which is a hosted solution that does this sort of thing. And then Shopify has a merge queue implementation that they're running. So the idea with this is when you as a developer are ready to merge something, you add a label to it. And when you add that label, there's some GitHub Action or otherwise some workflow in the background that sees that this has happened and now adds it to a merge queue. So it knows all of the different things that might want to be merged. And this is especially important as the team grows so that you don't get that contention.
You can just say, "Yes, I would like my changes to go out into production." And so, when you label it, it then goes into this merge queue. And the background system is now going to take care of any necessary rebases. It's going to sequence them, so it's not just constantly churning all of the branches. It's waiting because it knows the order that they're ideally going to go out in. If CI fails for any of them because rebasing has suddenly put you in an inconsistent state, if your build fails, then it will kick you out of the merge queue. It will let you know. So it will send you a notification in some manner and say, "Hey, hey, hey, you got to come look at this again. You've been kicked out of the merge queue. You're not going to production." But ideally, it adds that layer of automation to, frankly, this nuisance of having to keep things up to date and always wanting code to be run on CI and on a pull request before it gets into main. Then the ideal version is when it does actually merge your code, it pings you in Slack or something like that to say, "Hey, your changes just went out to production." Because the other thing I'm hoping for is a continuous deployment. STEPH: The idea of a merge queue sounds really interesting. I've never worked with a process like that. And one of the benefits I can see is if I know I'm ready for something to go like if I'm waiting on a green build and I'm like, hey, as soon as this is green, I'd really like for it to get merged. Then currently, I'm checking in on it, so I will restart the build. And then, every so often, I'm going back to say, "Okay, are you green? Are you green? Can I merge?" But if I have a merge queue, I can say, "Hey, merge queue, when this is green, please go and merge it for me." If I'm understanding the behavior correctly, that sounds really nifty. CHRIS: I think that's a distinct but useful aspect of this is the idea that when you as a developer decide this PR is ready to go, you don't need to wait for either the current build or any subsequent builds. If there are rebases that need to happen, you basically say, "I think this code's good to go. We've gotten the necessary approvals. We've got the buy-in and the teams into this code." So cool, I now mark it as good. And you can walk away from it, and you will be notified either if it fails to get merged or if it successfully gets merged and deployed. So yes, that dream of like, you don't have to sit there watching the pot boil anymore. STEPH: Yeah, that sounds nice. I do have to ask you a question. And this is related to one of the blog posts that you and I love deeply and reference fairly frequently. And it's the one that's written by German Velasco about Say No to More Process, and Say Yes to Trust. And I'm wondering, based on the pain that you felt from this new commit, going into main and breaking the main build, how do you feel about that balance of we spent time investigating this issue, and it may or may not happen again, and we're also looking into these new processes to avoid this from happening? I'm curious what your thought process is there because it seems like it's a fair amount of work to invest in the new process, but maybe that's justified based on the pain that you felt from having to fix the build previously. CHRIS: Oh, I love the question. I love the subtle pushback here. I love this frame of mind. I really love that blog post. German writes incredible blog posts. And this is one that I just keep coming back to.
In this particular case, when this situation occurred, we had a very brief...well, it wasn't even that brief because actually unwinding the situation was surprisingly painful, and we had some changes that we really wanted to get out, but now the build was broken. And so that churn and slowdown of our build pipeline and of our ability to actually get changes out to production was enough pain that we're like, okay, cool. And then the other thing is we actually all were in agreement that this is the way we want things to work anyway, that idea that things should be rebased and tested on CI as part of a pull request. And then we're essentially only doing fast-forward merges on the main branch, or we're fast forward merging main into this new change. That's the workflow that we wanted. So this configuration was really just adding a little bit of software control to the thing that we wanted. So it was an existing process in our minds. This is the thing we were trying to do. It's just kind of hard to keep up with, frankly. But it turns out GitHub can manage it for us and enforce the process that we wanted. So it wasn't a new process per se. It was new automation to help us hold ourselves to the process that we had chosen. And again, it's minimally painful for the team given the size that we're at now, but I am looking out to the future. And to be clear, this is one of the many things that fall on the list of; man, I would love to have some time to do this, but this is obviously not a priority right now. So I'm not allowed to do this. This is explicitly on the not allowed to touch list, but someday. I'm very excited about this because this does fundamentally introduce some additional work in the pipeline, and I don't want that. Like you said, is this process worth it for the very small set of times that it's going to have a bad outcome? But in my mind, the better version, that down the road version where we have a merge queue, is actually a better version overall, even with just a tiny team of three developers that are maybe never even conflicting in our merges, except for this one standout time that happens once every three months or whatever. This is still nicer. I want to just be able to label a pull request and walk away and have it do the thing that we have decided as a team that we want. So that's the dream. STEPH: Oh, I love that phrasing, to label a pull request and be able to walk away. Going back to our marketing, that really sells that merge queue to me. [laughs] Mid-roll Ad And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API. With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city. The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. 
If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby. CHRIS: To be clear, and this is to borrow on some of Charity Majors' comments around continuous deployment and whatnot, is that a developer should stay very close to the code if they are merging it. Because if we're doing continuous deployment, that's going to go out to production. If anything's going to happen, I want that individual to be aware. So ideally, there's another set of optimizations that I need to make on top of this. So we've got the merge queue, and that'll be great. Really excited about that. But if we're going to lean into this, I want to optimize our CI pipeline and our deployment pipeline as much as possible such that even in the worst case where there's three different builds that are fighting for contention and trying to get out, the longest any developer might go between labeling a pull request and saying, "This is good to go," and it getting out to production, again, even if they're contending with other PRs, is say 10, 15 minutes, something like that. I want Slack to notify them and them to then re-engage and keep an eye on things, see if any errors pop up, anything like that that they might need to respond to. Because they're the one that's got the context on the code at that point, and that context is decaying. The minute you've just merged a pull request and you're walking away from that code, the next day, you're like, what did I work on? I don't remember that at all. That code doesn't exist anymore in my brain. And so, staying close to that context is incredibly important. So there's a handful of optimizations that I've looked at in terms of the CircleCI build. I've talked about my not rebuilding when it actually gets fast-forward merged because we've already done that build on the pull request. I'm being somewhat pointed in saying this has to build on a pull request. So if it did just build on a pull request, let's not rebuild it on main because it's identically the same commit. CircleCI, I'm looking at you. Give me a config button for that, please. I would really love that config button. But there are a couple of other things that I've looked at. There's RSpec::Retry from NoRedInk, which will allow for some retry semantics. Because it will be really frustrating if your build breaks and you fall out of the merge queue. So let's try a little bit of retry logic on there, particularly around feature specs, because that's where this might happen. There's Knapsack Pro, which is a really interesting thing that I've looked at, which does parallelization of your RSpec test suite. But it does it in a different way than say Circle does. It actually runs a build queue, and each test gets sent over, and they have build agents on their side. And it's an interesting approach. I'm intrigued. I think it could use some nice speed-ups. There's esbuild on the Heroku side so that our assets build so much more quickly. There are lots of things. I want to make it very fast. But again, this is on the not allowed to do it list. [laughs] STEPH: I love how most of the world has a to-do list, and you have this not-allowed to-do list that you're adding items to.
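As a concrete example of the retry piece, rspec-retry is typically wired up along these lines; the retry count and the decision to scope it to feature specs are assumptions, not Chris's actual settings:

```ruby
# Gemfile
group :test do
  gem "rspec-retry"
end

# spec/rails_helper.rb
require "rspec/retry"

RSpec.configure do |config|
  config.verbose_retry = true                 # log each retry attempt
  config.display_try_failure_messages = true  # show why earlier tries failed

  # Feature specs are where flakiness tends to show up, so only retry those.
  config.around(:each, type: :feature) do |example|
    example.run_with_retry retry: 3
  end
end
```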
And I'm really curious what all is on the not allowed to touch lists or not allowed to-do list. [laughs] CHRIS: I think this might be inherent to being a developer is like when I see a problem, I want to fix it. I want to optimize it. I want to tweak it. I want to make it so that that never happens again. But plenty of things...coming back to German's post of Say No to More Process, some things shouldn't be fixed, or the cost of fixing is so much higher than the cost of just letting it happen again and dealing with it manually at that moment. And so I think my inherent nature as a developer there's a voice in my head that is like, fix everything that's broken. And I'm like, sorry. Sorry, brain, I do not have that kind of time. And so I have to be really choosy about where the time goes. And this extends to the team as well. We need to be intentional around what we're building. Actually, there's a feeling that I've been feeling more acutely than ever, but it's the idea of this trade-off or optimization between speed and getting features out into the world and laying the right fundamentals. We're still very early on in this project, and I want to make sure we're thinking about things intentionally. I've been on so many projects where it's many years after it started and when I ask someone, "Hey, why do your background jobs work that way? That's a little weird." And they're like, "Yeah, that was just a thing that happened, and then it never changed. And then, we copied it and duplicated, and that pattern just got reinforced deeply within the app. And at this point, it would cost too much to change." I've seen that thing play out so many times at so many different organizations that I'm overwhelmed with that knowledge in the back of my head. And I'm like, okay, I got to get it just right. But I can't take the time that is necessary to get it, quote, unquote, "Just right." I do not have that kind of time. I got to ship some features. And this tension is sort of the name of the game. It's the thing I've been doing for my entire career. But now, given the role that I have with a very early-stage startup, I've never felt it more acutely. I've never had to be equally as concerned with both sides of that. Both matter all the more now than they ever have before, and so I'm kind of existing in that space. STEPH: I really like that phrasing of that space because that deeply resonates with me as well. And that not allowed to-do list I have a similar list. For me, it's just called a wishlist. And so it's a wishlist that I will revisit every so often, but honestly, most things on there don't get done. And then I'll clear it out every so often when I feel it's not likely that I'm going to get to it. And then I'll just start fresh. So I also have a similar this is what I would like to do if I had the time. And I agree that there's this inclination to automate as well. As soon as we have to do something that felt painful once, then we feel like, oh, we should automate it. And that's a conversation that I often have with myself is at what point is the cost of automation worthwhile versus should we just do this manually until we get to that point? So I love those nuanced conversations around when is the right time to invest further in this, and what is the impact? And what is the cost of it? And what are the trade-offs? And making that decision isn't always clear. 
And so I think that's why I really enjoy these conversations because it's not a clear rubric as to like, this is when you invest, and this is when you don't. But I do feel like being a consultant has helped me hone those skills because I am jumping around to different teams, and I'm recognizing they didn't do this thing. Maybe they didn't address this or invest in it, and it's working for them. There are some oddities. Like you said, maybe I'll ask, "Why is this? It seems a little funky. What's the history?" And they'll be like, "Yeah, it was built in a hurry, but it works. And so there hasn't been any churn. We don't have any issues with it, so we have just left it." And that has helped reinforce the idea that just because something could be improved doesn't mean it's worthwhile to improve it. Circling back to your original quest where you are looking to improve the process for merging and ensuring that CI stays green, I do like that you highlighted the fact that we do need to just be able to override settings. So that's something that has happened recently this week for me and my client work where we have had PRs that didn't have a green build because we have some flaky tests that we are actively working on. But we recognize that they're flaky, and we don't want that to block us. I'm still shipping work. So I really appreciate the consideration where we want to optimize so that everyone has an easy merging experience. We know things are green. It's trustworthy. But then we also have the ability to still say, "No, I am confident that I know what I'm doing here, and I want to merge it anyways, but thank you for the warning." CHRIS: And the constant pendulum swing of over-correcting in various directions I've experienced that. And as you said, in the back of my mind, I'm like, oh, I know that this setting I'm going to need a way to turn this setting off. So I want to make sure that, most importantly, I'm not the only one on the team who can turn that off because the day that I am away on vacation and the build is broken, and we have a critical bug that we need to fix, somebody else needs to be able to do that. So that's sort of the story in my head. At the same time, though, I've worked on so many teams where they're like, oh yeah, the build has been broken for seven weeks. We have a ticket in the backlog to fix that. And it's like, no, the build has to not be broken for that long. And so I agree with what you were saying of consulting has so usefully helped me hone where I fall on these various spectrums. But I do worry that I'm just constantly over-correcting in one direction or the other. I'm never actually at an optimum. I am just constantly whatever the most recent thing was, which is really impacting my thinking on this. And I try to not do that, but it's hard. STEPH: Oh yeah. I'm totally biased towards my most recent experiences, and whatever has caused me the most pain or success recently. I'm definitely skewed in that direction. CHRIS: Yeah, I definitely have the recency bias, and I try to have a holistic view of all of the things I've seen. There's actually a particular one that I don't want to pat myself on the back for because it's not a good thing. But currently, our test suite, when it runs, there's just a bunch of noise. There's a bunch of other stuff that gets printed out, like a bunch of it. And I'm reminded of a tweet from Kevin Newton, a friend of the show, and I just pulled it up here. 
"Oh, the lengths I will go to avoid warnings in my terminal, especially in the middle of my green dots. Don't touch my dots." It's a beautiful beauty. He actually has a handful about the green dots. And I feel this feel. When I run my test suite, I just want a sea of green dots. That's all I want to see. But right now, our test suite is just noise. It's so much noise. And I am very proud of...I feel like this is a growth moment for me where I've been like, you know what? That is not the thing to fix today. We can deal with some noise amongst the green dots for now. Someday, I'm just going to lose it, and I'm going to fix it, and it's going to come back to green dots. [chuckles] STEPH: That sounds like such a wonderful children's book or Dr. Seuss. Oh, the importance of green dots or, oh, the places green dots will take you. CHRIS: Don't touch my dots. [laughter] STEPH: Okay. Maybe a slightly aggressive Dr. Seuss, but I still really like it. CHRIS: A little more, yeah. STEPH: On that note of our love of green dots, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris talks feature flags featuring Flipper (Say that 3x fast!), and Steph talks reducing stress by a) having a work shutdown ritual and b) the fact that thoughtbot is experimenting with half-day Fridays. (Fri-yay?) Flipper (https://featureflags.io/2016/04/08/flipper-a-feature-flipper-feature-toggle-library/) Drastically Reduce Stress with a Work Shutdown Ritual (https://www.calnewport.com/blog/2009/06/08/drastically-reduce-stress-with-a-work-shutdown-ritual/) Iceland's Journey to a Short Working Week (https://autonomy.work/wp-content/uploads/2021/06/ICELAND_4DW.pdf) Burnout: The Secret to Unlocking the Stress Cycle (https://www.burnoutbook.net/) Transcript: STEPH: Hey, do you know that we could have an in-person recording at the end of October? CHRIS: I do. Yes, I'm planning. That is in the back of my head. I guess I hadn't said that to you yet. But I'm glad that we have separately had the same conversation, and we've got to figure that out, although I don't know how to do noise cancellation and whatnot in the room. [laughs] How do we...we'll have to figure it out. Like, put a blanket in between us but so that we can see across it, but it absorbs sound in the middle. It's weird. I don't know how to do stuff. Just thinking out loud here. STEPH: We'll just be in the same place but still different rooms. So it'll feel no different. [laughter] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world? CHRIS: Feature flags. Feature flags are an old favorite, but they have become new again in the application that I'm working on. We had a new feature that we were building out. But we assumed correctly that it would be nice to be able to break it apart into smaller pieces and sort of deliver it incrementally but not necessarily want to expose that to our end users. And so, we opted with that ticket to bring along the feature flag system. So we've introduced Flipper, in particular, which is a wonderful gem; it does the job. We're using the ActiveRecord adapter. All that kind of makes sense, happy about that. And so now we have feature flags. But it was one of those mindset shifts where the minute we got feature flags, I was like, yes, okay, everything behind a feature flag. And we've been leaning into that more and more, and it really is so nice and so freeing, and so absolutely loving it so far. STEPH: I'm intrigued. You said, "Everything behind a feature flag." Like, is it really everything or? Yeah, tell me more. CHRIS: Not everything. But at this point, we're still very early on in this application, so there are fundamental facets of the platform, different areas of what users can do. And so the actual stuff that works and is wired up is pretty minimal, but we want to have a little more surface area built out in the app for demo purposes, for conversations that are happening, et cetera. And so, we built out a bunch of new pages to represent functionality. And so there are sidebar links, and then the actual page itself, and routing, and all of the things that are associated with that, and so all of those have come in. I think there are five new top-level nav sections of the platform that are all introduced behind a feature flag right now. And then there's some new functionality within existing pages that we've put behind feature flags. 
So it's not truly every line of code, but it's basically the entry point to all new major features we're putting behind a feature flag. STEPH: Okay, cool. I'm curious. How are you finding that in terms of does it feel manageable? Do you feel like anybody can go into the UI and then turn on feature flags for demos and feel confident that they know what they're turning on and off? CHRIS: We haven't gotten to that self-serve place. At this point, the dev team is managing the feature flags. So on production, we have an internal group configured within Flipper. So we can say, "Ship this feature for all internal users so that we can do testing." So there is a handful of us that all have accounts on production. And then on staging, we have a couple of representative users that we've been just turning everything on for so that we know via staging we can act as that user and then see the application with all of the bells and whistles. Down the road, I think we're going to get more intentional with it, particularly the idea of a demo account. That's something that we want to lean into. And for that user, we'll probably be turning on certain subsets of the feature flags. I think we'll get a little more granular in how we think about that. For now, we're not as detailed in it, but I think that is something that we want to expand as we move forward. STEPH: Nice. Yeah, I was curious because feature flags came up in our recent retro with the client team because we've gotten to a point where our feature flags feel complex enough that it's becoming challenging and not just from the complexity of the feature flags but also from the UI perspective. Where it feels challenging for users to understand how to turn a feature on, exactly what that impacts, and making sure that then they're not changing developer-focused feature flags, so those are the feature flags that we're using to ship a change but then not turn it on until we're ready. It is user-facing, but it's something that should be managed more by developers as to when we turn it on or off. So I was curious to hear that's going for you because that's something that we are looking into. And funnily enough, you asked me recently, "Why aren't y'all using Flipper?" And I didn't have a great answer for you. And that question came up again where we looked at each other, and we're like, okay, we know there was a really good reason we didn't use Flipper when we first had this discussion. But none of us can remember, or at least the people in that conversation couldn't remember. So now we're asking ourselves the question of we've made it this far. Is it time to bring in Flipper or another service? Because we're getting to the point that we're starting to build too much of our own feature flag system. CHRIS: So did you uncover an answer, or are you all just agreeing that the question makes sense? STEPH: Agreeing that the question makes sense. [laughs] CHRIS: That's the first step on a long journey to switching from internal tooling to somebody else managing that for you. STEPH: Yeah, because none of us could remember exactly. But it was funny because I was like, am I just forgetting something here when you asked me that? So I felt validated that others were like, "Oh yeah, I remember that conversation. But I too can't recall why we didn't want to use Flipper in the moment or a similar service." 
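To make the Flipper pieces Chris mentions concrete, a minimal setup with the ActiveRecord adapter, an internal group, and a gated nav section might look like the sketch below; the feature name and the internal? predicate on the user are hypothetical, and newer Flipper versions may offer slightly different configuration helpers.

```ruby
# Gemfile
gem "flipper"
gem "flipper-active_record"

# config/initializers/flipper.rb
require "flipper"
require "flipper/adapters/active_record"

Flipper.configure do |config|
  config.default { Flipper.new(Flipper::Adapters::ActiveRecord.new) }
end

# A group like the "internal" group Chris describes: the block decides
# membership at check time, so flipping it on affects matching users.
Flipper.register(:internal) do |actor|
  actor.respond_to?(:internal?) && actor.internal?
end

# Somewhere in a view or helper, gating a new nav section.
# The actor (current_user) needs a flipper_id for per-actor checks.
if Flipper.enabled?(:new_reporting_nav, current_user)
  # render the new nav item
end

# Turning the feature on for internal users only, then later for everyone:
Flipper.enable_group(:new_reporting_nav, :internal)
Flipper.enable(:new_reporting_nav)
```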
CHRIS: I'll definitely be interested to hear if you do end up trying to migrate off to another system or find a different approach there or if you do stick with the current configuration that you have. Because those projects they're the sort of sneaky ones that it's like, oh, we've been actually relying on this for a while. It's a core part of our infrastructure, and how we do the work, and the process, and how we deploy. That's a lot. And so, to switch that out in-flight becomes really difficult. It's one of those things where the longer it goes on, the harder it is to make that change. But at some point, you sometimes make the decision to make it. So I will be very interested to hear if you do make that decision and then, if so, what that changeover process looks like. STEPH: Yeah, totally. I'll be sure to keep you up to date as we make any progress or decisions around feature flags. CHRIS: But yeah, your questions around management and communication of it that is a thing that's in the back of my mind. We're still early enough in our usage of it, and just broadly, how we're working, we haven't really felt that pain yet, but I expect it's coming very soon. And in particular, we have functionality now that is merged and is part of the codebase but isn't fully deployed or fully released rather. That's probably the correct word. We have not fully released this functionality, and we don't have a system right now for tracking that. So I'm thinking right now we're using Trello for product management. I'm thinking we want another column that is not entirely done but is tracking the feature flags that are currently in flight and just use that as a place to gather communication. Do we feel like this is ready? Let's dial this up to 50%, or let's enable it for this beta group or whatever it is to sort of be able to communicate that. And then ideally, also as a way to track these are the ones that are active right now. You know what? We feel like this one's ready. So do the code change so that we no longer use the feature flag, and then we can actually turn it off. Currently, I feel like I can defer that for a little while, but it is something that's in the back of my mind. And then, of course, I nerd sniped myself, and I was like, all right, how do I grep the codebase for all the feature flags that we're using? Okay. There are a couple of different patterns as to how we're using…You know what? I think I actually need an AST-based parser here, and I need to use the Visitor...You know what? Never mind. Stop it. Stop it. [laughs] It was one of those where I was like...I was doing this not during actual work hours. It was just a question in my mind, and then I started to poke at it. I was like, oh, this could be fun. And then I was like, no, no, no, stop it. You need to go read a book or something. Calm down. STEPH: As part of the optimization around our feature flag system that we've created, we've added a few enhancements, which I think is also one of the reasons we're starting to question how far we want to go in this direction. One of them is we want a very easy way to track what's turned on and what's turned off for an environment. So we have a task that will easily check, or it prints out a really nice list of these are all your flags, and this is the state that they're in. And by using the system that we have, we have one file that represents...well, you mentioned migration because we're migrating from the old system to this new one. 
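As for the nerd-snipe Chris talked himself out of, a plain regex scan gets you most of the way to "which flags does the code still reference?" before an AST parser earns its keep. A throwaway sketch, assuming flags are always checked via Flipper.enabled?(:symbol, ...):

```ruby
#!/usr/bin/env ruby
# Scan the app for feature flag checks and report where each flag is used.

references = Hash.new { |hash, key| hash[key] = [] }

Dir.glob("app/**/*.{rb,erb}") do |path|
  File.foreach(path).with_index(1) do |line, number|
    line.scan(/Flipper\.enabled\?\(\s*:(\w+)/) do |(flag)|
      references[flag] << "#{path}:#{number}"
    end
  end
end

references.sort.each do |flag, locations|
  puts flag
  locations.each { |location| puts "  #{location}" }
end
```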
So it's still a little bit in that space of where we haven't fully moved over. So now, moving over to a third thing like Flipper will be even more interesting because of that. But the current system, we have a file that lists all the feature flags and a really nice description that goes with it, which I know is supported by Flipper and other services as well. But having that one file does make it nice where you can just scan through there and see what's in use. I really think it's the UI and the challenges that the users are facing in understanding what a feature flag does, which ones they should turn off, and which ones they shouldn't touch that got us to the point where we started questioning, okay, we need to improve the UI. But to improve the UI, do we really want to fully embrace our current system and make those improvements, or is now the time that we should consider moving to something else? Because Flipper already has a really nice UI. I think there is a free tier and a paid tier with Flipper, and the paid tier has a UI that ships with it. CHRIS: There's definitely a distinct thing, Flipper Cloud, which is their hosted, enterprise-y solution, and that's the paid offering. But for Flipper, just the core gem, there's also Flipper web, I want to say is what it is, or Flipper UI. And I think it's an engine that you mount within your Rails app and that displays a UI so that you can manage things, add groups and teams. So we're definitely using that. I've got my eye on Flipper Cloud, but I have some fundamental questions around it. I like to keep my data in the system, and so this is an external, other thing. And what's the synchronization? I haven't really even looked into it like that. But I love that Flipper exists within our application. One of the niceties that Flipper Cloud does have is an audit history, which I think is interesting just to understand over time who changed what and for what reasons. It's got the ability to roll back and maintain versions and whatnot. So there are some things in it that definitely look very interesting to me. But for now, the open-source, free version of Flipper plus Flipper UI has been plenty for us. STEPH: That's cool. I didn't know about the audit feature. CHRIS: Yeah. It definitely feels like one of those niceties to have for a more enterprise offering. So I could see myself talking myself into it at some point but not quite yet. On that note though, so feature flags we introduced a week and a half ago, something like that, and we've been leaning into them more and more. But as part of that, or in the back of my mind, I've wanted to go to continuous deployment. So we had our first official retro this week. The project is growing up. We're becoming a lot of things. We used retro to talk about continuous deployment, all of these things that feel very real. Just to highlight it, retro is super important. And the fact that we haven't had one until now is mainly because up till now, it's been primarily myself and another developer. So we've been having essentially one-on-ones but not a more formal retro that involves others. At this point, we now have myself and two other developers that are working on the project, as well as someone who's stepped into the role of product manager. So we now have communication, collaboration. How are we doing the work? How are we shipping features and communicating about bugs and all of that? So now felt like the right time to start having that more formal process.
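The engine Chris is reaching for is the flipper-ui gem, which mounts as a Rack app inside the Rails router, so flag management lives next to the data it controls. A minimal sketch; the /flipper path is arbitrary, and you would want real authorization in front of it:

```ruby
# Gemfile
gem "flipper-ui"

# config/routes.rb
Rails.application.routes.draw do
  # Flipper UI is a Rack app mounted inside the application. Guard it with
  # real authorization (an authenticated constraint, an admin namespace,
  # etc.) before exposing it on production.
  mount Flipper::UI.app(Flipper) => "/flipper"
end
```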
So now, every two weeks, we're going to have a retro, and hopefully, through that, retro will do the magic that retro can do at its best which is help us get better at all the things that we're doing. But yeah, one of the core things in this particular one was talking about moving to continuous deployment. And so I am super excited to get there because I think, much like test-driven development, it's one of those situations where continuous deployment puts a lot of pressure on the development process. Everything that is being merged needs to be ready to go out into production. And honestly, I love that as a constraint because that will change how you build things. It means that you need to be a little more cautious. You can put something behind a feature flag to protect it. You decouple the idea of merging and deploying from releasing. And I like that distinction. I think that's a really meaningful distinction because it makes you think about what's the entry point to this feature within the codebase? And it's, I think, actually really nice to have fewer and more intentional entry points into various bits of functionality such that if you actually want to shut it off in production, you can do that. That's more straightforward. I think it encourages an intentional coupling, maybe not a perfect decoupling but an intentional coupling within the system. So I'm very excited to explore it. I think feature flags are going to be critical for it, and I think also observability, and monitoring, and logging, and all those things. We need to get really good at them so that if anything does go wrong when we just merge and deploy, we want to know if anything goes wrong as quickly as possible. But overall, I'm super excited about all of the other niceties that fall out of it. STEPH: [singing] I wanna know what's turned on, and I want you to show me. Is that the song you're singing to Flipper? [laughs] CHRIS: [laughs] STEPH: Sorry, friends. I just had to go there. CHRIS: That was just in your head. You had that, and you needed to get it out. I appreciate it. [laughter] Again, I got Flipper UI, so that's not the question I'm asking. I think that's the question you have in your heart. STEPH: [laughs] Mid-roll Ad And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API. With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city. The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby. 
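That "fewer, more intentional entry points" idea is easiest to see in code. A hypothetical sketch (the controller, route, and flag name are all invented): a section that has been merged and deployed, but not yet released, simply does not exist for users until the flag is enabled.

```ruby
# app/controllers/reports_controller.rb
class ReportsController < ApplicationController
  before_action :require_reporting_flag

  def index
    @reports = current_user.reports
  end

  private

  # The single gate for this feature. With the flag off, the merged and
  # deployed code is unreachable, which is what lets merging and deploying
  # be decoupled from releasing.
  def require_reporting_flag
    redirect_to root_path unless Flipper.enabled?(:new_reporting, current_user)
  end
end
```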
STEPH: That's funny about continuous deployment adding pressure to the development process because you're absolutely right. But I see it as such a positive improvement that I don't really think about the pressure that it's adding. And I just think, yes, this is awesome, and I want this to happen, even if there are steps that we have to take in that direction. It dawned on me that what you said is very true, but I've just never really thought about it from that perspective about the pressure. Because I think the thing that does add more pressure for me is figuring out what can I deploy, or do I need to cherry-pick commits? What does that look like? And going through that whole cycle of stress is more stressful to me than figuring out how do we get to continuous deployment and making sure that everything is in a safe space to be deployed. CHRIS: That's the dream. I'm going to see if I can live it. I'll let you know how it goes. But yeah, that's a bit of what's up in my world. What else is going on in your world other than some lovely singing? STEPH: Oh, there's always lots of singing. It's been an interesting week. It's been a mix of some hiring work. Specifically, we are helping our client team build their development team. So we have been helping them implement a hiring process and then also going through technical interviews and then going through different stages of that interview process. And that's been really nice. I haven't done that specifically for a client team before, where I helped them build a hiring pipeline from scratch and then also conduct those interviews. And one thing that stood out to me is that rotations are really important to me, and specifically that we don't ask for volunteers. So as we were having candidates come through and they were ready to schedule an interview, we were reaching out to the rest of the development team and saying, "Hey, we have this person. They're going to be scheduled at this time. Who's available? Who's interested? I'm looking for volunteers." And that puts pressure on people, especially someone who may be more empathetic and feel the need to volunteer. So then you can end up having some people volunteer more than others. So we've established a rotation to make sure that doesn't happen, and people are assigned as it becomes their turn to conduct an interview. So it's been a lot of fun to refine that process and essentially make it easier, so the rest of the development team doesn't have to think about the hiring but still has an easy way of tapping someone to say, "Hey, it's your turn to run an interview." The other thing I've been working on is figuring out how to measure an experiment. So we at thoughtbot are running an experiment where we're looking to address some of the concerns around sustainability and people feeling burned out. And so we have introduced half-day Fridays, more specifically 3.5-hour Fridays as our half-day Fridays, just to help everybody be certain about what a half-day looks like. And then also, you can choose your half-day. Everybody works different schedules, and we're across different time zones, so just to make sure it's really clear for folks and that they understand that they don't need to work more than those hours, and then they should have that additional downtime. And that's been amazing. This is the second Friday of the experiment, and we're doing this for nine Fridays straight. And one of the questions that came up was, well, how do we know we did a good thing?
How do we know that we helped people in terms of sustainability or addressing some of the feelings that they're having around burnout? And so I've collaborated with a couple of other thoughtboters to think through of a way to measure it. It turns out helping someone measure their wellness is incredibly complex. And so we went for a fairly simple approach where we're using an anonymous survey with a number of questions. And those questions aren't really meant to stand up to scientific scrutiny but more to figure out how the team is feeling at the time that they fill out the survey and then also to understand how the reduced weekly hours have impacted their schedule. And are people working extra hours to then accommodate the fact that we now have these half Fridays? So do you feel pressured that because you can't work a full day on Friday that you are now working an extra hour or two Monday through Thursday to accommodate that time off? So that survey just went out today. And one of the really interesting parts (I just haven't had to create content for a survey in a while.) was making sure that I'm not introducing leading questions or phrasing things in a very positive or negative light since that is a bias that then people will pick up on. So instead of saying, "I find it easy to focus at work," and then having like a multiple choice of true, always, never, that kind of thing, instead rephrasing the question to be, "Are you able to focus during work hours?" And then you have a scale there. Or instead of asking someone how much energy they have, maybe it's something like, "Do you experience fatigue during the day?" Or instead of asking someone, "Are you stressed at work?" because that can have a more negative connotation. It may lead someone to feel more negatively as they are assessing that question. Then you can say, "How do you feel when you're at work?" And then you can provide those answers of I'm stressed, slightly stressed, neutral, slightly relaxed, and relaxed. So it generated some interesting conversations around the importance of how we phrase questions and how we collect feedback. And I really enjoyed that process, and I'm really looking forward to seeing what folks have to say. And we're going to have three surveys total. So we have one that's early on in the experiment since we're only two Fridays in. We'll have one middle experiment survey go out, and then we'll have one at the end once we're done. And then hopefully, everybody's responses will then help us understand how the experiment went and then make a decision going forward. I'll be honest; I'm really hoping that this becomes a trend and something that we stick with. It is a professional goal of mine to slowly reduce the hours that I work each week or quickly; it doesn't have to be slowly. But I really like the four-day workweek. It's something that I haven't done, but I've been reading about it a fair amount lately. I feel like I've been seeing more studies conducted recently becoming published, and it's just very interesting to me. I had some similar concerns of how am I still going to be productive? My to-do list hasn't changed, but my hours are changing. So how am I still going to get everything done? And does it make sense for me to still get paid the same amount of money if I'm only working four out of the five days? And I had lots of questions around that, and the studies have been very enlightening and very positive in the outcome of a reduced workweek, not just for the individuals but for the companies as well. 
CHRIS: It's such an interesting space and exploration. The way that you're framing the survey sounds really great. It sounds like you're trying to be really intentional around the questions that you're asking and not being leading and whatnot. That said, it is one of the historically hard problems trying to quantify this and trying to actually boil it down. And there are so many different axes even that you're measuring on. Is it just increased employee happiness? Is it retention that you're talking about? Is it overall revenue? There are so many different things, and it's very tricky. I'm super interested to hear the results when you get those. So you're doing what sounds like more of a qualitative study like, how are you feeling? As opposed to a more quantitative sort of thing, is that right? STEPH: Yes, it's more in the realm of how are you feeling? And are you working extra hours, or are you truly taking the time off? CHRIS: Yeah, I think it's really hard to take something like this and try and get it into the quantitative space, even though like, oh yeah, if we could have a number, if it used to be two and now it's four, fantastic. We've doubled whatever that measure is. I don't know what the unit would be on this arbitrary number I made up. But again, that's the hard thing and probably not feasible at all. And so it makes sense the approach that you're taking. But it's super difficult. So I'm very interested to hear how that goes. More generally, the four-day workweek thing is such a nice idea. We should do that more. I'm trying to think how long I did that. So during the period that I was working freelance, I think there were probably at least five months where I did just a true four-day workweek. Fridays were my own. It was fantastic. Granted, I recorded the podcast with you. But that day was mine to shape as I wanted. And I found it was a really nice decompression period having that for a number of weeks in a row. And just getting to take care of personal stuff that I hadn't been and just having that extra little bit of space and time. And it really was wonderful. Now I'm working full five days a week, and my Fridays aren't even investment days, so I don't know what I'm doing over here. But I agree. I really like that idea, and I think it's a wonderful thing. And it's, I don't know, sort of the promise of this whole capitalism adventure we're supposed to go on, increasing productivity. And wasn't this the promise the whole time, everybody, so I am intrigued to see it being explored more, to see it being discussed. And what you're talking about of it's not just good for the employees, but it's also great for the companies. You're getting people that are more engaged on the days that they're working, which feels very true to me. Like, on a great day, I can do some amazing work. On a terrible day, I can do mediocre to bad work. It is totally possible for me to do something that is actively detrimental. Like, I introduce a bug that is going to impact a bunch of customers. And the remediation of that is going to take many more hours. That is totally a realistic thing. I think we often think of productivity in terms of are you at zero or some amount more than zero? But there is definitely another side of that. And so the cost of being not at your best is extremely high in my mind. And so anything we can do to improve that. STEPH: There's a recent study from a non-profit company called Autonomy that published some research called Going Public: Iceland's Journey to a Shorter Working Week. 
It's very interesting. And a number of people in my social circle have shared it. And that's one of the reasons that I came across it. And they commented in there that one of the reasons...I hope I'm getting this right, but we'll link to it in case I've gotten it a bit wrong. But one of the reasons that Iceland was interested or open to this idea of moving workers to a shorter workweek is because they were struggling with productivity, where people were working a lot of hours, but it still felt like their productivity was dropping. So then Autonomy ran this study to help figure out are there ways to improve productivity? Will shortening a workweek actually lead to higher productivity? And there was a statement in there that I really liked where it talks about how the more hours we work, the more we're actually lowering our per-hour productivity, which rings so true for me. Because I am one of those individuals who is very stubborn, and so if I'm stuck on something, I will put so many hours into trying to figure it out. But at some point, I have to just walk away, and if I do, I will solve it that much faster. But if I just try to use hours as my way to chip away at a problem, then that's not going to solve it. And my ability to solve that problem takes exponentially more time than if I had just walked away and then come back to the problem fresh and engaged. And in some of the case studies, I admired the way that they tackled the problem. They would essentially pay the company so the company could reduce the hours for certain employees and then run the experiment. So if they reduced employees to, say, 32 hours, but the company didn't actually want to stop working at 32 hours and wanted to keep going, then they brought in other people to work the remaining eight hours. Then as part of that study, they would pay the company to help them stay at their current level of productivity or current level of hours. This way, they could conduct the study. And I thought that was a really neat idea. I do still have lots of questions around the approach itself, because how do you reduce your to-do list just because you dropped to a four-day workweek? Essentially, you have to just say less stuff gets done. Or, as these case studies promise, they're saying you're actually going to be more productive. So you will still continue to get a lot of your work done. I'm curious about that. I'd like to track my own productivity and see if I feel similarly. And then also, who is this for? Is this for everybody? Does everybody get to move to a four-day workweek? Is this for certain companies? Is it for certain jobs? Ideally, this is for everybody because there are so many health benefits to this, but I'm just intrigued as to who this is for, who it impacts, how can we make it available for everyone? And is the dream real that I can work four days a week and still feel as productive, if not more productive, and healthier and happier, as I do when working five days a week? Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat.
With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: I remember there was an extended period where working remote was this unique benefit that some organizations had. They had adopted that mode. They were async, and remote, and all of these wonderful things. And it became this really interesting selling point for those companies. Now the pandemic obviously pushed public opinion and everything on that in a pretty significant way such that it's a much more common thing. And so, as a result, I think it's less of a differentiator now. It used to be a way to help with recruiting. I wonder if there are organizations that are willing to take this, try it out, see that they are still close to as productive. But if it means that hiring is twice as easy, that is absolutely...especially if it is able to double your ability to hire, that is incredibly valuable or retention similarly. If you can increase retention or if you can make it easier to hire, the value of that is so, so high. And it's interesting in my mind because there's sort of a gold rush on that. That's only true for as long as a four-day workweek is a unique benefit of working at the organization. If this is actually the direction that everything's going and eventually everyone's going to settle to that, then if you wait too long to get there, then you're going to miss all the benefits. You're going to miss that particular benefit of it. And so I do wonder, would it be advantageous to organizations...I'm thinking about this now. Maybe this is the thing I have to do. But would it be advantageous to be that organization as early on as possible and try to get ahead of the curve and use that to hire more easily, retain more easily? Now that I say it all out loud, I'm sold. All right. I got to do this. STEPH: Yeah, I think that's a great comparison of where people are going to start to look for those types of benefits. And so, if you are one of the early adopters and you have the four-day workweek or a reduced workweek in general, then people will gravitate towards that benefit. And it's something that people can use to really help with hiring and retention. And yeah, I love it. You are CTO. So you have influence within your company that you could push for the four-day workweek if you think that's what you want to do. And I would be really intrigued to hear how that goes and how you feel if you...well, you've done it before where you've worked four days a week. So applying that to your current situation, how does that feel? CHRIS: Now you're actually holding me accountable to the things that I randomly said in passing. But it's interesting. So we're so early stage, and there's so much small work to do. 
There's all…oh, got to set up a website. We've got to do this. We've got to build that integration. There's just kind of scrambling to be done. And so there's a certain version in my mind that maybe we're in a period of time where additional hours are actually useful. There's a cost to them. Let's be clear about that. And so how long that will remain true, I'm not sure. I could see a point perhaps down the road where we achieve a little bit closer to steady-state maybe, who knows? It depends on how fast growth is and et cetera, a lot of other things. So I'm not sure that I would actually lead with this experiment myself, given where the organization is at right now. But I could see an organization that's at a little bit more of a steady-state, that's growing more incrementally, that is trying to think really hard about things like hiring and retention. If those were bigger questions in my mind, then I think I would be considering this more pointedly. But for now, I'm like, I kind of just got to do a bunch of stuff. And so my brain is telling me a different story, but it is interesting. I want to interrogate that and be like, brain, why is that the story you have there, huh? Huh? STEPH: I really appreciate what you're saying, though, because that makes sense to me. I understand when you are in that earlier stage, there's enough to do that that feels correct. Versus that added benefit of having a reduced workweek does benefit or could benefit larger companies who are looking to hire more heavily, or they're also concerned about retention or just helping their people address feelings of burnout. So I really appreciate that perspective because that also rings true. So along this whole conversation around wellness and how we can help people work more sustainable hours, there's a particular book that I've read that I've been really excited to share and chat with you about. It's called Burnout: The Secret to Unlocking the Stress Cycle. It's written by two sisters, Emily and Amelia Nagoski. And they really talk through the impact that stress has on us and then ways to work through that. And specifically, they talk about completing the stress cycle. And I found this incredibly useful for me because I have had weeks where I have just worked hard Monday through Friday. I've gotten to the end of my day Friday, and I'm like, great, I'm done. I've made it. I can just relax. And I walk away from work, and I can't relax. And I'm just like, I feel sick. I feel not good. Like, I thought I would walk away from work, and I would just suddenly feel this halo of relaxation, and everything would be wonderful. But instead, I just feel a bit ill, and I've never understood that until I was reading their book about completing the stress cycle. Have you ever had moments like that? CHRIS: It has definitely happened to me at various points, yes. STEPH: That makes me feel better because I haven't really chatted about this with someone. So until I read this book and I was like, oh, maybe this is a thing, and it's not just me, and this is something that people are experiencing. So to speak more about completing the stress cycle, they really highlight that stress and feelings, capital F feelings, can cause physiological symptoms. And so it's not just something that we are mentally processing, but we are physically processing the stress that we feel. And there's a really big difference between stressors and stress. So a stressor could be something like an unmeetable deadline. It could be family. It could be money concerns. 
It could be your morning commute, anything that increases your stress level. And during that, there's a very physical process that happens to your body anytime there's a perceived threat. And it's really helpful to us because it's frankly what triggers our fight, flight, or freeze response. And our bodies receive a rush of adrenaline and cortisol, which essentially, if we're using that flight response, that's going to help us run. And a number of the processes in our system will essentially go into a state of hibernation because everything in our body is very focused on helping us run or do the thing that we think is going to save our life in that moment. The problem is our body doesn't know the difference between what's more of a mental threat versus what is a truly physical threat. So this is the difference between your stress and your stressors. So in more of a physical threat, if there's a lion that you are running from, that is the stressor, but then the stress is everything that you still feel after you have run from that lion. So you encounter a lion, you run. You make it back to your group of people where you are safe, and you celebrate, and you dance, and you hug. And that is completing the stress cycle because you are essentially processing all of that stress. And you are telling your body in a body-focused language that I am safe now, and everything is fine. So you can move back, and anything that was in a hibernation state, all of that dump of adrenaline and cortisol can be worked out of your system, and everything can go back to a normal state. Most of us aren't encountering lions, but we do encounter jerks in meetings or really stressful commutes. And whenever we have survived that meeting, or we've gotten through our commute to the other side, we don't have that moment of celebration where we really let our body know that hey, we've made it through that moment of stress, and we are away from that stressor, and we can actually process everything. So if you're interested in this, the book's really great. It talks about ways that you can process that stress and how important it is to do so. Otherwise, it will literally build up in your system, and it can make you sick. And it will manifest in ways that will let us know that we haven't dealt with that stress. And one of the top methods that they recommend is exercise and movement. That's a really great way to let your body know that you are no longer in an unsafe state, and your body can start to relax. There's also a lot of other great ways. Art is a really big one. It could be hugging someone. It could be calling someone that you love. There are a number of ways that you can process it. But I hadn't recognized how important it is that once you have removed yourself from a stressor, that doesn't necessarily just mean you're done, and you can relax. You actually have to go through that physical process, and then you can relax. So I started incorporating that more into my day that when I'm done with work, I always find something to do, and it's typically to go for a walk, or it's go for a run. And I have found that now I really haven't felt that ill-feeling where I'm trying to relax, but I just feel sick. Saying that out loud, I feel like I'm a mess on Fridays. [chuckles] CHRIS: I feel like you're human. It was interesting when you asked the question at the beginning. You were like, "Is this a thing that other people experience?" And my answer was certainly, yes; I have experienced this. 
I think there's something about me that I think is useful where I don't think I'm special at all on any axis whatsoever. And so whenever there's something that's going on, I'm like, I assume that this is just normal human behavior, which is useful because most of the time it is. And this is the sort of thing where if I'm having a negative experience, I will look to the external world to be like, I'm sure other people have experienced this, and let me pull that in. And I've found that really useful for myself to just be like, I'm not special. There's nothing particularly special about me. So let me go look from the entirety of the internet where people have almost certainly talked about this. And I've not read the book that you're describing here, but it does sound like it does a great job of describing this. There is a blog post that I found that has stayed in the back of my mind and informed a little bit of my day-to-day approach to this sort of thing which is a blog post by Cal Newport, who I think at this point we've mentioned him a handful of times on the show. But the title of the post is Drastically Reduce Stress with a Work Shutdown Ritual. And it's this very interesting little post where he talks about at the end of your day; you want to close the book on it. I think this is especially pointed now that many of us are working from home. For me, this is a new thing. And so, I've been very intentional with trying to put walks at the beginning and end of my day. But in this particular blog post, he describes a routine that he does where he tidies things up and makes his list for the next day. And then he has a particular phrase that he says, which is "schedule shut down, complete." And it's a sort of nonsense phrase. It doesn't even quite make sense grammatically, but it's his phrase that he internalized, and somehow this became his almost mantra for the end of the day. And now when he does it, that's like his all right, okay, turned off the brain, and now I can walk away. I know that I've said the phrase, and I only say the phrase when I have properly set things up. And so it's this weird structure that he's built in his mind. But it totally works to quiet those voices that are like, yeah, but what about…Do we think about…Do we complete…And he's got now this magic phrase that he can say. And so I've really loved that. For myself, I haven't gotten quite to that level, but I've definitely built the here's how I wind down at the end of the day. Here's what I do with lists and what I do so that I can ideally walk away comfortably. Again, this is one of those situations where I sound like I know what I'm doing or have my act together. This is aspirational me. Day-to-day me is a hot mess like everybody else. [laughs] And this is just what I...when I do this, I feel better. Most of the time, I don't do this because I forget it, or because I'm busy, or because I'm stressed, [chuckles], and so I don't do the thing that reduces stress, you know, human stuff. But I really enjoyed that post. STEPH: I haven't heard that one. I like a lot of Cal Newport's work, but I haven't read that particular blog post. Yeah, I think the idea of completing the stress cycle has helped me tremendously because by giving it a name like completing the stress cycle has been really helpful for me because working out is important to me. It's something that I enjoy, but it's also one of those things that's easy to get bumped. It is part of my wellness routine. 
And so, if I'm really busy, then I will bump it from the list. And then it's something that then doesn't get addressed. But recognizing that this is also important to my productivity, not to just this general idea of wellness, has really helped me recenter how important this is and to make sure that I recognize hey, it's been a stressful day. I need to get up and move. That is a very important part of my day. It is not just part of an exercise routine, but this is something that I need to do to close out my day to then make sure I have a great day tomorrow. So bringing it back, it's been a week that's been filled with a lot of discussions around burnout and then ways that we can measure it and then also address it. And I've really enjoyed reading this book. So I'll be sure to drop a link in the show notes. On that note, shall we wrap up? CHRIS: Schedule shut down, complete. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeeeeee!!! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
You know what really grinds Chris' gears? (Spoiler Alert: It's Single-Page Applications.) Steph needs some consulting help. So much to do, so little time. Sarah Drasner tweet about shared element transitions (https://twitter.com/sarah_edo/status/1431282994581413893) Article about Page Transitions API (https://developer.chrome.com/blog/shared-element-transitions-for-spas/) Svelte Crossfade layout demo (https://svelte.dev/repl/a7af336f906c4caab3936972754e4d6f?version=3.23.2) Svelte Crossfade tutorial page (https://svelte.dev/tutorial/deferred-transitions) (Note - click "Show Me" on the bottom left) Transcript: CHRIS: I have restarted my recording, and we are now recording. And we are off on the golden roads. Isn't software fun? STEPH: Podcast battle. Here we go! Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. CHRIS: And I'm Chris Toomey. STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, happy Friday. How's your week been? CHRIS: Happy Friday to you as well. My week's been good with the exception of right before we started this recording, I had one of those experiences. You know the thing where software is bad and software is just terrible overall? I had one of those. And very briefly to describe it, I started recording, but I could hear some feedback in my headphones. So I was like, oh no, is that feedback going to show up on the final recording? Which I really hope it doesn't. Spoiler alert - listener, if I sound off, sorry about that. But so I stopped recording and then I went to go listen to the file, and I have our audio software configured to record directly to the desktop. And it does that normally quite well. But for some reason, the file wasn't there. But I remember this recently because I ran into it another time. For some reason, this is Finder failing me. So the thing that shows me the files in a graphical format, at least on my operating system. Although I think it also messes up in the terminal maybe. That feels like it shouldn't be true, but maybe it is. Anyway, I had to kill all Finder from the terminal to aggressively restart that process. And then suddenly, Finder was like, oh yeah, there's totally files there, absolutely. They were there the whole time. Why do you even ask? And I know that state management is a hard problem, I am aware. I have felt this pain. I have been the person who has introduced some bugs related to it, but that's not where I want to experience it. Finder is one of those applications that I want to just implicitly trust, never question whether or not it's just sneakily telling me that there are files that are not there or vice versa. So anyway, software. STEPH: I'm worried for your OS. I feel like there's a theme lately [chuckles] in the struggles of your computer. CHRIS: On a related note, I had to turn off transparency in my terminal because it was making my computer get very hot. [chuckles] STEPH: Oh no, you're not a hacker any more. CHRIS: I'm not. [chuckles] I just have a weird screen that's just dark. And jellybeans is my color scheme, so there's that going on. That's in Vim specifically. Pure is my prompt. That's a lovely little prompt. But lots of Day-Glo colors on just a black background, not the cool hacker transparency. I have lost some street cred. STEPH: What is your prompt? What did you say it is? CHRIS: Pure. STEPH: Pure, I don't know that one. 
CHRIS: It is by Sindre Sorhus; I think is his name. That's his Twitter handle, GitHub name. He is a prolific open-source creator in the Node world, particularly. But he created this...I think it's a Bash and a Zsh prompt. It might be for others as well. It's got a bunch of features. It's pretty fast. It's minimal. It got me to stop messing around with my prompt, which was mostly what I was going for. And it has a nice benefit that occasionally now I'll be pairing with someone, and I'll be like, "Your prompt looks like my prompt. Everything is familiar. This is great." STEPH: Well, if you get back in the waters of messing around with your prompts again, I'm using Starship. And I hadn't heard of Pure before, but I really like Starship. That's been my new favorite. CHRIS: Wow. STEPH: Wow. CHRIS: I mean, on the one hand… STEPH: You're welcome. [laughs] CHRIS: On the one hand, thank you. On the other hand, again, let me lead in with the goal was to stop messing around with my prompt. So you're like, oh, cool. Here's another prompt for you, though. [chuckles] STEPH: [laughs] But my goal is to nerd snipe you into trying more things because it's fun. CHRIS: I don't know if you know this, but I am impervious to nerd sniping. STEPH: [laughs] CHRIS: So try as you might, I shall remain steady in my course of action. STEPH: Are we playing two truths and a lie? Is that what we're doing today? [laughs] CHRIS: Nah, just one lie. It's easier. Everybody wins one lie. STEPH: [laughs] CHRIS: But anyway, in other news, we're going to do a segment called this really grinds my gears. That's today's segment, which is much like when I do a good idea, terrible idea. But this is one that I'm sure I've talked about before. But there's been some stuff that I saw moving around on the internet as one does, and it got these ideas back into my head. And it's around the phrase single-page application. I am not a fan of that phrase or SPA as the initialism. Thank you, Edward Loveall, for teaching me the difference between an initialism and an acronym. I really hope I'm getting it right, by the way, [laughs] SPA as people call them these days. I feel weird because of how much I care about this thing, how much I care about this idea, and how much whenever I hear this acronym, I get a little bit unhappy. And so there's a part of it that's I really do think our words shape our thinking. And I think single-page application has some deeply problematic ideas. Most notably, I think one of the most important things about building web applications is the URL. And those are different pages, at least in my head. I don't know of a different way to think about this. But if you are not emphasizing the URL and the fact that the URL is a way to address different pages or resources within your application, then you are throwing away one of the greatest advancements that humankind has made, in my mind. I care a lot about URLs; it turns out. And it's not inherent to an SPA that you will not be thinking about URLs. But again, in that idea that our words shape our thinking, by calling it an SPA, by leaning into that idea, I think you are starting down a path that leads to bad outcomes. I'm going to pause there because I'm getting kind of ranty. I got more to say on the topic. But what do you think? STEPH: Yeah, these are hot takes. I'm into it. I'm pretty sure that I know why URLs are so important to you and more of your feelings around why they're important. 
But would you dive in a bit deeper as to why you really cherish URLs, and why they're so important, and why they're one of the greatest advancements of humanity? CHRIS: [laughs] It sounds lofty when you say it back to me, but yeah. It's interesting that as you put into a question, it is a little bit hard to name. So there are certain aspects that are somewhat obvious. I love the idea that I can bookmark or share a given resource or representation of a resource very simply. Like the URL, it's this known thing. We can put hyperlinks in a document. It's this shared way to communicate, frankly, very complex things. And when I think of a URL, it's not just the domain and the path, but it's also any query parameters. So if you imagine faceted search on a website, you can be like, oh, filter down to these and only ones that are more than $10, and only ones that have a future start date and all those kinds of nuance. If you serialize that into the URL as part of the query param, then that even more nuanced view of this resource is shareable is bookmarkable is revisitable. I end up making Alfred Workflows that take advantage of the fact that, like, oh, I can look at this URL scheme, and I can see where different parts are interpolated. And so I can navigate directly to any given thing so fast. And that's deeply valuable, and it just falls naturally out of the idea that we have URLs. And so to not deeply embrace that, to not really wrap your arms around it and give that idea a big hug feels weird to me. STEPH: Yeah, I agree. I remember we've had this conversation in the past, and it really frustrates me when I can't share specific resources with folks because I don't know how to link to it. So then I can send you a link to the application itself to the top URL. But then I have to tell you how to find the information that I thought was really helpful. And that feels like a step backward. CHRIS: Yeah. That ability to say, "Follow this link, and then it will be obvious," versus "Go to this page, click on this thing, click on the dropdown, click on this other thing." Like, that's just a fundamentally different experience. So one of the things that I saw that got me thinking about this was I saw folks referring to single-page applications but then contrasting them with MPAs, which are multiple-page applications. STEPH: So the normal application? [laughs] CHRIS: And I was like, whoa, whoa, everybody. You mean like a website or a web app? As much as I was angry at the first initialism, this second one's really getting me going. But it really does speak to what are we doing? What are we trying to build? And as with anything, you could treat this as a binary as just like there are two options. There are either websites which, yeah, those have got a bunch of URLs, and that's all the stuff. And then there are web apps, and they're different. And it's a bundle of JavaScript that comes down, boots up on the client, and then it's an app thing. And who cares about URLs? I think very few people would actually fall in that camp. So I don't really believe that there is a dichotomy here. I think, as always, it's a continuum; it's a spectrum. But leaning into the nomenclature of single-page application, I think pushes you more towards that latter end of the spectrum. I think there are other things that fall out of it. 
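The faceted-search example is a nice place to see query params doing that work. A small, hypothetical sketch in Ruby using Rack's helpers; the filter names and URL are made up:

```ruby
require "rack/utils"
require "uri"

# The state of a faceted search, serialized into the URL...
filters = { "min_price" => "10", "starts_after" => "2021-10-01", "sort" => "newest" }
query   = Rack::Utils.build_nested_query(filters)
url     = "https://example.com/search?#{query}"
# => "https://example.com/search?min_price=10&starts_after=2021-10-01&sort=newest"

# ...and parsed back out, so a bookmarked or shared link rebuilds the same view.
Rack::Utils.parse_nested_query(URI(url).query)
# => {"min_price"=>"10", "starts_after"=>"2021-10-01", "sort"=>"newest"}
```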
Like, I believe deeply in having the server know more, have more of the logic, own more of the logic, own more authorization and routing, and all of those things because really great stuff falls out of that. And that one has more of a trade-off, I'd say. But I won't name any names, but there is a multiple billion-dollar company whose website I had to interact with recently. And you land on their page on their marketing site. And then, if you click log in, it navigates you to the application, so a separate domain or a separate subdomain, the application subdomain, and the login page there. And the login page renders, and then I go to fill in my username and password. Like, my mouse makes it all the way to click on the little box or whatever I'm doing if I'm using keyboard things. But I have enough time to actually start to interact with this page. And then suddenly, it rips away, and it actually just renders the authenticated application because it turns out I was already logged in. But behind the scenes, they're doing some JWT dance around that they're checking; oh no, no, you're already logged in, so never mind. We don't need to show you the login page, but I was already on the login page. And my feeling is this sort of brittle UI; this sort of inconsistency erodes my trust in that application, particularly when I'm on the login page. That is a page that matters. I don't believe that they're doing anything fundamentally insecure. But I do have the question in my head now. I'm like, wait, what's going on there, everybody? Is it fine? Was that okay? Or if you see something that you shouldn't see and then suddenly it's ripped away from you, if you see half of a layout that's rendered on a page and then suddenly you see, no, no, no, you actually don't have access to that page, that experience erodes my trust. And so, I would rather wait for the server to fully resolve, determine what's going to happen, and then we get a response that is holistically consistent. You either have access, or you don't, that sort of thing. Give me a loading indicator; give me those sorts of things. I'm fine with that. But don't render half of a layout and then redirect me back away. STEPH: I feel like that's one of the problems with knowing too much because most people are not going to pick up on a lot of the things that you're noticing and caring deeply about where they would just see like, oh, I was logged in and be like, huh, okay, that was a little weird, but I'm in and just continue on. Versus other folks who work very closely to this who may recognize and say, "That was weird." And the fact that you asked me to log in, but then I was already logged in, did you actually log me in correctly? What's happening? And then it makes you nervous. CHRIS: Maybe. Probably. But I wonder…the way you just said that sounds like another dichotomy. And I would say it's probably more of a continuum of an average not terribly tech-savvy user would still have a feeling of huh, that was weird. And that's enough. That's a little tickle in the back of your brain. It's like, huh, that was weird. And if that happens enough times or if you've seen someone who uses an application and uses it consistently, if that application is reasonably fast and somewhat intuitive and consistent, then they can move through it very quickly and very confidently. But if you have an app that half loads and then swaps you to another page and other things like that, it's very hard to move confidently through an application like that. 
I do think you're right in saying that I am over-indexed on this, and I probably care more than the average person, but I do care a lot. I do think one of the reasons that I think this happens is mobile applications came along, and they showed us a different experience that can happen and also desktop apps for some amount of time this was true. But I think iOS apps, in particular really great ones, have super high fidelity interactions. And so you're like, you're looking at a list view, and then you click on the cell for that list view. And there's this animated transition where the title floats up to the top and grows just a little bit. And the icon that was in the corner moves up to the corner, and it gets a little bigger. And it's this animated transition to the detailed view for that item. And then if you go back, it sort of deanimates back down. And that very consistent experience is kind of lovely when you get it right, but it's really, really hard. And people, I think, have tried to bring that to the web, but it's been such a struggle. And it necessitates client-side routing and some other things, or it's probably easiest to do if you have those sorts of technologies at play, but it's been a struggle. I can't think of an application that I think really pulls that off. And I think the trade-offs have been very costly. On the one positive note, there was a tweet that I saw by Sarah Drasner that was talking about smooth and simple page transitions with the shared element transition API. So this is a new API that I think is hoping to bring some of this functionality to the web platform natively so that web applications can provide that higher fidelity experience. Exactly how it'll work whether or not it requires embracing more of the single-page application, client-side routing, et cetera, I'm not sure on that. But it is a glimmer of hope because I think this is one of the things that drives folks in this direction. And if we have a better answer to it, then maybe we can start to rethink the conversation. STEPH: So I think you just said shared element transitions. I don't know what that is. Can you talk more about that? CHRIS: I can try, or I can make a guess. So my understanding is that would be that sort of experience where you have a version of a certain piece of content on the page. And then, as you transition to a new page, that piece of content is still represented on the new page, but perhaps the font size is larger, or it's expanded, or the box around it has grown or something like that. And so on mobile, you'll often see that animate change. On the web, you'll often see the one page is just completely replaced with the other. And so it's a way to have continuity between, say, a detailed view, and then when you click on an item in it, that item sort of grows to become the new page. And now you're on the detail page from the list page prior. There's actually a functionality in Svelte natively for this, which is really fancy; it's called crossfade. And so it allows you to say, "This item in the component hierarchy in the first state of the application is the same as this item in the second state of the application." And then, Svelte will take care of transitioning any of the properties that are necessary between those two. So if you have a small circle that is green, and then in the next state of the application, it's a blue rectangle, it will interpolate between those two colors. It will interpolate the shape and grow and expand it. It will float it to its new location. 
There is a really great version of it in the Svelte tutorial showing a to-do list. And so it's got a list on the left, which is undone things, and a list on the right that is done things. And when you click on something to complete it, it will animate it, sort of fly across to the other list. And if you click on it to uncomplete, it will animate it and fly back. And what's great is within Svelte because they have this crossfade as a native idea; all you need to say is like, "It was on this list, now it's on this list." And as long as it's identifiable, Svelte handles that crossfade and all the animations. So it's that kind of high-fidelity experience that I think we want. And that leads us to somewhat more complex applications, and I totally get that. I want those experiences as well. But I want to ask some questions, and I want to do away with the phrasing single-page application entirely. I don't want to say that anymore. I want to say URLs are one honking good idea. Let's have more of those. And also, just to name it, Inertia is a framework that allows me to build using some of the newer technologies but not have to give up on URLs, give up on server-side logic as the primary thing. So I will continue to shout my deep affection for Inertia in this moment once again. STEPH: Cool. Thanks. That was really helpful. That does sound really neat. So in the ideal world, we have URLs. We also have high fidelity and cool interactions and transitions on our pages. We don't have to give it a fancy name like single-page application or then multi-page application. I do wonder, with our grumpiness or our complaint about the URLs, is that fair to call it grumpy? CHRIS: It's fair to call it grumpy, although you don't need to loop yourself in with me. I'm the grump today. STEPH: [laughs] CHRIS: You're welcome to come along for the ride if you'd like. And I'm trying to find a positive way to talk about it. But yeah, it's my grumpytude. STEPH: Well, I do feel similarly where I really value URLs, and I value the ability to bookmark and share, like you said earlier. And I do wonder if there is a way to still have that even if we don't have the URL. So one of the things that I do is I'll inspect the source code. And if I can find an ID that's for a particular header or section on the page, then I will link someone to a section of that page by then adding the ID into the URL, and that works. It's not always great because then I have to rely on that being there. But it's a fix, it's a workaround. So I wonder if we could still have something like that, that as people are building content that can't be bookmarked or the URL doesn't change explicitly, or reference that content, to add more thoughtful bookmark links, essentially, or add an ID and then add a user-facing link that says, "Hey, if you want to link someone to this content, here you go." And under the hood, it's just an ID. But most people aren't going to know how to do that, so then you're helping people be able to reference content because we're used to URLs, so just thinking outside the box. I wonder if there are ways that we can still bookmark this content, share it with people. But it's okay if the URL isn't the only way that we can bookmark or reference that content. CHRIS: It's interesting that you bring that up, so the anchor being the thing after the hash symbol in the URL. I actually use that a ton as well. I think I built a Chrome extension a while back to try what you're saying of I'll inspect the DOM. 
I did that enough times that I was like, what if the DOM were to just tell me if there were an ID here and I could click on a thing? Some people's blogs...I think the thoughtbot blog has this at this point. All headers are clickable. So they are hyperlinks that append that anchor to the URL. So I wouldn't want to take that and use that functionality as our way to get back to URLs that are addressing resources because that's a way to then navigate even further, which I absolutely love, to a portion of the page. So thinking of Wikipedia, you're on an article, but it's a nice, long article. So you go down to the section, which is a third of the way down the page. And it's, again, a very big page, so you can link directly to that. And when someone opens that in their browser, the browsers know how to do this because it's part of the web platform, and it's wonderful. So we've got domains, we've got paths, we've got anchors, we've got query params. I want to use them all. I want to embrace them. I want that to be top of mind. I want to really think about it and care about that as part of the interface to the application, even though most users, like you said, are not thinking about the shape of a URL. But that addressability of content is a thing that even if people aren't thinking of it as a primary concern, I think they know it when they...it's one of those like, yeah, no, that app's great because I can bookmark anything, and I can get to anything, and I can share stuff with people. And I do like the idea of making the ID-driven anchor deep links into a page more accessible to people because you and I would go into the DOM and slice it out. Your average web user may not be doing that, or that's pretty much impossible to do on mobile, so yes, but only more so in my mind. [laughs] I don't want to take anchors and make them the way we do this. I want to just have all the URL stuff, please. Mid-roll Ad Now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API. With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city. The tech stack of the main Orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby. STEPH: I have a confession from earlier when you were talking about the examples for those transitions. And you were describing where you take an action, and then the page does a certain motion to let you know that new content is coming onto the page and the old content is fading away. And I was like, oh, like a page reload? We're just reimplementing a page reload? [laughs] That's what we have? CHRIS: You have a fancy one, though.
STEPH: Fancy, okay. [laughs] But that felt a little sassy. And then you provided the other really great example with the to-do list. So what are some good examples of a SPA? Do you have any in mind? I think there are some use cases where...so Google Maps, that's the one that comes to mind for me where URLs feel less important. Are there other applications that fit that mold in your mind? CHRIS: Well, so again, it's sort of getting at the nomenclature, and how much does the acronym actually inform what we're thinking about? But taking Google Maps as an example, or Trello is a pretty canonical one in my mind, most people say those are single-page applications. And they are probably in terms of what the tech actually is, but there are other pages in those apps. There's a settings page, and there's a search page, and there's this and that. And there's like the board list in Trello. And so when we think about Trello, there is the board view where you're seeing the lists, and you can move cards, and you can drag and drop and do all the fancy stuff. That is a very rich client-side application that happens to be one page of the Trello web app and that one being higher fidelity, that one being more stateful. Stateful is probably the thing that I would care about more than anything. And so for that page, I would be fine with the portion of the JavaScript that comes down to the client being a larger payload, being more complex, and probably having some client-side state management for that. But fundamentally, I would not want to implement those as a true client-side application, as a true SPA. And I think client-side routing is really the definition point for me on this. So with Trello, I would probably build that as an Inertia-type application. But that one page, the board page, I'd be like, yeah, sorry, this is not going to be the normal Inertia thing. I'm going to have to be hitting JSON endpoints that are specifically built for this page. I'm going to have a Redux store that's local. I'm going to lean into all of that complex state management and do that within the client-side app but not use client-side routing for actual page-level transitions, the same being true for Google Maps. The page where you're looking at the map, and you can do all sorts of stuff, that's a big application. But it is one page within the broader website, if you will. And so, I still wouldn't want client-side routing if I can avoid it because I think that is where I run into the most problems. And that thing I was talking about where I was on the login page for a second, and then I wasn't; I do not like that thing. So if I can avoid that thing, which I have now found a way to avoid it, and I don't feel like I'm trading off on that, I feel like it's just a better experience but still reserving the right to say, this part of the application is so complex. This is our WYSIWYG drag and drop graphical editor thing, cool. That's going to have Redux. That's going to have client-side state management, all that stuff. But at no point does single-page application feel like the right way to describe the thing that we're building because I still want to think about it as holistically part of the full web app. Like the Trello board view is part of the Trello web app. And I want it all to feel the same and move around the same. STEPH: Yeah, that makes sense. And it's funny, as you were mentioning this, I pulled up Google Maps because I definitely only interact with that heavy JavaScript portion, same for Trello.
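A hedged sketch of the split Chris describes for a board-style page: a rich, page-local store fed by JSON endpoints built for that one view, while navigation between pages stays ordinary server-driven routing. The Card shape, the /api/boards endpoint, and the slice names below are hypothetical, using Redux Toolkit only as an illustration of the pattern.

```typescript
import { configureStore, createSlice, createAsyncThunk } from "@reduxjs/toolkit";
import type { PayloadAction } from "@reduxjs/toolkit";

// Hypothetical shape for a Trello-style board; endpoint and field names are
// illustrative, not any real Trello API.
interface Card {
  id: string;
  title: string;
  listId: string;
}

// JSON endpoint built specifically for this one rich page.
const fetchBoard = createAsyncThunk("board/fetch", async (boardId: string) => {
  const res = await fetch(`/api/boards/${boardId}.json`);
  return (await res.json()) as Card[];
});

const boardSlice = createSlice({
  name: "board",
  initialState: { cards: [] as Card[], loading: false },
  reducers: {
    // Optimistic drag-and-drop: move a card between lists locally first,
    // then persist the change separately.
    moveCard(state, action: PayloadAction<{ cardId: string; toListId: string }>) {
      const card = state.cards.find((c) => c.id === action.payload.cardId);
      if (card) card.listId = action.payload.toListId;
    },
  },
  extraReducers: (builder) => {
    builder
      .addCase(fetchBoard.pending, (state) => {
        state.loading = true;
      })
      .addCase(fetchBoard.fulfilled, (state, action) => {
        state.cards = action.payload;
        state.loading = false;
      });
  },
});

// The store lives only on the board page; settings, search, and so on remain
// plain server-rendered pages with their own URLs.
export const store = configureStore({ reducer: { board: boardSlice.reducer } });
export const { moveCard } = boardSlice.actions;
export { fetchBoard };
```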
And I wasn't even thinking about the fact that there are settings. By the way, Google Maps does a lot. I don't use hardly any of this. But you make a great point. There's a lot here that still doesn't need such heavy JavaScript interaction and doesn't really fit that mold of where it needs to be a single-page app or even needs to have that amount of interactivity. And frankly, you may want URLs to be able to go specifically to these pages. CHRIS: That actually is an interesting, perhaps counterpoint to what I'm saying. So if you do have that complex part of one of your applications and you still want URL addressability, maybe you need client-side routing, and so that becomes a really difficult thing to answer in my mind. And I don't necessarily have a great answer for that. I'm also preemptively preparing myself for anyone on the internet that's listening to this and loves the idea of single-page applications and feels like I'm just building a straw man here, and none of what I'm saying is actually real and whatnot. And although I try to...I think we generally try and stay in the positive space of like what's good on the internet. This is a rare case where I'm like, these are things that are not great. And so I think in this particular case, leaning into things that I don't like is the way to properly capture this. And giant JavaScript bundles where the entirety of the application logic comes down in a 15-megabyte download, even if you're on 3G on a train; I don't like that. I don't like it if we have flashes of a layout that can get ripped away because it turns out we actually aren't authorized to view that page, that sort of thing. So there are certain experiences from an end client perspective that I really don't like, and that's mostly what I take issue with. Oh, also, I care deeply about URLs, and if you don't use the URL, then I'm going to be sad. Those are my things. Hopefully, that list is perhaps a better summary of it than like...I don't want it to seem like I'm just coming after SPA as a phrase or a way of thinking because that's not as real of a conversation. But those particular things that I just highlighted don't feel great. And so I would rather build applications that don't have those going on. And so if there's a way to do that that still fits any other mold or is called whatever, but largely what I see called an SPA often has those sorts of edge cases. And I do not like those edge cases. STEPH: Yeah, I like how you're breaking it down where it's less of this whole thing like I can't get on board with any of it. You are focusing on the things that you do have concerns with. So there can be just more interesting, productive conversations around those concerns versus someone feeling like they have to defend their view of the world. I have found that I think I'm a bit unique in this area where when people have a really differing opinion than mine, that gets me really excited because then I want to know. Because if I believe very strongly in something and I just think this is the way and then someone very strongly says like, "No, that's not," I'm like, "Oh yeah. Okay, we should talk because I'm interested in why you would have such a different opinion than mine." And so, I typically find those conversations really interesting. As long as everybody's coming forward to be productive and kind, then I really enjoy those conversations. CHRIS: That is, I think, an interesting frame that you have there.
But I think I'm similar, and hopefully, my reframing there puts it in the way that can be a productive conversation starter as opposed to a person griping on a podcast. But with that said, that's probably enough of me griping on a podcast. [chuckles] So what's up in your world, Steph? STEPH: Oh, there are a couple of things going on. So I am in that pre-vacation chaotic zone where I'm just trying to get everything done. And I heard someone refer to it recently as going into a superman or superwoman mode where you're just trying to do all the things before you go, which is totally unreasonable. So that has been interesting. And the name of the game this week has been delegate, delegate, delegate, and it seems to be going fairly well. [chuckles] So I'm very excited for the downtime that I'm about to have. And some other news, some personal news, Utah, my dog, turns one. I'm very excited. I'm pretty sure we'll have a dog birthday party and everything. It's going to be a thing. I'll share pictures on Twitter, I promise. CHRIS: So he's basically out of the puppy phase then. STEPH: Yeah, the definition for being a puppy seems to be if you're a year or younger, so he will not be an adult. Teenager? I don't know. [laughs] CHRIS: What about according to your lived experience? STEPH: He has calmed down a good bit. CHRIS: Okay, that's good. STEPH: He has gotten so much better. Back when we first got him, I swear I couldn't get 15 minutes of focus where he just needed all the attention. Or it was either constant playtime, or I had to put him in his kennel since we're using that. That was the only way I was really ever getting maker's time. And now he will just lounge on the couch for like an hour or two at a time. It's glorious. And so he has definitely calmed down, and he is maturing, becoming such a big boy. CHRIS: Well, that is wonderful. Astute listeners, if you go back to previous episodes over the past year, you can certainly find little bits of Utah sprinkled throughout, subtle sounds in the background. STEPH: He is definitely an important part of the show. And in some other news, I have a question for you. I'm in need of some consulting help, and I would love to run something by you and get your thoughts. So specifically, the project that I'm working on, we are always in a state where there's too much to do. And even though we have a fairly large team, I want to say there's probably somewhere between 7 and 10 of us. And so, even though we have a fairly...for thoughtbot, that's a large team to have on one project. So even though there's a fair number of us, there's always too much to do. Everything always feels like it's urgent. I can't remember if I've told you this or not, but in fact, we had so many tickets marked as high priority that we had to introduce another status to then indicate they're really, really high, and that is called Picante. [chuckles] CHRIS: Well, the first part of that is complicated; the actual word that you chose, though, fantastic. STEPH: I think that was CTO Joe Ferris. I think he's the one that came up with Picante. So that's a thing that we have, and that really represents like, the app is down. So something major is happening. That's like a PagerDuty alert when we get to that status where people can't access a page or access the application. So there's always a lot to juggle, and it feels a lot like priority whiplash in terms that you are working on something that is important, but then you suddenly get dragged away to something else. 
And then you have to build context on it and get that done. And then you go back to the thing that you're working on. And that's a really draining experience to constantly be in that mode where you're having to pivot from one type of work to the other. And so my question to you (And I'll be happy to fill in some details and answer questions.) is how do you calm things down? When you're in that state where everything feels so urgent and busy, and there's too much to do, how do you start to chip away at calming things down where then you feel like you're in a good state of making progress versus you feel like you're just always putting out fires or adding a band-aid to something? Yeah, that's where I'm at. What thoughts might you have, or what questions do you have? CHRIS: Cool. I'm glad you brought an easy question that I can just very quickly answer, and we'll just run with that. It is frankly...what you're describing is a nuanced outcome of any number of possible inputs. And frankly, some of them may just be like; this is just the nature of the thing. Like, we could talk about adding more people to the project, but the mythical man-month and that idea that you can't just throw additional humans at the work and suddenly have that makes sense because now you have to coordinate between those humans. And there's that wonderful image of two people; there's one line of communication. Three people, suddenly there are a lot more lines of communication. Four people, wow. The exponential increase as you add new people to a network graph, that whole idea. And so I think one of the first questions I would ask is, and again, this is probably not either/or. But if you would try and categorize it, is it just a question of there's just a ton of work to do and we're just not getting it done as quickly as we would want? Or is it that things are broken, that we're having to fix things, that there are constant tweaks and updates, that the system doesn't support the types of changes that we want, so any little thing that we want to do actually takes longer? Is it the system resisting, or is it just that there's too much to do? If you were to try and put it into one camp or the other. STEPH: It is both, my friend. It is both of those camps. [chuckles] CHRIS: Cool. That makes it way easier. STEPH: Totally. [laughs] To add some more context to that, it is both where the system is resistant to change. So we are trying to make improvements as we go but then also being respectful of the fact if it is something that we need to move quickly on, it doesn't feel great where you never really get to go back and address the system in a way that feels like it's going to help you later. But then, frankly, it's one of those tools that we can use. So if we are in the state where there's too much to do, and the system is resisting us, we can continue to punt on that, and we can address things as we go. But then, at some point, as we keep having work that has slowed down because we haven't addressed the underlying issues, then we can start to have that conversation around okay; we've done this twice now. This is the third time that this is going to take a lot longer than it should because we haven't really fixed this. Now we should talk about slowing things down so we can address this underlying issue first and then, from now on, pay the tax upfront. So from now on, it's going to be easier, but then we pay that tax now. So it is a helpful tool. It's something that we can essentially defer that tax to a later point. 
But then we just have to have those conversations later on when things are painful. Or it often leads to scope creep; that's another way that that creeps up. So we take on a ticket that we think, okay, this is fairly straightforward; I don't think there's too much here. But then we're suddenly getting into the codebase, and we realize, oh, this is a lot more work. And suddenly, a ticket will become an epic, and you really have one ticket that's spiraled or grown into five or six tickets. And then suddenly, you have a person that's really leading like a mini project in terms of the scope of the work that they are doing. So then that manifests in some interesting ways where then you have the person that feels a bit like a silo because they are the ones that are making all these big changes and working on this mini-project. And then there's the other one where there's a lot to do. There are a lot of customers, and there's a lot of customization for these customers. So then there are folks that are working really hard to keep the customers happy to give them what they need. And that's where we have too much to do. And we're prioritizing aggressively and trying to make sure that we're always working on the top priority. So like you said, it's super easy stuff. CHRIS: Yeah. To say it sincerely and realistically, you're just playing the game on hard mode right now. I don't think there is any singular or even multiple easy answers to this. I think one question I would have particularly as you started to talk about that, there are multiple customers each with individualized needs, so that's one of many surface areas that I might look at to say, "Can we sort of choke things off there?" So I've often been in organizations where there is this constant cycle of the sales team is going out. They're demoing against an InVision mock. They're selling things that don't exist. They're making promises that are ungrounded and, frankly, technically infeasible or incredibly complicated, but it's part of the deal. They just sold it, and now we have to implement it as a team. I've been on teams where that was just a continuing theme. And so the engineering team was just like, "We can never catch up because the goalpost just keeps moving." And so to whatever degree that might be true in this case, if there are ten different customers and each of them right now feels like they have an open line to make feature requests or other things like that, I would try to have the conversation of like, we've got to cut that off right now because we're struggling. We're not making the forward progress that we need to, and so we need to buy ourselves some time. And so that's one area that I would look at. Another would be scope, anywhere that you can, go into an aggressive scope cutting mode. And so things like, well, we could build our own modal dialog for this, but we could also use alert just like the JavaScript alert API. And what are all of the versions of that where we can say, "This is not going to be as nice, and as refined, and as fitting with the brand and feel and polish of the website. But ways that we can make an application that will be robust, that will work well on all of the devices that our users might be using but saves us a bunch of development time"? That's definitely something that I would look to. What you described about refactoring is interesting. So I agree that we're not in a position where we can just gently refactor as we find any little mess. We have to be somewhat ruthless in our prioritization there.
But like you said, when you get to that third time that a thing is working way harder, then take the time to do it. But really, like just every facet of the work, you just have to be a little better. If you're an individual developer and you're feeling stuck, raise your hand all the earlier because that being stuck, we don't have spare cycles right now. We need everybody to be working at maximum efficiency. And so if you've hit a wall, then raise your hand and grab somebody else, get a pair, rubber duck, whatever it is that will help you get unstuck. Because we're in a position where we need everybody moving as fast as they can. But also to say all of those aren't free. Every one of those where you're just like, yeah, do it the best you can. Dial it up to 11 on every front. That's going to drain the team, and so we have to also be mindful of that. This can't be forever. And so maybe it is bringing some new people onto the team or trying to restructure things so that we can have smaller communication channels. So it's only four people working together on this portion of the application, and therefore their communication lines are a bit simpler. That's one way that we can maybe save a little bit. But yeah, none of these are free. And so, we also need to be mindful that we can't just try harder forever. [laughs] That's a way to burn out the team. But what you're describing is like the perfect storm of every facet of this is difficult, and there's no singular answer. There's the theory of constraints (I think I'm saying that right.) where it's like, what's the part of our process that is introducing the most slowdowns? And so you go, and you tackle that. So if you imagine a website and the app is slow is the report that you're getting, and you're like, okay, what does that mean? And you instrument it, and you log some stuff out. And you're like, all right, turns out we have tons of N+1s. So frankly, everything else doesn't matter. I don't care if we've got a 3 megabyte JavaScript bundle right now; the 45 N+1s on the dashboard that's the thing that we need to tackle. So you start, and you focus on that. And now you've removed that constraint. And suddenly, the three megabyte JavaScript bundle is the new thing that is the most complicated. So you're like, okay, cool, let's look into tree shaking or whatever it is, but you move from one focus to another. And so that's another thing that could come to play here is like, which part of this is introducing the most pain? Is it feature churn? Is it unrealistic sales expectations? Is it developers getting stuck? And find the first of those and tackle it. But yeah, this is hard. STEPH: Yeah, it is. That's all really helpful, though. And then, I can share some of the things that we are experimenting with right now and then provide an update on how it's going. And one of the things that we're trying; I think it's similar to the theory of constraints. I'm not familiar with that, but based on the way you described it, I think they're related. One of the things that we are trying is breaking the group into smaller teams because there are between 7 and 10 of us. And so, trying to jump from one issue to the next you may have to really level up on different portions of the application to be able to make an impact. And there are areas that we really need infrastructure improvements and then essentially paving the way for other people to be able to move more quickly. We do have to prioritize some of that work as well. 
So if we break up into smaller teams, it addresses a couple of areas, or at least that's the goal is to address a couple of areas. One is we avoid having silos so that people aren't a bottleneck, or they're the only ones that are really running this mini-project and the only one that has context. Because then when that person realizes the scope has grown, bringing somebody on to help feels painful because then you're in an urgent state, but now you have to spend time leveling someone else up just so that they can help you, and that's tough. So the goal is that by having smaller teams, we will reduce that from happening because at least everything that feels like a small project...and by feels like a small project, I mean if we have more than one ticket that's associated with the same theme, that's going to start hinting at maybe this is more than just one ticket itself, and it might actually belong to an epic. Or there's a theme here, and maybe we should have two people working on this. And breaking people into groups, then we can focus on some people are focused more on the day-to-day activity. Some people are focused on another important portion of the codebase as we have what may be extracted. I'm going to say this, but we're going to move on, maybe extracted into its own service. [laughs] I know that's a hot one for us, so I'm just going to say it. CHRIS: I told you I can't be nerd sniped. This is fine. Let's continue on. [laughs] STEPH: [laughs] And then a small group can also focus on some of those infrastructure improvements that I was alluding to. So smaller teams is something that we are trying. We are also doing a really great job. I've been really happy and just proud of the team where folks are constantly reaching out to each other to say, "Hey, I'm done with my ticket. Who can I help?" So instead of immediately going to the backlog and grabbing the next thing. Because we recognize that because of this structure where some people are some silos, they have their own little mini backlog, which we are working to remove that to make sure everything is properly prioritized instead of getting assigned to one particular person. But we are reaching out to each other to say, "Hey, what can I do to help? What do you need to get done with your work before I go pick something else up?" The other two things that come to mind is who's setting the deadlines? I think you touched on this one as well. It's just understanding why is it urgent? Does it need to be urgent? What is the deadline? Is this something that internally we are driving? Is this something that was communicated without talking to the rest of the team? Is this just a really demanding customer? Are they setting unrealistic expectations? But having more communication around what is the sense of urgency? What happens if we miss this deadline? What happens if we don't get to this for a week, a month? What does that look like? And then also, my favorite are retros because then we can vote on what feels like the highest priority in terms of pain points or run these types of experiments like the smaller teams. So those are the current strategies that we have. And I'm very interested to see how they turn out because it is a tough way. Like you said, it's challenge mode, and it is going to burn people out. And it does make people feel fatigued when they have to jump from one priority to the next. So I'm very interested. It's a very interesting problem to me too. It just feels like something that I imagine a lot of teams may be facing. 
So I'm really excited if anybody else is facing a similar issue or has gone through a similar challenge mode; I'd love to hear how your team tackled it. CHRIS: Yeah, I'm super interested to hear the outcome of those experiments. As a slightly pointed question there, is there any semi-formal version of tracking the experiments? And is it just retro to retro that you're using for feedback on that? I've often been on teams where we have retro. We come up with it, and we're like, oh, this is a pain point. All right, let's try this. And then two weeks later, we're like, oh, did anyone actually do that? And then we just forget. And it's one of those things that I've tried to come up with better ways to actually manage, make slightly more explicit the experiments, and then have a timeline, have an almost scientific process of what's the hypothesis? What's the procedure? What are the results? Write up an executive summary. How'd it go? STEPH: We are currently using retro, but I like that idea of having something that's a bit more concrete. So we have action items. And typically, going through retro, I tend to revisit the action items first as a way to kick off retro. So then that highlights what did we do? What did we not do? What do we not want to do anymore? What needs to roll over to the current iteration? And I think that could be just a way that we chat about this. We try something new, and we see how it's going each week in retro. But I do like the idea of stating upfront this is what we're looking to achieve because I think that's not captured in the retro action item. We have the thing that we're doing, but we haven't captured this is what we hope to achieve by taking this experiment on. Mid-roll Ad And now a quick break to hear from today's sponsor, Scout APM. Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more. Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform. See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. STEPH: As for the other thing that you mentioned, I do have an idea for that because a former client that I worked with where we had experiments or things that we wanted to do, we were using Trello. And so we would often take those action items…or it was even more of a theme. It wasn't something that could be one-and-done. It was more of a daily reminder of, hey; we are trying this new thing. And so, we want to remind you each day to embrace this experiment and this practice. And so we would turn it into a Trello ticket, and then we would just leave it at the top of the board. 
So then, each day, as we were walking the board, it was a nice reminder to be like, hey, this is an ongoing experiment. Don't forget to do this. CHRIS: I do like the idea of bringing it into a stand-up potentially as like that's just a recurring point that we all have. So we can sort of revisit it, keep it top of mind, and discard it at some point if it's not useful. And if we're saying we're doing a thing, then let's do the thing and see how it goes. So yeah, very interested to hear the outcomes of the experiment and also the meta experiment framework that you're going to build here. Very interested to hear more about that. And just to say it again, this sounds like your perfect storm is not quite right because it doesn't sound like there's a ton of organizational dysfunction here. It sounds like this is just like, nah, it's hard. The code's not in perfect shape, but no code is. And there's just a lot of work to be done. And there are priorities because frankly, sometimes in the world, there are priorities, and you're sort of at the intersection of that. And I've been in plenty of teams where it was hard because of humans. In fact, that's often the reason of we're sort of making up problems, or we're poorly communicating or things like that. But it sounds like you're in the like, nope, this is just hard. And so, in a way, it sounds like you're thinking about it like, I don't know, it's kind of the challenge that I signed up for. Like, if we can win this, then there's going to be some good learnings that come out of that, and we're going to be all the better. And so, I wish you all the best of luck on that and would love to hear more about it in the future. STEPH: Thank you. And yeah, it has been such an interesting project with so many different challenges. And as you've mentioned, that is one area that is going really well where the people are wonderful. Everybody is doing their best and working hard. So that is not one of the competing challenges. And it is one of those; it's hard. There are a lot of external factors that are influencing the priority of our work. And then also, some external areas that we don't have control over that are forcing some of those deadlines where customers need something and not because they're being fussy, but they are themselves reacting to external deadlines that they don't have control over. So it is one of those where the people are great, and the challenges are just real, and we're working through them together. But it's also hard. But it's helpful chatting through all the different challenges with you. So I appreciate all of your thoughts on the matter. And I'll report some updates once I have some more information. On that note, shall we wrap up? CHRIS: Let's wrap up. STEPH: The show notes for this episode can be found at bikeshed.fm. CHRIS: This show is produced and edited by Mandy Moore. STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show. CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter. And I'm @christoomey. STEPH: And I'm @SViccari. CHRIS: Or you can email us at hosts@bikeshed.fm. STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week. All: Byeeeeeeee! Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
The presenter is James Thurston, G3ict, who is joined by Chris Misra, University of Massachusetts, Amherst. SPEAKER: Please welcome James Thurston and Chris Misra. James is the Vice President for G3ict, where he leads the design and implementation of new worldwide advocacy strategies and programs to scale up G3ict's global impact. G3ict is the Global Initiative for Inclusive Information and Communication Technologies promoting the rights of Persons with Disabilities in the Digital Age. Chris is the Vice Chancellor and CIO at the University of Massachusetts – Amherst. At the University of Massachusetts Amherst, information technology plays a crucial role in many key areas, including but not limited to student success and engagement, research competitiveness, and multi-modal education. Today they will be looking at how leveraging accessibility and inclusion can provide an adaptive and accessible multi-modal IT ecosystem to support campuses. Chris will review findings, digital inclusion gaps, next steps for improvements at the University of Massachusetts – Amherst and more! JAMES: Our goal with this session is to share with all of you some detail about the U Mass approach to being more accessible, more inclusive through technology, through its technology assets and deployments. And Chris and I over the next hour want to surface and share with you, I think, what are some valuable and actionable experiences from U Mass that will hopefully apply to your own accessibility journey in your higher education institution. This particular session is the third in the IAAP higher ed series. It's also the first of the next three sessions, which relate to, and are sort of sourced from, G3ict's work with universities and higher education institutions, using our smart university digital inclusion maturity model tool. And I'm just going to briefly give you a little bit of information on that, just so it will make a little bit more sense as Chris and I start to have a conversation about our work with Chris and what Chris has been leading and driving there at the University of Massachusetts. The smart university digital inclusion maturity model tool, it's an assessment tool and a benchmarking tool. And it's really to help universities better understand how their digital transformation, how they're using technology, how their use of data is either supporting accessibility and inclusion of people with disabilities or potentially presenting barriers to the inclusion of people with disabilities, including faculty, staff and students. And even, really, the broader community where the university might sit. So, the tool itself, the assessment tool, it's made up of 28 variables, we call them enablers, and they define what it really means to be an inclusive smart university. They enable accessibility and enable inclusion. These variables, or enablers, contribute to the university's building up the capabilities that we know support greater inclusion and accessibility at a university. And with these capabilities and the tool, we're able to look at the role of things like leadership, the existence of a digital inclusion strategy or not, we look at the accessibility of the university's engagement channels, how it's pushing information out, getting information back, are those accessible, we look at things like the culture of diversity, is the university employing people with disabilities, is it training on disability and accessibility.
We look at things like procurement, what systems does the university have in place to make sure that its investments in technology and its deployments of technology are accessible. So, a whole range of issues that we know are pretty critical to a university becoming increasingly accessible, increasingly inclusive. And, of course, we do dig into technology and data, which are the backbone and the lifeblood of a smart university. And the way that we use these variables, these 28 enablers, these 18 capabilities, is in a three-step process. That is pretty straightforward. We do some analysis of documents, I.T. strategies, digital inclusion strategies, budgets, accessibility statements. We do some analysis of those. We make available to the university an online self-assessment where they sort of rate themselves across these variables. And then we actually do an expert site visit where we curate a team of global experts on inclusion and accessibility and bring them in to engage with the university, dig into some of these variables, and hopefully, at the same time, provide some help and assistance on pain points, issues that the university might be experiencing. And then the final step is we deliver a road map, which includes a set of scores for each of these variables and a set of priorities and recommendations for moving forward. So, if you're at a level 2 on a scale of 1 to 5 for procurement, these are the kinds of things you might think about doing to get to levels 3, 4 and 5. So pretty straightforward. The process with U Mass, we'll be talking -- jumping in with Chris in just a minute. We, I think, started that process last spring and sort of did the site visit, I think, in early summer this past year. And in that process, we reviewed probably more than 20 documents, these budgets, these strategies, these org charts, policy statements. We talked with more than 40 U Mass faculty and staff over ten different listening sessions. And then we delivered the road map. And in the road map, U Mass, I think, had real relative strengths in the area of leadership, and other areas were identified where there's an opportunity to really make some steps to have some improvements in the capabilities and ultimately in the accessibility and inclusion there. So that's a little bit of a background on how G3ict came to be working with U Mass. I thought it might be useful to sort of frame our conversation. And with that, I'm really excited now, and I've been looking forward to this discussion, Chris, for quite a while, of jumping in with you and hearing a little bit about the U Mass Amherst journey, where you are, where you're headed but maybe we can start, if you can tell us a little bit about the University of Massachusetts Amherst, give us a general sense of the university and how you're deploying technology there. CHRIS: Sure, thanks, James. So, U Mass Amherst, for those of you who aren't familiar with Massachusetts geography, I grew up in Massachusetts, so I know, we're about 90 minutes west of Boston, 175 miles north northeast of New York City. It's a relatively rural area, but it's a significant institution. We have about 24,000 undergraduate students, about 7,500 graduate students. About 1,500 instructional faculty. Largest state institution in New England, Research 1, $233 million, $1.3 billion budget, big. 1,500-acre campus, and the biggest thing is trying to find your way around the campus.
Our journey of accessibility came about really through just conversations and advocacy within the campus in terms of this has to be a key responsibility for us. Our technology platform is really very traditional, higher education. We migrated many of our services to the Cloud, extensive use of Zoom recently, Google, exchanging out platforms, and the challenge with the campus of this size is really just managing the breadth and depth of both a campus and a highly decentralized institution. JAMES: Great, thanks, Chris. We probably started having conversations about a year ago, actually, just as we were coming into the pandemic and universities, in particular, I think, were scrambling to try to figure out, okay, how do we fulfill our mission in this environment. Can you talk a little bit about sort of what that looked like as we were coming into the pandemic from a CIO perspective, the kinds of things that you were thinking about and needing to take steps on? CHRIS: Sure. There were sort of two interesting aspects. I mean, aside from it's amongst the longest days of my career in the past and probably ever going forward just in terms of how do you migrate an institution that size to an online education. We made a very early determination; we were one of the early schools who decided to go remote, we thought it would be two weeks, we took a double spring break. We quickly ramped up the technology portfolio. We were fortunate that we already had tools like Zoom, we had pretty good practice of online education, fairly robust online education school, but not a lot of digitally native capacity to teach instructionally remote. So, there's really two principal areas of impact. There's a principal area of impact in academics, and the impact in administration. Since we extended out the spring break for an extra week, we actually had two weeks to figure out how we were going to do these academics. But that meant we had to move the administration into an online world in a very short order of time. From the basic things, how are we going to pick up the mail, to how are we going to communicate, how do staff meetings work, and recognizing that institutionally we were a face-to-face campus, our staff meetings were face to face, our one-on-one meetings were face to face and we had to comport all of that. So, the social change was actually significant, and that led quickly to substantial change in the academic side as well. We saw increases of -- astounding increases in Zoom utilization. One of my favorite statistics on Zoom utilization is in the first week of -- I'm sorry -- in the first day of the first week when we brought our academics online, we used more Zoom time than the entire month previously. So, each day in April, we used the same number of Zoom hours as in the entire month of February. And that pace continued through the balance of the spring semester. JAMES: Chris, I remember that data point as well. And I often use it myself because I think it is a really easy, compelling example of this accelerated digital transformation. Can you talk a little bit about where -- how accessibility fits into I.T. and into the university in general? I know you've got a really great I.T. strategy, accessibility is embedded in there. I don't think that there's a specific digital inclusion or necessarily accessibility strategy, but maybe a little bit about strategy and organizational structure, just so we understand how accessibility fits in. CHRIS: Absolutely. So, we've actually been fortunate from an I.T.
perspective, we've had staff supporting accessibility but a very modest staff. I think when James did the assessment, we had a single staff member, at a high point we had two staff, and we're in the process of transitioning that as well. So, our overall accessibility strategy comes multi-fold. My team is responsible for the information technology, and that's across the board. That means we support students' technology use in the classroom, we support faculty's technology use, we provide general technology use for administration. We do not have responsibility for accessibility accommodations per se, we have a disability services team on campus, it's organized in our student affairs area. So, really, it's a key partnership working between student affairs, working with my central I.T. organization. I will say from a maturity perspective, though, we had staff, it was very much more about boutique service, solving discrete individual accommodations, and it hadn't crossed the line of being generalizable to most of our day-to-day normal use of population technology. It was very much targeted at a subset population that had self-disclosed a need for an accommodation. JAMES: And I know as part of this conversation, we'll get into a bit later, a discussion of these issues of silos and coordination and collaboration, which we had a lot of conversation about when we were working with you. So, maybe we can jump in now a little bit into this sort of notion of accelerated accessibility that happened for U Mass for sure but probably for most universities around the world because of the pandemic and what that looks like. And how -- maybe start with a little bit about how does the university deploy technology assets that are accessible and really are working for everyone, and what did it look like to have this sort of intensified effort to include a focus on accessibility as you were becoming more and more -- using technology more and more to do all of your services, both administrative and academic and teaching? CHRIS: Sure. So I'll say the structural change that really occurred was, I think, originally we treated accessibility as meeting the needs of identified individuals who had to have accommodations and making sure our web content was accessible, doing basic accessibility reviews, it was basic, WCAG, not a lot of detailed work and it was not invested across the board in terms of we had a lot of natively accessible tool set but it was really natively delivered accessible tool set, there wasn't a lot of work and push for us to drive an institutional priority around making sure our content was natively accessible, except where there was either liability or like I say, a dedicated accommodation. As we went into the pandemic, that really had to pivot because we realized, we no longer had the mechanism, we couldn't deploy a notetaker for a student in a classroom because there wasn't a classroom. We couldn't make point by point accommodations on either a technology or use case basis. So, we had to start generalizing. We were fortunate that we were in the midst of a transition of our strategic plan, so we were actually at a point of making that type of pivot, of identifying digital inclusion as a core priority going forward. And, so, we had a lot of the substrate work, but I'd say the pandemic really drove us to recognize it wasn't solely about a compliance obligation but much more about reaching our community where they're at.
JAMES: And as you were making that shift, were you -- some of what we had talked about in the past, when you were in the middle of all this, is there some -- much like what you would probably do on the security side of your work, any sort of risk rating system, and trying to make these decisions about where are we going to prioritize and focus first and those types of decisions when it comes to accessibility? CHRIS: Absolutely, yeah. So, one of the things, for me, I consider fortunate is prior to my role as a CIO I've been in a number of roles at U Mass. I came from a very technical background. But I spent many years in a security role. So, I was responsible for information security at the organization. Within the information security field, it's very much a derivative of the risk management field that works very heavily on risk, and concepts like maturity models play very heavily there. So, when you're assessing implementation of controls to mitigate security risk, you have to assess what is the cost of control, what is the value, what is the return. The easiest way to assess that is against a maturity model, so I had a lot of familiarity with the concept of maturity models. One of the things that made me very excited about the engagement of G3ict was the application of this discipline-type technology of applying a maturity model to a domain like accessibility because I had not seen that done before, but I had a lot of experience. What's nice about that, it gives you an abstract way of measuring your progress, although there can be a metric and a rating, it also talks about where you are legitimately relative to your peers but what steps you can take, and gives you a better mechanism to start prioritizing resource allocations. So, as I moved out of information security, into a CIO role, I changed from being responsible for compliance to being responsible for budget, priority and allocation. So being able to have a document like a maturity model that can help guide investment and show return relative to cost was a better framework for us to make ongoing decision making, and I felt more at home in that security field, like oh, we know this is a high risk, let's apply a resource here, even if the resource is fairly modest, it's going to get us significant return against that issue. JAMES: Can you -- if you're able, can you talk a little bit about some of those areas where you were making decisions at the time in this accelerated period of focus on accessibility in addition to a lot of other things? Where you are identifying risk and taking some steps specifically around improving the accessibility of your technology assets? CHRIS: Sure. And in some cases, what's interesting with the technology assets is our first task, because we are technologists, is let's just fix the technology. What it really came down to in many cases is the business process as well. So, when we started going through the assessment process, we realized, first and foremost, we have a 24,000 student population moving remote. We had to get in front of the faculty and instructors to explain why this was relevant. So, it wasn't so much about, hey, don't put a poorly scanned PDF up on your website, we'd already been providing those types of instructions, but it really had to pivot to, is your course content accessible natively.
And in that case, it is still digital accessibility, but it may be, have you applied alt tags to your PowerPoints, have you made sure you're not doing poorly rendered PDFs, is your content screen reader-able. It was these sorts of things that are actually technology related, but it was about the business process behind it. What we did, we formed a working group between my team, our university library, our center for teaching and learning, and our instructional designers, which we call our IDEAS group, it's a big long acronym I can never remember, but we put those together, as IDEAS is the support resource faculty primarily interact with. Library is a resource that provides a lot of the supplementary external materials, I.T. is a lot of times the bridging infrastructure. So, it was really about forming a coalition within campus, identifying priorities, informed by the maturity model as to where those risk areas are, and providing guidance, which wasn't just to apply technology, but to help individuals creating content to make the content accessible natively, because the incremental cost to them was much smaller than us throwing lots of money at making the technology do it for them. JAMES: You touched on a really important point that I think would resonate with any university around the world, which is the sort of decentralized structure of universities, we'll dig into that more deeply in a minute. But I'm just wondering, as you were partnering, and leading this accelerated digital transformation during the pandemic and focus on accessibility as part of that, how was that received? I recall in part of our conversations, for example, there was, with the faculty, there may have been some incentives around going digital, maybe even going digital and accessible at the same time. But, in general, how would you say this accelerated accessibility was received? CHRIS: So I would say it was received well. I was actually somewhat surprised at how well it was received. Those of you who have been at universities, especially in large universities, they're very decentralized power structures, recognize that change comes slowly. The ship turns slowly, as we like to say, right? It will get there eventually but it turns slowly. I was tremendously impressed with the empathy and the caring shown by the faculty and the instructors involved in supporting students at a distance, but they recognized an individual obligation. And, really, our role as technologists was to reduce that barrier to them to make their content accessible. So, there were some financial structure incentives, as we went into our subsequent semester, that helped faculty teaching online to build hybrid instruction. What we did, we developed a series of standards to make sure as our content went out, it met these standards, that was sort of the condition of the incentivising. So rather than make it a big deal, like hey, you all have to do accessibility, it was really embedded into an existing incentivization structure, but we added the accessibility obligations as additional compliance checks to go to an accessible-by-default role. I was concerned about the uptake we'd see from faculty, but I was very surprised. The other thing with decentralized higher education, as much as the ship turns slowly, once everybody gets where you're going, they generally get on board.
So, we took this more adapt-to-the-culture-of-the-campus approach -- adapt to the change culture of the campus and tie into those change mechanisms that are effective. That's what really helped us be more successful, I believe, that and the empathy of the faculty and the instructors. SPEAKER: The International Association of Accessibility Professionals membership consists of individuals and organizations representing various industries including the private sector, government, non-profits, and educational institutions. Membership benefits include products and services that support global systemic change around digital accessibility and the built environment. United in Accessibility, join I.A.A.P. and become a part of the global accessibility movement. JAMES: So, maybe take a little bit of a step back, but still thinking about the deployment of accessible, inclusive technology assets. Can you talk a little bit about your thinking, U Mass' thinking and approach to incident management? How do you remediate issues, how does that happen? And then the other piece I'd love to hear a little bit more about is testing, when it comes to accessibility -- automated and user testing? CHRIS: Sure. So, two-fold. On the testing piece, we've employed students both in our help desk and our accessibility office to do some of the testing. We actually are just launching another program to do broader usability testing, which includes accessibility testing, working in concert with some faculty in our writing program. They tend to have a good degree of expertise there. So, the other advantage of a higher education institution is that students are fresh, motivated, focused and quite inexpensive labor, and they like the work. It's great experience, it's great value to them, it's great value to us institutionally. So, we've really tied into that -- this is something we've done for many, many years, tying into a workforce that's motivated and interested. We've definitely seen that the awareness of our student body around accessibility issues is much greater in the last five to ten years than it has been previously. I've been asked about making sure content is accessible from a course perspective -- there's been a shift, and the challenge is that shift isn't necessarily as strongly perceived by the faculty instructing them, because they tend to be a little bit older. So, using the students to help motivate that work has really helped improve the accessibility piece of it, because we've embedded the testing more into the core processes. When we roll out new applications, whether it's a PeopleSoft application or a new web application, we're commissioning that testing as part of launches of applications, as well as new web properties. JAMES: Chris, Mark Nichols is asking a question. The standards that you're talking about before content goes out, or even other standards that you're looking at and testing against related to accessibility -- are they in-house standards or are you using global standards like WCAG? CHRIS: They are in-house standards developed off WCAG. But I will get James and Yulia a link afterwards. We posted up our academic standards, and they refer to those suggestions; they were built off of WCAG. One thing, just amongst everybody here: accessibility is not my first language. I'm an infosec guy, I was a technologist, I was a Linux sysadmin.
I know the acronyms, I know the space, but it's not quite my domain of expertise. I'm fortunate to have well-trained staff who understand this, both on my team and the disability services team, so we can absolutely share those standards. They're academic standards we posted for the fall semester of 2020. JAMES: So, Chris, I know, as I recall from our previous conversations and work, there were sort of nine legacy platforms that you guys had deployed. And I'm wondering, over the course of the many months since we've worked together, how your thinking about incident management has changed or evolved, how you're approaching that and dealing with that, and how much of an issue accessibility issues have become in this accelerated period? CHRIS: I mean, the challenge has been -- I believe we started talking about the accessibility review before the pandemic. I had high hopes that we would be able to make significant progress in some of our core administrative systems in the shorter term. And then the pandemic hit, and next thing I knew, we were running COVID testing sites for the western part of the state. We were running vaccination programs. We were one of the earliest vaccination programs for first responders. So, unfortunately, a lot of the resources I'd have used to help make accessibility improvements to our core applications really got put aside for new application deployment. What I will say is we've been strong about implementing accessibility standards for the new applications as we roll them out. So, at this point my hope is to get us back, likely as we refactor some of our applications, to do a more detailed review. It's definitely a goal, it's a stated goal, it's part of the road map and strategy going forward. It's just that, with the pandemic, the resource allocation tipped everything so sideways. I'm a little further behind than I hoped to be there. Legacy platforms, we haven't made as much progress as I was hoping to. We've certainly made progress. Where we've made significant progress is in the awareness and the accountability that accessibility is an issue that has to be accommodated at deployment or at refresh for an application. That was a huge improvement that we hadn't been able to make as successfully in the past. JAMES: You've shared, at least with me, what I think are some really interesting facts about how you as a CIO had to evolve into using technology to support a dramatically increased public health role of the university for the state during the pandemic, which is pretty amazing. There's another question from Peter: who decides the threshold for compliance? It's never 100%. CHRIS: And, so, again, this is where I'm going to go a little bit onto my information security soap box, right? The definition of compliance is just bending to the will of another. So, yeah, it's never 100%. It's not going to be 100%. Really what we do is use a risk-based model, understanding where the risk is. Historically, that usually started with either liability of the institution or legal accommodation requirements. That's a bar to cross, that's a legal obligation to cross, but it's really not meeting this notion of digital inclusion as a core value of the campus. So, the threshold is really handled on a case-by-case basis. There isn't an arbitrary threshold. What we focus on is: these are the recommendations to make your course content accessible, to make your web property accessible. These are the standards.
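One concrete example of the kind of WCAG-derived rule such in-house standards tend to encode is the WCAG 2.x contrast-ratio calculation, with the 4.5:1 AA threshold for body text. The formula below follows the published spec; the sample colors are arbitrary.

```python
def _linear(channel: float) -> float:
    # sRGB channel (0-1) to linear light, per the WCAG relative-luminance definition.
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#767676", "#ffffff")
print(f"{ratio:.2f}:1 --", "passes" if ratio >= 4.5 else "fails", "WCAG AA for body text")
```

For what it's worth, #767676 on white comes out around 4.5:1, which is roughly the lightest grey that still passes AA for body text.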
From a web property perspective, we do actually have a compliance checklist. We actually have a team inside our university relations group that will run through both automated testing and some hand-based testing to look at: does the content render in a screen reader, does it provide appropriate alt image tags, and things like that. My goal with compliance is always making sure that we're investing the right amount of resource to ensure that we serve the largest portion of the population as effectively as we can. Information security is a risk management game. Accessibility and compliance become a risk management game. And it's hard sometimes to think of it in those terms, but one of the challenges, I think, that I've seen working with some of my staff is that staff who come with a tremendous degree of accessibility concern are passionate, profound and focused. The challenge is also balancing those resources against the other resource needs of campus, right? How much time can I spend on ensuring my web properties are accessible if, at the same time, I have to take those same resources and allocate them to make sure we're setting up a COVID vaccination clinic? It's really a continuum of resource allocations. For me, it's thinking about how I can make sure there's always a guarantee of resource allocation towards accessibility, recognizing that it might not be core to our mission. What can be core to our mission is deploying accessible applications on a going-forward basis. But our core mission is instructing students, performing research, being a land grant institution. We always have to balance that resource allocation to make sure we're moving the ball forward on these different fronts, while serving, first and foremost, what it is we're core here to do: instruct students. Accessibility is a component of that, but it can't be the dominating component. It has to be an absolutely key component, but the dominating one is delivering students a path to their future. JAMES: Thanks, Chris. Before we go on to the next topic, briefly, can you talk about your staff, the technology staff at the university, even more broadly, perhaps -- the skill and training on accessibility, and how you think about and approach that? CHRIS: Yeah, I think there are three aspects to that. The first aspect was, we've had some staff transition in our accessibility staff. Making sure we have the appropriate professional training for folks who are doing the accommodation work or engagement and consultation work -- that's always been fairly straightforward; that's an institutional investment. That makes good sense. Where we've seen the real value is from a leadership perspective: raising accessibility as a topic of concern at senior levels of the institution. So, raising this concept -- our provost is fluent with the concept of accessibility, right? He's not going to go out and do a WCAG review, but he gets the concept that he can instruct his Deans that this is going to have to be a key component of the content their faculty deliver on a go-forward basis. From a training perspective, there's a lot of low-cost effort that we can put in place to raise accessibility on the radar from a leadership perspective, discuss it with a broad team of not just executive but operational, manager and cross-functional teams, and we've also been very successful in engaging our students in accessibility conversations: what does that mean to you?
Because my concept of accessibility is how big the font is; a student's concept of accessibility may be how it renders on a cell phone. That's a very different problem set, depending on what technology you apply to it. It doesn't have to be, but we need to collect those voices in terms of understanding what that means, and a lot of that does not involve a lot of out-of-pocket cost. JAMES: And just one more question, then we'll move on to one of my favorite topics, which is sort of collaboration across departments. From Kathy: how do you decide what to test? Do you do spot checks of certain course websites and more checking of applications used by larger populations? CHRIS: Sure. So let me break up the administrative from the academic side of things. From the administrative side of things, we actually have a review process for our web properties, run jointly by our I.T. team and our university relations team that's responsible for our web properties. So, there's actually a checkoff evaluative process for our core web properties. I'm fully confident there are probably some research lab websites or some individual P.I. websites that were created in WordPress that probably don't meet the testing. We focus on the high-visibility targets to make sure the information that's most relevant to a large population gets out there. From a course perspective, we do have a couple of very large enrollment courses. We tend to focus most of our resources on ensuring the platform is accessible natively. There are always compliance issues, right? There's always some faculty member that wants to take their PDF from 1982, turn it 10 degrees, scan it and hope it will work. We do spot checks, especially on the large enrollment courses, but generally we focus on ensuring the platforms are natively compliant, and then providing strong guidance to the faculty so they have the parameters of what are those steps they can take that are relatively de minimis, a relatively incremental burden for them, but provide a more inclusive experience natively. JAMES: Thanks so much. Now let's shift gears a little bit, Chris, and get into the issues of collaboration, coordination, working across departments at a big university on a big campus. One of the other things that stuck in my mind that you said early on when we started working together was how at the University of Massachusetts Amherst there are some really amazing -- I give you full credit for this term -- these pockets of heroic effort. Which I think will resonate with anyone who's doing work in the accessibility field, in any kind of organization: that there are really good practices happening in parts of the university. And I think some of the ones that had come up at U Mass were around UDL, instructional design, and some other areas like the Assistive Technology Center -- some really good resources and practices. But siloed and not scaled, because they are siloed in departments. And even some academic departments, I think, may have been a little ahead of others in terms of their approach to inclusion and accessibility. I know that since we last worked together, and during these last several months, the accelerated accessibility period, you've done some work on greater collaboration and coordination. Can you talk a little bit about that, including maybe some description of what it felt like before taking some improving steps? CHRIS: Sure.
I mean, for those of you who have spent time in higher education -- and this is true of both large and small institutions -- higher education tends to be a very siloed structure, at least in my experience. There are a couple of exceptions, but there tends to be a lot of belief that faculty are experts in their domain and, by virtue of being experts in their domain, there are a lot of notions of self-rule and self-governance, and that sometimes extends out to administration. I will avoid opining too deeply on that. But there are some challenges that come from that. There are communication challenges, there are logistical challenges. What you end up seeing is subcultures developing around what's important. And what I've observed -- and I've seen this both in technology fields as well as in accessibility -- and let me take it out of the accessibility domain: my email team for many years thought they delivered the best email application out there. They understood how it worked, nobody else understood how it worked, but it made a lot of sense to them, and they thought they were doing great. And so, within their minds, they were providing heroic effort, but the impact from a user perspective was not the heroic effort they thought it was going to be. I've observed similar challenges within accessibility at the campus as well. There are these pockets of brilliance, pockets of heroes, that are out there working with good empathy. The challenge is, they don't always have, or they have not been provided, the degree of leadership support to have these conversations more broadly. So, one of the very small questions that came up had to do with a resource allocation around providing captioning of course materials for students that had defined accessibilities -- defined accommodations -- and it became this substantial issue because the costs were decentralized out to each of the departments. And many of our departments, by virtue of being academic, tend to run on very thin budgets. So, when we stopped this conversation as we went into the pandemic and said, what is the net budget impact here? I can't remember what the number was. Let's say it was $40,000 across the campus. You know, when I brought it up to the right levels of leadership, they're, like, we're arguing over this? We have a $1.3 billion budget. Don't get me wrong, $40,000 is real money, but that's not the thing we should be arguing over. By virtue of us decentralizing decision making so that it became 40 decisions of $1,000 each, it became much more difficult to get the resource allocation.
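Back-of-the-envelope arithmetic for the captioning example above: the same spend framed as 40 departmental decisions versus one central line item. The department count and even split are illustrative; only the roughly $40,000 and $1.3 billion figures come from the conversation.

```python
total_captioning_cost = 40_000          # approximate figure from the conversation
departments = 40                        # illustrative split
institutional_budget = 1_300_000_000    # figure from the conversation

per_department = total_captioning_cost / departments
central_share = total_captioning_cost / institutional_budget

print(f"Decentralized: {departments} separate ~${per_department:,.0f} decisions on thin departmental budgets")
print(f"Centralized:   one decision worth {central_share:.4%} of the institutional budget")
```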
So the key observation I'd say is: clearly articulating why this is important, clearly articulating that when we marshal our resources collectively, we can make changes that don't seem so big when you're working in a larger context, and it really involves that collaboration between and amongst groups. And I've actually been very pleased -- I think going through the review with G3ict certainly delivered us a road map, it certainly delivered us a maturity model, it gave us a sense of where we sat, but it actually opened up conversations amongst teams that have worked and sat together for many, many years, where those conversations weren't as effective. You know, we always joke -- my background, like I said, is information security, working with auditors -- if the audit doesn't tell you what you want to know, you did something wrong, right? I will say, I have been very pleased; James was very objective, and the team he brought in was excellent, and it told us what we wanted to hear: you've got some pockets of brilliance, but there's some coordination, some logistics, some alignment you need to do. Having a third party assert that brought more credibility to this notion of accessibility than any empathetic call from staff on campus could have. JAMES: Thank you for that, Chris. And I think to your credit -- and we've done a good number of these reviews of universities and of smart cities as well -- I think one of the things that you did was pretty courageous: you involved an enormous number of people from both the academic side of the university and the administrative side in a large number of conversations. I think over these ten conversations that our expert team had with your university community, there were 200 participants, 40 unique individuals, I think, but they were heavily attended, and some of the discussions were quite passionate, I will say, because the passion was there. Can you talk a little bit about -- recognizing and wanting to make even more progress on collaboration and breaking down some of these silos and amplifying some of these heroic efforts -- what are some of these pockets that you would love to see replicated? And I'd also be curious to hear a little bit about what are some of the groups that can help promote this kind of collaboration. We had talked, in particular, in our conversations with U Mass, about how the faculty Senate actually had been pretty engaged on these issues of accessibility. There is an academic advisory committee, I think, on accessibility. Are there any sorts of areas or groups that can help you as the CIO promote this collaboration? CHRIS: Yeah, you know, that's a great question, James. One of the key things, and one of the things that I found sort of helpful to me in my career, both in the CIO role I'm in and previously in the information security role, is identifying those governance structures and where they have efficacy. That's one of the things that I've observed at least in some of the accessibility staff I've worked with. They have passion, they have technical focus, they have deep empathy and deep caring, but they don't have the experience with how universities govern themselves or what the governance structures are, where decision authority really rests. I've had staff that think I have all sorts of decision authority; I have responsibility for my $30-odd million of budget, but really the extent of the responsibility I have is responsibility for standards. As we get into decision making, I have to tie into bodies like our faculty Senate, the information technology advisory council, some of these academic advisory councils. We have other faculty and administration leadership groups and task forces that are focused on the shared governance structure of universities, and we have administrative focus units. So working with accessibility teams to identify where those power structures exist, how change occurs in an institution, and how you can be effective at making this case amongst all the other many cases -- that was one of the key things. Again, I was fortunate to have a lot of this experience in information security. I observed many of my peers in information security at other institutions come in and try to win the day on information security solely on technical merit.
Like, well, we're going to go to this, we're going to spend another $100,000 on this new antivirus thing, because it's incrementally better than this other thing. And quite honestly, when you're making that case to a CFO or to a Chancellor or Provost, that's $100,000 for a technical thing I don't understand. Whereas, if you can turn it into a conversation about either mitigating institutional risk or delivering institutional benefit, and understanding how change actually occurs on a campus -- when you make that case in business terms, it becomes more rational and plausible amongst the thousand other things the Provost or the Chancellor or the CFO has been asked for in the last day. So that's the key transition for me: how do you find those power structures, how do you identify those governance structures, how do you make it a business value proposition, not solely a technical or empathetic proposition. JAMES: That's actually a perfect segue, Chris, into a topic that I know you feel passionately about, and that we recognize as well in our assessment tool, the maturity model, as really pretty critical to an increasing commitment and capability on accessibility and inclusion. And that is what we call, you know, the business case for accessibility. Moving beyond -- particularly here in the United States, every university has a legal requirement to be accessible and inclusive, in other countries as well, but you and I are sitting or standing here in the U.S. today. We'd like to sort of move the conversation beyond risk avoidance and legal compliance to what is the business case -- as you say, the why, or the value proposition, of accessibility. Based on your experience, either over the past year as a result of or as part of this assessment, or just in general, can you talk a little bit more about that key issue of how you are trying to tap into the why and the value proposition at U Mass? CHRIS: Absolutely. So, one of the key value conversations we have on a regular basis -- and this is not a conversation unique to U Mass, it's not a conversation even unique to the northeastern United States -- but within the United States, there is a significant decline coming in college-age students in the coming years, based off of just changes in birth rates, patterns like that. What you're seeing is increasing competition within the field for highly qualified students. You've seen this manifest through -- U Mass was deeply involved in the closure of Mount Ida College, we actually took over parts of the campus, we inherited some of the students from there, and, you know, recently, I know Becker College in Worcester announced that it is intending to close as well. One of the key things that drives university budgets is attracting and retaining strong students to maintain competitiveness. And if the population is shrinking, one thing you can do from a business value perspective is make sure that you're delivering a natively accessible education to appeal to as broad a population of students as possible. If we are, by virtue of not providing accessible content, unintentionally excluding some arbitrary percentage -- say, even 5% or 10% of our students -- that's 10% of a student population that will not become paying students, high-quality students. We're excluding a portion of our population that could engage. And that's based on a conjecture of 10%. If the real figure is much higher, we could be unintentionally excluding an even larger potential population, at a time when we know that population is going to shrink.
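A worked version of the enrollment argument above. The 24,000-student figure and the 5-10% conjectures come from the conversation; the tuition number is purely hypothetical, just to show the shape of the exposure.

```python
enrollment = 24_000          # student population cited in the conversation
annual_tuition = 16_000      # hypothetical blended tuition figure, for illustration only

for excluded_share in (0.05, 0.10):
    students_at_risk = enrollment * excluded_share
    revenue_at_risk = students_at_risk * annual_tuition
    print(f"{excluded_share:.0%} excluded -> {students_at_risk:,.0f} students, "
          f"roughly ${revenue_at_risk:,.0f} per year in tuition exposure")
```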
So from a very raw perspective: institutional budgets are driven through a combination of undergraduate and graduate tuition and research funding. If we're not strongly positioned to meet market demand -- and that can either mean meeting market demand where there's growth, or being more competitive and approachable to a larger population if there's a reduction in that potential student population -- then we are not tied into the strategic mission of the institution: to fulfill our role as a land grant, to provide instruction to residents of the commonwealth and to create a workforce for the commonwealth. We have over 250,000 living alumni from U Mass, and the vast majority of them stay in Massachusetts. At U Mass, we graduate more students than the top eight private institutions in the state of Massachusetts combined. That means we're tied deeply to the workforce. So, if we cannot find a way to make our content accessible, we're not only risking our own potential economic future, we're actually risking issues of workforce development and the long-term competitiveness of the state, potentially. JAMES: Yeah. A couple of things in there that I would love to follow up on. One is, you've talked about the role of students, the diversity of students, as a driver for the competitiveness of U Mass in fulfilling your many roles as a land grant state university. As you're thinking about the why and the value proposition, are you having discussions or thinking about -- we certainly discussed this as part of our engagement -- how the technology assets you're deploying, and the accessibility of them, also impact faculty and staff? Is that part of the calculus as well? CHRIS: It absolutely is. Because, again, that same rubric holds: to remain a competitive institution, we have to be competitive in our hiring practices. And that means approaching as broad a population of the available talent pool out there as we can. If we are not delivering natively accessible experiences -- whether that is directly instructional or something as pedestrian as H.R. forms, right, everybody's got to do an H.R. form somewhere -- and we've had our challenges in the institution with three-copy carbon forms; our vice Chancellor of human resources loves to say he got rid of the last typewriter not that many years ago, right? There are clearly some substantial issues that we've had. If we're not competitive with the potential workforce -- at the highly skilled faculty level, at the highly skilled technical level, but also at all levels of the organization -- we're going to potentially compromise the available resource pool as well. So, again, it comes back to the business case: I see a compelling business case to make sure accessibility is core to our digital transformation, because it allows our long-term access to a larger candidate pool. With the move to remote work, we're having very serious conversations about what that means long term, right? We've had staff working remotely, and we're going to struggle, like every other public and private institution is now, with what it means for workforces returning, if the pandemic slows as we're hoping. Would we accept this notion of broader remote work? Does that increase our potential labor pool? Those are all interesting questions that are going to have to be worked out.
But if we cannot position our institution to be natively digitally inclusive, we're excluding a portion of our population that may have accessibility accommodations -- we're just turning our back on them from the get-go. And that's a challenge. That's a loss both to us and a loss of potentially highly talented, highly skilled individuals who could make this university stronger. JAMES: So, Chris, I would imagine that with your expertise and experience in the information security space, you've sort of tackled this issue of the value proposition, the why, of security. As you start and advance conversations about the business case, the why and the value proposition of accessibility, how is that being received? Where is it being received well, and where is it a bit more of a struggle? CHRIS: I'd say it's being received well at the high level, when I talk about this notion of making sure we're reaching the broadest accessible pool, making sure we're going to remain competitive, tying it to workforce. I think the value proposition at the executive level is very strong there. We've always been very successful with the value proposition at a very operational level, for our students and our staff who are providing accessibility accommodations, who are working with students on a one-to-one basis, for our help desk who are taking calls. Where the challenge is -- and I think we have a path to move forward -- is for people who have neither the high-level strategy view nor the day-to-day blocking and tackling: making the value proposition of why this is one more thing they should do. Why should you take ten more minutes to ensure accessible alt image tags, why should you take two more minutes to turn on the captioning features in Zoom or PowerPoint? So, I ended up teaching again this fall. I taught for many years at U Mass, then took a number of years off. When I taught this fall, I taught entirely remotely, entirely by Zoom. Zoom's native captioning feature wasn't there. So, I elected to use PowerPoint, use Office 365, and turn on the captioning when I lectured. I used Zoom to record the lecture, and it put the captions into it. It's not perfect. It wasn't great. But the cost to me was thinking to do it, clicking a check box in Office 365 on PowerPoint, and making sure I hit play and record. So, the incremental burden to me of applying captioning to course content -- and I've taught this course material for 20 years; this is the first year I did that -- was two minutes of clicking, and it took me about ten minutes going through each of my slide decks to apply alt image tags. That investment of my time as an instructor is absolutely worth it to make sure that content is more accessible. And that's the value proposition I think we have to hit for that middle portion of the population; if we can move that population, the impact is going to be tremendous. SPEAKER: With the adoption of WCAG 2.1 in many countries, there is an increased demand for web developers, designers and other professionals with knowledge of web accessibility standards and guidelines. With this growth comes the need for an objectively verified level of expertise. The Web Accessibility Specialist exam will provide individuals and employers with the ability to assess web accessibility competence. Complete the WAS and CPACC exams to earn the special designation of Certified Professional in Web Accessibility!
I sat down with Chris Hay of LoveWorkRevolution.co to talk about his journey through his 30s and how he plans on changing as many lives as possible with his new mastermind. Enjoy! Joe Transcript Joe: My guest this week is Chris Hay; you can find Chris's website at LoveWorkRevolution.co. Chris did some exploration during his 30s right after selling his company. He just turned 40 in November of 2020, and he's working on a new project, a new mastermind, if you will. He has an acronym for the project he's working on and the mastermind he's building. Joe: And it's H.E.A.R.T. H is heal and hear your heroic heart, E is explore your genius, A is accept your mission, R is rebirth yourself, and T is take action and trust. I would encourage you to check out his website at LoveWorkRevolution.co and also get in touch with him at Chris@LoveWorkRevolution.co. Please sit back and enjoy my conversation with Chris Hay. Joe: My guest today is Chris Hay. Chris and I hit it off really well in a completely unrelated conversation to what we're going to talk about today. And during that conversation, we realized that we both are really excited about the same thing. And so I wanted him to come on and talk to us about that. He's originally from New Zealand. He's coming to us now from Barcelona, Spain, where he currently lives. Chris, welcome to the podcast. Chris: Thank you so much. It's a great pleasure and honor to be here. And yeah, as I mentioned to you, I'm deeply grateful, particularly because this is my first guest appearance. So I'm a little bit nervous, but I'm sure you'll go easy on me. You know, I've got a lot of learnings that I've taken and been working on condensing down, and I'm really excited to share them with your audience and beyond. So thanks so much for having me. Joe: Yeah, absolutely my pleasure. So most of the people who have listened to any of my past podcasts know that for me, it's important to have the guest give their back story so that we understand who you are, where you came from, and it sort of lays the foundation for the conversation that we're going to dig into deeper about all that you're doing now and the new, exciting project that you're working on. So if you can -- and this is great for me too, because you and I have only chatted a few times -- it would be really cool to understand where Chris Hay came from and where Chris Hay is going. Chris: Yeah, absolutely. Thanks so much. So, yeah, an interesting thing happened to me when I was 31 years old in 2011, and I kind of feel like this is the beginning of the modern-day part of my life.
Like, everything that happened up until then was kind of a dry run or whatever. And then this one moment kind of feels like where I was sort of born again, if you like, into what would become, I guess, the middle part of my life or something like that. So basically, yeah, it's 2011, I'm 31 years old, and I'm sitting on the beach in Bali on my own, looking out on the most spectacular sunset. And I've come here to celebrate selling my business, which I'd been working on all throughout my 20s, kind of leading up to this moment with this great anticipation that when I sold my business, you know, I'd have cash, and that cash would look like happiness. And as I'm sitting there looking out on the sunset, I just feel dreadfully lost and empty and just completely bamboozled, I guess, for lack of a better word, that everything I'd invested -- all of my hopes and dreams of building up to this milestone and then cashing out bringing all this happiness -- and of course, it just doesn't. Chris: So as I sat there over the next three days while I was on this vacation, I really was reflecting on the paradigm change that had happened in my brain, because I realized that, concurrently, I'd lost my purpose and my reason for getting out of bed in the morning, because previously I'd had this business. Now I'm like, what am I going to do with the rest of my life? I'd also lost my identity to a large degree, because I had been quite wrapped up in and invested in the identity of being this 20-something-year-old, pretty successful entrepreneur. And now I'm kind of like, shit, I just don't even know what I'm going to do next, or how I'm going to go about figuring that out. But perhaps more than anything, I realized that I'd really lost the art of how to find happiness. And I really wanted happiness that, maybe not lasting forever, but at least, you know, could last more than, like, five minutes or something like that. So as I reflected on all of that, I was thinking, OK, where do I go from here? And I knew that it had to be something entrepreneurial, because I didn't want to go back and get a job, like, working for the man. But I knew that it had to be about much more than just making money. Chris: It had to be something much more impactful and meaningful and genuinely about helping people, because my previous business, to be totally honest, like, when I started it, I really was pretty much just thinking about the money. That was my main motivation. So I reflected on all of this and, you know, kind of sat there for the rest of this vacation moping around, feeling sorry for myself. I really had no idea where to start to try to figure out how to rebuild my life. And so began what I call, and other people refer to it as, the dark night of the soul -- which turned out for me to be quite a long and painful, drawn-out process that really, to be honest with you, lasted pretty much all of my thirties. I was thirty-one when I sold that business, and I turned 40 in November, just gone. So really, throughout my thirties was this really intense and difficult period of introspection and figuring out all the elements that I don't like about myself and the ones I do like about myself, and really learning deeply, not just about myself, but about how I could show up and do work that I would love and have a positive impact on the world. Joe: Can I ask you one quick question? Chris: Yeah.
Yeah, of course. Joe: So we hear this so often from someone that's successful, right? We hear it in the wisdom of those who have accomplished something and reached some sort of financial stability, and then they get to that point, or that moment, of what they call success, right? What they originally were striving for, which was the money and creating this entity. And then potentially, right, if you have a business, the goal is eventually to sell it and cash out on that and then maybe go to the next thing, right? Joe: But we hear so often that people get to that spot, they sell, they have the financial freedom, and then it's not what they thought it was going to be. And I think that's the hard thing for people that maybe haven't gotten to that point yet -- and the only reason I want to stop you here is because these things also get into my own brain -- like, what was more painful: struggling financially, or getting to the point where you had the money and then it wasn't all it was meant to be, if you had the choice? Chris: A great question. It's a really good question. Chris: So I guess I would like to reframe that question. What I'm hearing is: what was the greatest challenge -- was it the struggle for money, or was it the struggle for meaning, which came afterwards? Well, until I cashed out of that business, the money was the biggest struggle I'd ever had. Like, struggling to build that business was the greatest challenge in my life up until that time. But it was superseded by what came next, which was the struggle for meaning. And I think -- I don't know, I imagine for some lucky people this might come a lot more naturally than it did for me, people who are younger or more successful than me in whatever metric you might consider their success, whether it's financial or perhaps a more holistic measure of success: how well they've found their passion or their purpose, or, ideally, the combination of those things -- they're doing something they're passionate about and are doing well financially out of it. I mean, that's the gold standard. I think that's what we're all aiming for. So for me, making money was hard. But making money -- or even just an income, a comfortable income that's good enough to live off and whatnot -- while doing what you truly feel like you want to do is, I would say, in some ways more challenging, because it requires that you know yourself at a much deeper level, which can only happen with great introspection and over time. Chris: But in some ways it's easier once you find it, because, you know, people say that if you find the kind of work that you're supposed to do, work that you love, you'll never feel like you work another day in your life. Right. So I think as you get closer to finding it, as you sniff it out and you're on the trail and you're getting closer and closer with the various projects you might be engaged in, and then honing in on the work that you truly want to do, that feels like play for you and makes you come alive -- then I guess that part of it gets easier. And then you try to build your skill level to a standard where the world will reflect the value back to you in the form of financial remuneration, if that makes sense. That's a very long way of answering your question. I guess it's easier in some ways and more challenging in other ways.
But certainly it requires a much deeper level of self-awareness, I think, which takes longer to get to than just, how do we get a product designed in China and sell it on Amazon, for example? And I suspect plenty of people are doing that. Joe: But I mean, some people possibly will struggle their whole life. And it's unfortunate. And I mean that financially, or also in that they're not doing what they were meant to do on this earth, right? So the choice is, if you gave someone the choice -- and the struggles usually are the financial part of it, your health, and then whether or not you enjoy your life, which means you're doing something that resonates with your soul, right? Maybe those are the big ones -- you know, there are probably more, I mean, a million books on it -- but if I think about myself, it's like, OK, I have my health, I love a lot of the things that I'm doing, and I might not be at the financial level that I want, but I think if I have the choice, I'd rather be where I am and do this than be financially free but hate what I have to wake up and do every day, right? And I think the problem is, until you get to the point where maybe you got to -- where you sold a company and you had some financial freedom -- when people hear someone like you say, hey, you know, I sold my company -- not just you, I mean anybody -- I sold my company and I made a lot of money and I got to the end goal of what I set out to do, and at the end I wasn't happy -- if someone hasn't done that, they have a hard time relating to or resonating with that. Chris: Yeah. Yeah. Well, I mean, another way to put it is, at the end of last year, the year before last, coming into 2020 -- let me sort of zoom out again for a second -- this wasn't a one-and-done for me, you know. After I sold the business and had this experience of reaching an extrinsic goal and finding this feeling of emptiness on the other side of it, you would think that that would be enough to knock that paradigm completely out of my head and replace it with the new paradigm. Chris: Only do things that are intrinsically rewarding. Which, by the way, science has found: intrinsically rewarding tasks -- things that you would do even without a reward, that are rewarding in their own right -- are a richer form of motivation that leads to greater happiness. So if you can pursue that, then you will be happier, even if you never get to the high-paying job. There's research around this -- like, for example, Daniel Pink's Motivation 3.0, where he talks about intrinsic and extrinsic motivation. So if you understand that you will have a richer, better life experience by being driven by things like purpose and mastery and autonomy, then you can build that into your job crafting if you're employed, or into your business if you're an entrepreneur. So you don't necessarily have to get to that milestone to realize it. That's what I started out by talking about: you would think that having this experience once, at such a deep level, would be enough to totally rewire your brain so that you wouldn't make the same mistake again. But for me at least -- and I think it's common a lot in our culture; we're so hard-wired to be motivated by extrinsic motivators, money, the trappings of success -- it's very hard. I didn't just learn it once. Chris: It's been, like, the evolution of my lifetime.
After I sold that business, I vowed to myself that whatever came next had to be about more than just the money, had to be more meaningful. But I would still, for the several years that came after, be tantalized and tempted by lucrative opportunities. I spent countless lost months, cumulatively, going down the rabbit hole of, oh, this looks like, you know, an interesting business idea -- which was just financially motivated and whatnot. Chris: So I got to the point anyway, going into this last year, where my New Year's resolution was to remove all extrinsic goals and replace them with one goal, which was inner peace. Because I think that we oftentimes put inner peace or happiness on the other side of extrinsic goals -- like, when I achieve this milestone, then I'll feel happy. And when we do that, we, you know, we rob ourselves of happiness here and now. Chris: That's the trap that I found myself falling into time and time again. And I'm pretty sure I'm not alone. And when we have these big, lofty goals, it creates a friction as well, between where the goal is and where we are now, and that elicits all of these feelings of inadequacy and lack of self-worth, because I'm not, quote unquote, there yet, you know what I mean? So I'm still kind of trying to get my head around it, to be honest, at the deepest level. Chris: But I think that it's about kind of holding space for ideas and visions that you might have, that you want to achieve, but also being less attached to them, I guess, you know, so that they don't rob you of your happiness here and now, and, above all else, being present and being grateful for the moment. And I've found as well -- and I've spoken with a lot of other people who have had a shared experience -- that the more we do that and remove ourselves from this relentless rushing towards the goal that may or may not ever eventuate, leaning into the future at the cost of sacrificing our happiness here and now -- the more we stay present, the more, kind of, you know, without getting too woo-woo, the more kind of magic shows up in our lives. But first of all, we just appreciate the moment more. So maybe, I don't know, you take time to go for a walk in the morning and smell the fresh air and admire your neighbor's flower garden or whatever it might be. So you notice those things. But also -- and this is kind of borderline woo-woo -- magic can sometimes show up, like synchronicities and whatnot. And there's one grand synchronicity that kind of unfolded in my life, which led to a deeper understanding of my work and my greatest gift that I feel I've received and that I can kind of share with others. So I'm curious if you want to go there -- I can share that story with you as well. Joe: Yeah. So, because I kind of interrupted you, because I wanted to clarify that, you know, you've seen both sides -- not everybody, I guess that was my point, not everybody gets to see both sections, right? And if they do, like you said, they're either really happy doing what they're doing -- and I think it comes when you're serving others in a way that's in alignment with yourself, right? So if you have figured it out, which is really hard to do -- but once you do, and you can stick with it, and not have it be like the way our world runs right now -- to be present and all the things you're talking about, to be still and to leave space, it's super hard, right?
We have so many things coming at us, and we're told that you have to be really active out in the world of social media -- and I'm just as guilty as the next person. So it's really hard to wake up and take that walk and walk past the flower and actually smell it. It just doesn't happen. It's really hard, right? So I interrupted you when you were talking about how, once you sold your company, through your 30s you just felt like you were still trying to figure out how to find this spark, this bliss, this inner peace, right? So I guess that's where we're at now, because I so rudely interrupted you. Joe: But I want to. Chris: That's cool, that's cool. Let me come back, because I have one more thought that might help put a bow on this. Chris: You know, this concept of, if you haven't made it financially yet, how does that reconcile with the experience I'm sharing? One way to look at it, I guess, is like Maslow's hierarchy, right? At the base of the hierarchy are food, clothing, shelter, all of those things. And at the top is self-actualization. Chris: And, in fact, above that is self-transcendence, which is another category that Maslow added in his twilight years, which is kind of little known and underreported in management textbooks. But it's an interesting concept that maybe we'll come back to later. Chris: And so there's no doubt that if you haven't met your basic financial needs yet, the crisis of meaning is going to mean much less to you, because you've got to cover those basic needs first. So in some ways a crisis of meaning is, I guess, to a degree, a privileged problem. But I still believe, as did Maslow, that if you are lucky enough to get past the baseline financial needs -- and if you don't feel like you've made it there yet, I would encourage you to reflect on whether you really do need all of the things that you currently pay money for, because it's easy to get caught up in the trappings. Chris: And spend a lot of money on unnecessary things. Maybe you can live on much less, and then spend time instead working toward things that are truly, deeply meaningful to you. And, as you said, figuring out how you can be of service to others while serving yourself as well, rather than sacrificing yourself to serve others. Chris: But at the top of Maslow's hierarchy, which is the apex of the human experience, is self-transcendence -- giving yourself to something greater. As Viktor Frankl said -- he's got a great quote where he says self-actualization is possible only as a side effect of self-transcendence. So you can only become your best self by losing yourself in service to a cause greater than yourself, essentially. Joe: Perfect. OK, so now, to get back to the story -- when did you get over that hump, let's call it? Chris: Yeah, so to be honest, I'm still getting over it, I feel like, you know. But there was one pivotal, I guess, turning moment, where I was at a personal development event in Hawaii. We were asked -- actually, Patrick Combs, I was attending one of his retreats in Hawaii, and he was leading an exercise where he had us write our eulogy from the perspective of living, essentially, our best life from that point forward. It had been a really busy day, and it was the last thing at night.
It was dark, dark outside, and we lay there on the ground with a little torch light, open to whatever would come to us in terms of how we'd want to be remembered -- our eulogy, having lived our life from that point forward. And so I tried to empty my mind out. And really the only thing that came -- I really believe that love is the universal connector, the one thing that we all share in common. And so I just wrote, Chris touched a hundred million -- I thought, I'll dream big, ya know -- a hundred million people with love in this lifetime. And then I thought, this is my one chance to dream really big. So I added an extra zero, so I made it, like, Chris touched one billion people with love in this lifetime. And I didn't really know exactly what that meant. I still don't, really. But then, as I read it out -- we were given the opportunity to share with the rest of the group. Chris: And as I read it out, I felt this wave of embarrassment rush over me, like, oh my God, I made a fool of myself. Like, who am I to dream so big that I could possibly impact a billion people? Oh, my God. Fortunately, it was the last exercise of the day. And I started back to my room and kind of hid it under my pillow, pretty much. And then I woke up the next morning and I was still really grappling with the sense of shame and embarrassment of having put this out there and dreamed that big. And so I went for a run down to the beach and did a meditation on the beach. And then on the way back, I had to stop at a public restroom. And I'm standing at the urinal, of all places, and I look up on the wall and someone had drawn a love heart with wings. And it's like, oh, that's kind of weird. And then I look around the bathroom, and actually someone had drawn love hearts all over the walls, which I hadn't noticed coming in. So I'm standing there like, what the hell, you know? Like, what does this mean? And, you know, I grew up in a very scientifically minded family. And so I try not to believe in woo-woo stuff or synchronicities, and that kind of thing in my life -- my brain would be screaming pattern recognition bias. You know, the human brain is programmed to recognize things like that. And as I reflected on it -- maybe it was pattern recognition from my brain, or maybe just my desire to believe or make sense of things -- I guess I chose to adopt that event as some kind of affirmation, a potential affirmation from the universe, to go all in, ya know? Like, I'd let my mind go empty the night before when I had to think about what I wanted to be remembered for, and it felt like it was the source, or the universe, kind of speaking to me and wanting this. But I think this idea of love really is the most important thing. And I think that it's -- you know, I'm not sure -- I meditate a lot, right? And when I meditate, I feel what can only be described as love. And people will get that through prayer, or maybe being in nature, watching a sunset, that kind of thing. But when you stop thinking and your mind goes empty, you're serene, blissful; it's beautiful. And it feels to me like -- I can't think of a better word other than love. And so I don't know if love is the fabric of the cosmos, or some underlying kind of fabric of human consciousness, or both of those things, I'm not really sure. But there's something mystical about it. And so I chose to adopt this as part of my story.
And as I reflected on it more, I kind of thought, well, not only do I take this as an affirmation that I should pursue love and try to make the world a more loving place, but I thought this love heart with wings is an interesting motif, because a love heart with wings -- it's going somewhere. It invites following. Chris: And as I reflected on it, I thought of this Steve Jobs quote. That was one of the first videos that I watched after I came home from that vacation in Bali, having sold my business earlier. He has this great commencement speech at Stanford, I think, where he says, above all else, follow your heart and intuition; they somehow already know who you truly want to become. And as I reflected on that, I thought, you know, through all of these trials and tribulations of my thirties, trying to figure out who I am, how I can show up, how I can help other people -- if there's one thing I can put my hand on my heart and say, it's that I really did follow my heart. And then I was on a flight -- and you know how you get those quiet times on flights where you might be doing some journaling or whatever, and you just kind of get these flashes of inspiration -- and so I started to etch out what would become this framework around how to follow your heart and everything I learned about that. So, I mean, I built it around an acronym, H.E.A.R.T. H is heal and hear your heroic heart, E is explore your genius, A is accept your mission, R is rebirth yourself, and T is take action and trust. And so, yeah, it's five modules, and it's really just been the greatest joy of my life to try to condense down everything that I've learned over the last decade and try to make something beautiful out of it, out of all of that struggle. And I guess, to relate back to your initial question -- you know, is it easier to make money or to make meaning, and I guess part of that is, you know, what's more gratifying -- in my experience, you know, this is brand new, and I'm actually looking for beta-test people to kind of come and be guinea pigs with me, but this has been the most meaningful and interesting and validating experience of my life. Like, it's the gift of getting to teach everything I've learnt, really. Joe: Yeah, that's awesome. So ultimately, I know you're going to think of a name for all of this, or you have ideas, but we're -- like you said, we're being transparent. This is new. And we didn't want to force some sort of title onto this, but you had it. Chris: Maybe it's like your path to your greatest destiny, or I'm playing with, like, discover your destiny or something like that. Chris: Now, one of the things that I left out is this thing about love and making the world a more loving place and following your heart: I think love is the language of the heart. And I think that when you follow your heart and you find the work that you feel called to do, that can be your greatest conduit, one of your greatest conduits, for love. I think you create that work with the motivation of love, and you serve -- I think you end up serving people who resonate with your story and appreciate what you've gone through and are probably going through something similar. And so you have empathy for them and their case, and you want to help them. And so that feels like love to me.
And so your vocation can become one of your greatest vehicles for manifesting love. If you believe, as I do, that love is the solution for most of the world's problems, then by following your heart to find the vocation that brings you to life the most, your heart knows the way, not only to who you truly want to become, but also to how we can create a more loving world in the process.

Joe: Yeah, and it's really interesting. I've put up posts about this in certain Facebook groups that I'm in, and I've reflected on this a lot. In the day and age we're in now, it seems like we are constantly trying to fix something that's broken, and I'm talking about individuals. We get to a certain point and we realize, like you did when you sold your company, like many of us when we hit certain points in our lives, that this isn't right. This doesn't feel right. It's not making me happy. All of those things go through your head, and I keep thinking, gosh, I wish we could just get to young adults earlier, so this whole thing shifts. Take anything that the people you and I know are doing, or people like Tony Robbins, the work that he does, Dean Graziosi, the good work Patrick's doing with Eric. If we could take all of that and just slide it earlier. And I know that at a certain point young minds don't have the attention span for it, they don't have the interest in it, they're not mature enough to understand it yet.

Joe: But there's got to be a point where, if we took all of this and brought it way earlier in the life span of a human, we could get to young people early and say, listen, before you get to where the rest of us are. Now, not everybody's like that. Some people find what they were meant to do at a really young age and are happy and life is grand. But I would say the majority don't. They wander around really lost for a really long time, and the only thing they always seem to gravitate to is making money. It's all financial. So they go down that path, and then they come to realize later in life that it didn't work, and now we're in repair mode. Instead, it's like, God, if we could just figure out a way to guide young people and say, listen, we can tell you now that money is not the answer. It's following your heart. It's being nice to people, and loving and caring and empathetic and transparent, and having integrity, and all of those things. If you could learn those and navigate that, all the rest will come to you because you're deserving of it. But it's just such a frustrating thing for me.

Chris: Yeah, absolutely. It reminds me of a story from my friend Raj. He has a similar kind of story, a wildly successful coffee company and really a business empire at this stage. But he was working for an oil company when he graduated college, and one of his mentors within the company was a guy who was 60 or something like that and really didn't have a huge passion for the work. But he had another interest outside of work, maybe it was, I don't know, woodwork or something tactile that he wanted to do with his hands, and he was always talking about doing it when he retired. Yeah.
And then he passed away, just like that, at age 60 or whatever, and never reached retirement. You know, for my friend Raj, that was it.

Chris: That was all he needed, to be like, I'm not falling into that trap. You know, life is for the living. So, yeah.

Chris: And that ties back to what you started out by saying, about how we often only get this information near the end, and wanting to get it to people younger.

Chris: If you're younger and you're listening to this, the one thing I would encourage you to do, as well as following your heart, or maybe in tandem with it, or even as something that comes before it, because "follow your heart" is a slightly amorphous thing to say, I'm cognizant of that, is to follow your curiosity. That's a great, great place to start. For me, having sold my real estate business, I had no idea that I was going to end up essentially in the personal development space. Where I come from in New Zealand, I'd never met a life coach in my life, you know what I mean?

Chris: It didn't even really register for me that that was a viable option. So it took me a long time to put the two together and go, oh my God, what if I could teach people everything I've learned, give them some light during a dark night of the soul or through some of those challenges, and how gratifying that would be. It took a long time to get to that place, and I wouldn't have got there had I not followed my curiosity, and my curiosity in the first place was for personal development content. So I sold that business and I watched that Steve Jobs video.

Chris: And the next several years were just a whole lot of personal development books and YouTube videos and everything I could get my hands on to try to figure myself out. So follow your curiosity, whatever your curiosity is.

Chris: Look, I watched an awesome video interview with Common, the rap artist, just a couple of days ago.

Chris: I was out walking my baby, and he was talking about the same concepts, essentially being of service, finding your greatest gifts and using them to serve others. He said he started out in music because he enjoyed it.

Chris: For him it was therapeutic and cathartic; it was fun, playful. I guess he was following his own curiosity. People say, follow your passion. I don't. I ask, what are you curious about? What are the books you read? What are the experiences you'd love to have? Where might you love to travel? Who would you love to speak to if you had the opportunity? Follow your curiosity. So Common followed his curiosity in music and then realized how it could benefit other people; the way his audience reacted was obviously resonating with them, giving them an emotive experience, giving them joy. And then ultimately the cash comes as a result of that. So that's one of the really interesting models I discovered along the way as well. And it connects to something I saw in a talk about how the default model we have in society for happiness is essentially wrong.
Chris: You know, it's based around Do, Have, Be: you do whatever it takes to have the stuff that you think you need, houses, cars, material possessions, in order to be happy. Right. But if you flip that around, there's another model, which I would advocate for, which is Be, Do, Have. I'll explain. Essentially: be happy now. Cutting-edge cognitive psychology research actually shows that when we're happy here and now, happiness and optimism fuel performance and achievement. If you're interested in that, you can look up a book called The Happiness Advantage by Shawn Achor, where he dives deeply into this and finds that when we show up in a happy, optimistic state and then try to perform, we're more open-minded, we're more creative, all that stuff. So be happy now, and don't put happiness or peace on the other side of extrinsic goals; find the happiness now. That's the Be. The next step is the Do: do what you love. And if you don't know yet what it is you love to do, I would say follow that curiosity.

Chris: And when you do what you love, ultimately, as Steve Jobs says, as with all matters of the heart, you'll know when you find it. Mihaly Csikszentmihalyi has a wonderful book called Flow where he talks about the flow state that athletes get into when they're playing, that painters are in when they're painting, that musicians are in when they're making music. But it's not only elite performers in those fields; anyone can find it. A cook can be in flow when they're cooking, a coder is in flow when they're coding, a designer is in flow when they're designing, and people who have podcasts are probably in flow when they're in conversation. That flow state is where we do our best work. So if you can find work that brings you into flow, where time disappears and you lose touch with the outside world and you're just lost in the work, that's when you know you're on the right path, and that work will never feel like work. It really feels like play. So that's the middle of the model: be happy now, then do the work that you love, the work that brings you into flow.

Chris: And then Have. In the first model, you're doing whatever is necessary in order to have; in this model, you're already happy and you're doing what you love, and the Have part kind of follows you, gets magnetized to you, because you're doing your best work, which happens in flow. Eventually, sooner or later, when you reach a certain degree of competence at it, the world will reward you. People will take notice and be like, holy crap, Joe's podcast is amazing, and this other work that Joe does, he obviously loves doing it and shows up with this immense passion, it's so inspiring, I want to be a part of that, tell me how I can be one of Joe's clients. The money gets magnetized to you and becomes a by-product, rather than you going out to get the money; you're going out to do what you love and be of service. Yeah.

Joe: It's very interesting, because I'm doing some work now in my own career, and part of it is the piece with Russell Brunson, and the way he talks about it. Literally, on today's live webinar that we did,
he got on there and right out of the gate he was like, if you are here to make money, you're in the wrong place. I am here to get you to shift your mindset and to figure out what it is that you are here to do and how you are here to serve others. He said, when you figure that piece out, all of the rest falls into place. And it's kind of like the whole idea that the universe gives you more of what you think about and what you're attracted to. So if you're attracted to complaining and feeling like woe is me and all of those things, that's what it delivers more of. Right. If you shift it and say, listen, the more people I can help, the more love I can spread, then all of that goodness just naturally happens, and all the other things fall into place. But it's just really hard for us where we are in our lives. Again, God, if I only knew that 20 or 30 years ago; that's what's really frustrating. So, yeah. And I want to get back to this work that you're doing and what you're about to offer to the world. What form is it going to take? When you said you want beta testers, is this a course that you're going to run people through? Explain that piece of it to me.

Chris: Yeah, for sure. So I'm thinking of a 90-day program, OK, 90 days, with a small group, maybe four or five or ten people, and basically just a donation, more or less, if you like. You know, it's not about the money, but I think if somebody pays some money, at least they'll show up in a more committed way. So whatever people can afford or are comfortable with. I think the material is so important, and my passion is to get it out to whoever wants it and not let people be hamstrung if they have limited financial means.

Chris: So, you know, no big price point. And I think a 90-day program to start with is enough to really get people pretty deeply set in the concepts and really understand the material, and then later we can see whether it becomes something longer, a year-long thing or month by month or whatever.

Joe: And what's the product going to look like? Is it going to be in a Facebook group? Is it on your website? Is it some piece of software you've developed, or basically kind of like a mastermind in a training environment, where you'll have a small group that meets once a week for the 90 days, and then a Facebook group alongside it?

Chris: Yeah. So there are these five modules that we'll be stepping through. And just real quickly, here's what I mean by that. You're familiar with The Hero's Journey, Joseph Campbell's Hero's Journey?

Joe: Yes.

Chris: So Star Wars and every major blockbuster movie is basically written according to this format. There's a hero who starts at home, safe in their nice, warm bed, more or less like the hobbits in the Shire, and then there's some catalyst, the call to adventure, and they go out into the world, where they meet allies and enemies, face obstacles, and overcome them.
And in the process they gain a kind of self-awareness, ultimately face their biggest fears, and then return home, essentially with the power to bestow boons upon their fellow man, as Joseph Campbell would put it. And it shows up across all of humanity's oldest mythology.

Chris: So Joseph Campbell wrote a book called "The Hero with a Thousand Faces" where he laid all of this out, and George Lucas, with Star Wars, was one of the first to really adopt it in a major motion picture. But it's a really interesting frame to apply to your own life.

Chris: And I'd argue it's not just for the movies. There's a reason we resonate with these heroes in the movies: we admire their courage and their following of their heart, really, to face their demons and ultimately return home with a bigger vision of themselves, able to help their fellow people.

Chris: So part of the program draws on the work around growth versus fixed mindset, the idea that if you think you can, you can, and if you think you can't, you can't, and the difference between a fixed and a growth mindset. And I would argue that viewing yourself, as Joseph Campbell puts it, as the hero of your own life story, or even just playing with that concept, that you have to face titanic challenges and surmount them, and then ask how you can grow, what you can learn as a result, and how you can benefit other people with what you learn as you go through that personal growth, viewing your life through that lens, is the ultimate growth mindset. Then the H is Heal and hear your heroic heart. The heal part is, you know, they say we can only love others as much as we love ourselves, so this is about developing self-love, really becoming more compassionate with yourself, improving your internal narrative and being more loving toward yourself, so that when you move through the program and figure out the work you're meant to do, you'll be showing up from a place of love. And to hear your heart, this is about tapping into your inner wisdom through journaling and meditation and things like that. Then Explore your genius is built around discovering your superpowers, your zone of genius, as Gay Hendricks puts it, as opposed to your zone of excellence or zone of competence: separating out what is truly your genius, the activities that bring you into flow most often, and doing everything you can to structure your work life around those tasks and get rid of everything that drags you out of them.

Chris: And then Accept your mission is built around service, and a little bit around Maslow's hierarchy and how self-actualization is possible only as a side effect of self-transcendence. To serve and truly contribute to others is how we become our best selves. We see this in our political leaders and so on. If you think of Nelson Mandela, or Gandhi, or Mother Teresa, some of these icons that we really admire, what do we admire about them? We admire how much they've been of service to other people and the effect that they've had.
Chris: But it's not only political leaders; it's business leaders too. Elon Musk, for example, said that coming out of PayPal, he asked himself, what are some of the other problems most likely to impact the future of humanity, not, what's the best way to make money? Interesting. I read an article a couple of days ago saying he's just surpassed Jeff Bezos as the world's wealthiest individual, obviously on the rise of the electric car and everything that's happening with the climate. So yeah, accepting your mission is about figuring out what cause you might give your life to. Ultimately, what do you care about more than you care about yourself? What would you do even if you knew you would fail? A mission so big that you could spend the rest of your life pursuing it and still be satisfied even if you didn't fully realize it, as long as you contributed something towards it. Then R stands for Rebirth, which is really just stepping into that new identity, because people will know you as they've always known you and expect you to be who they always thought you were, and when you step into your new life, a lot of that has to change. So it's about dealing with the fallout of some of those relationships that need to change, and also how to pursue the new relationships that will move you forward and surround yourself with people who won't let you fail. And then, finally, Take action and trust.

Chris: The take-action part sounds like what it is: holding each other accountable. Part of this program will obviously have accountability groups, with the positive peer pressure that comes from having to show up and account for whether or not you did the thing you said last week you were going to do, the thing that was going to move the needle on your most important projects.

Chris: And then Trust. You're finally just trusting, and I guess that's the slightly mystical part. When I talk about trust, I talk about that event that happened to me in Hawaii and how it invited me to put aside my rational, left-brain, scientific thinking mind and believe that just maybe, you know, the universe might be conspiring to bring great things about for people who have pure intentions.

Chris: So, yes, that's it.

Joe: That's awesome. And I guess it's safe to say you're in rebirth mode, right?

Chris: Yeah, exactly. You got it. Yes.

Joe: Well, awesome. OK, so what is the website URL? I'm going to put it all in the notes, but I just want to make sure.

Chris: Yeah, yeah.

Chris: It's Love Work Revolution, so loveworkrevolution.co.

Chris: And the word revolution is an interesting one. I talk about how, by following your heart, you can find the work that you love, and that will bring more love into the world. And then the revolution piece comes from Gallup, the research institute over in the States. They ran a massive survey, interviewing hundreds of thousands of people, and found that, I think it was eighty-seven percent, I could be slightly wrong, but it's in the eighties, 80-something percent of people are either disengaged or actively disengaged in their work. That's a disaster, not only for the personal suffering of all those people who have to show up every day for work that they hate,
but also for the untapped human potential that's just going to waste, because people are sitting there not really giving a crap about what they do. And at the same time, humanity faces all these immense difficulties and challenges globally, like climate change and poverty, all these really meaningful causes that people could engage with, and instead we're languishing in work we don't care about. And Malcolm Gladwell, in his book "The Tipping Point", talks about a kind of magic number around 20 percent: when about 20 percent of people latch onto an idea, you reach a tipping point and it can spread to the rest of the community.

Chris: So the proportion of people who are disengaged in their work is in the high eighties, and the number of people who need to find and do what they love in order to create a revolution, a tipping point where we see a sea change that ripples out to the rest of the population, is around 20 percent.

Chris: So there's only about six percent of people that we have to move to find work that they love, and we create a more loving world. That's why it's Love Work Revolution, loveworkrevolution.co. So, .co.

Joe: OK, cool. I'll put that in the show notes. If someone wants to become a part of this, what is the best way for them to get in touch with you?

Chris: Yeah, just shoot me an email. It's Chris, C-H-R-I-S, at loveworkrevolution.co.

Joe: Perfect, awesome! OK, I'll get all of this in the show notes. And I wish you luck with this. I know this is it for you. I can tell how it comes out of you; I see your eyes light up, and you just know, right, that this is what you've wanted to do, that this speaks to you. I think it's going to be amazing. I'm glad that you've decided to do this, and I look forward to seeing it blossom and help a lot of people out there.

Chris: Thank you so much, Joe. Today's been a big deal for me; as I mentioned, this is my first podcast interview talking about this stuff, so I just really appreciate you giving me the space and letting me connect with your audience. And as you mentioned, you can tell this is it for me. It really is. This is the last 10 years of my life accumulating and coming full circle; it's my way of making meaning and purpose and sense out of all the struggle of the last 10 years. So, needless to say, I am deeply passionate about this and intend to do it for a very long time. And I look forward to several years from now when you and I can catch up and have a beer together and say, hey, remember that time I was on my first ever podcast with you? So I really appreciate you having me, man. Thank you.

Joe: Yeah, it's absolutely my pleasure. I'm glad to be here at the beginning of all this; I'll actually get to see it turn into something great, and I'm looking forward to it. So, Chris, thank you so much for taking the time. I know it's late there in Barcelona; it's probably been a long day for you. It was really nice to talk with you, and I'm super, super excited about this for you. Again, I'm looking forward to seeing what happens.

Chris: Thank you. Peace and love, Joe, and to your audience. Thanks for listening.

Joe: Yeah, OK. We'll talk soon. Thank you.

Chris: All right. Bye for now...
This month we hear from Chris and Jessica Duncan… and Zakai, their 4-month-old, who chimes in too! Chris and Jessica share insight into the importance of purity, how and why God designed sex for marriage, and the power of soul ties, while highlighting how God's redemption reaches our most broken, messy parts and pasts, and the beautiful way He uses marriage to bind up these delicate parts of our pasts.

6:00-11:50: They talk about the first year of marriage and letting go of the expectations they started with.

11:50-14:20: How they met. The actual greatest story! Jessica spotted Chris on a reality TV show and, 4 years later, they connected.

14:20-17:40: Prior to committing their lives to Jesus, Chris and Jess dated for 3 years. They open up about what the basis of their relationship was: sex. But did they really know each other?

"Looking back, I can't figure out how or why we started having sex. Did we feel love for each other? Fast forward to when we stopped having sex: we found new ways to love each other, to communicate beyond just the act of having sex. It's not even a connection or intimacy; it's really just empty pleasure. We didn't even have a relationship with each other. It was just that. We didn't know who the other was." - Jessica

"Before Jesus, we saw sex as for yourself. You get your self-gratification, you get your sex, and that's it. That's what we know now; it wasn't what we knew back then." - Chris

17:40: Jessica let Chris go after 3 years of dating because of what she realized as she began seeking a relationship with Jesus. Finding Jesus became more important than any intimate relationship Jessica had with Chris while they were just dating, and that's what really spoke to him. He thought, "You're willing to choose someone in some old book over me? I'm standing right here in front of you. You want to shut off this pleasure that's going on?" - Chris

"It was a battle and it took time, because there was a person in front of me. Jesus became real the more I looked… the more I sought Him. For a long time, I thought, 'Okay, if something happens between us, then I can fully dive into my relationship with God.' I knew that's what I should be doing, but I kept falling into the same old 'But this is my boyfriend…' and twisting the words in the Bible. And I thought, 'Well, one day I want to marry him, so that means he's my husband.' But there was a point where God said, 'He's not your husband; he hasn't committed to that.' And I realized my husband could have been somewhere else." - Jessica

"There came a point where I just said, Lord, take him. Take him away; this is my chance to just walk with You and not be distracted. If he's not my husband, just let him be that guy, and leave it that way." - Jessica

"When he came back around, if I was going to allow him back into my life, I had to be honest with him. I had to tell him I wasn't going to pursue a relationship with a man who isn't a man of God, and that means more than just putting it in your bio. It's walking in his purpose, being convicted by what the Word of God says, leading her to Christ, not drawing her to you." - Jessica

"It's not just about saying 'God first'; it's actually putting God first and letting Him transform you." - Chris

"Sex was what was stopping me from walking truly with Him and the purpose He had for me." - Jessica

22:00: God's definition of a relationship.
"It wasn't until we talked about this that the true purpose of our relationship finally started revealing itself." - Jessica

"So we put Him first and tried to understand what His definition of a relationship was, or is, instead of our own understanding of what a relationship is." - Chris

"We're fallen; we're always going to define things our own way. It's so important to go back to the Bible, to what the Bible defines marriage as, what a relationship is, how it should be. We can try to define it so many ways, but it's always going to be imperfect because we're imperfect people. The Bib…"