Podcasts about hillel wayne

  • 20PODCASTS
  • 37EPISODES
  • 35mAVG DURATION
  • 1MONTHLY NEW EPISODE
  • Mar 24, 2025LATEST

POPULARITY

20172018201920202021202220232024


Best podcasts about hillel wayne

Latest podcast episodes about hillel wayne

The Changelog
Revenge of the junior developer (News)

The Changelog

Play Episode Listen Later Mar 24, 2025 8:14


Steve Yegge's latest rant about the future of "coding", Ethan McCue shares some life altering Postgres patterns, Hillel Wayne makes the case for Verification-First Development, Gerd Zellweger experienced lots of pain setting up GitHub Actions & Cascii is a web-based ASCII diagram builder.

Changelog News
Revenge of the junior developer

Changelog News

Play Episode Listen Later Mar 24, 2025 8:14


Steve Yegge's latest rant about the future of "coding", Ethan McCue shares some life altering Postgres patterns, Hillel Wayne makes the case for Verification-First Development, Gerd Zellweger experienced lots of pain setting up GitHub Actions & Cascii is a web-based ASCII diagram builder.

Changelog Master Feed
Revenge of the junior developer (Changelog News #137)

Changelog Master Feed

Play Episode Listen Later Mar 24, 2025 8:14 Transcription Available


Steve Yegge's latest rant about the future of "coding", Ethan McCue shares some life altering Postgres patterns, Hillel Wayne makes the case for Verification-First Development, Gerd Zellweger experienced lots of pain setting up GitHub Actions & Cascii is a web-based ASCII diagram builder.

SoftwareArchitektur im Stream
Are We Engineers? With Hillel Wayne

SoftwareArchitektur im Stream

Play Episode Listen Later Mar 27, 2024 56:06


Software engineering stands apart from other engineering disciplines - or does it? Some argue that we are too informal to be deemed engineers, while others believe “real” engineers follow traditional, waterfall methods because things are much more stable in their domains. Some even argue that software development should be seen as an art or craft. To address the question of whether software engineers are “real” engineers, Hillel Wayne interviewed professionals who crossed over from traditional engineering to software engineering. In this episode, we will delve into the insights gained from these. Links Is Software Engineering Real Engineering? Hillel Wayne YOW! 2023 YouTube hillelwayne.com Are We Really Engineers? (Crossover project 1/3) We Are Not Special(Crossover project 2/3) What Engineering Can Teach (an Learn From) Us (Crossover project 3/3) The Joel Test Documenting Software Architectures (Paul Clements et al.) Handbook of Industrial Engineering (Gavriel Salvendy) Hillel Wayne & Laurent Bossavit - Is It All Built on Sand - What Do We Actually Know About Software Development? Auftragstaktik - Agilität beim Militär? mit Sönke Marahrens Crew Ressource Management - Wie geht die Luftfahrt mit dem Faktor Mensch um?

The Bike Shed
398: Developing Heuristics For Writing Software

The Bike Shed

Play Episode Listen Later Aug 22, 2023 34:07


Want a cool cucumber salad? Joël's got you covered. Stephanie has evolved and found some pickles she enjoys. Experienced programmers use a lot of heuristics or "rules of thumb" about what makes their code better. These aren't always true, but they work in most situations. Stephanie and Joël discuss a range of heuristics, how to use them, how to come up with them, how to know when to break them, and how to teach them to more junior devs. Pickled mustard seeds (https://www.youtube.com/watch?v=aLMFGk7Ylw0) The purpose of a system is what it does (https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does) Intro to empirical software engineering by Hillel Wayne (https://www.youtube.com/watch?v=WELBnE33dpY) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So, as of the recording of this, summer is in full swing, and it's the time of year where we have all these, you know, fresh vegetables out, so I've been really enjoying a lot of those. I think this week; in particular, I've been going into, like, all the variations on cucumber salads. STEPHANIE: Ooh. JOËL: Yeah. So, that's been kind of fun for me. A fun thing I've been doing to spice this up is pickling mustard seeds to add as a topping. That's actually really amazing. It adds just a little bit of acidity, a little bit of crunch, a little bit of texture. And it's pretty. STEPHANIE: That sounds so delicious. And also, I was going to share something about pickles about what's new in my world. [laughs] But first, I am curious, what has been your go-to cucumber salad that you put this pickled mustard seed situation on top? JOËL: So, cucumbers and tomatoes is just the base of everything. And then, it kind of goes with random things I have in my fridge. A little bit of goat cheese on top can be a great topping, big fan of balsamic glaze. You can just get, like, a bottle of that at the grocery store, the pickled mustard seeds. I've recently been trying topping with a fried egg. STEPHANIE: Ooh, that sounds really fun. It kind of, like, adds a bit of savoriness and creaminess and maybe even, like, the crunchy fried edges. That sounds really yummy. JOËL: Particularly if you do it over easy where the center is not fully cooked. When the egg breaks, you effectively get salad dressing for free. STEPHANIE: That sounds so delicious. JOËL: Summer vegetables, they're great. STEPHANIE: They are great. Last year, I did have a cucumber garden, as in a garden, and a few cucumber plants that were too prolific for me, to be honest. I found myself overrun with cucumbers and having to give them away because we just didn't eat them enough. And this year, we scaled back a little bit [laughs] on the cucs. But I am so excited to bring up what's new in my world now because it's, like, so related, and we did not plan this at all. But I have a silly little thing to share about my own pickle journey. So, I used to be a pickle hater. JOËL: You know what? Same. STEPHANIE: Oh my gosh, incredible. Another new thing we've learned about each other. I really, like, wanted to like pickles because, you know, when you order a sandwich in a restaurant, it always comes with the pickle spear. And neither me nor my partner were into pickles, and we would always leave the spear uneaten on the plate, and we felt so bad about it. I felt really bad about it. And so, every, like, three to six months or so, I'd be like, okay, I'm going to gather the courage to try the pickle again and see if maybe my taste buds have changed, and this time I'll like it. And, you know, I would try a bite and just be like, no, no, I don't think it's for me. [laughs] But I guess I was just so primed to do something about, like, wanting to eliminate this really inconsequential food waste. But every time it happened, I would just, you know, [laughs] be, like, oh, if only I loved pickles. And I got my friend, who is a pickle connoisseur, to help me figure out, like, what pickles I might like. So, I asked her to come up with, like, a pickle sampler for me because I really hadn't tried all too many. And that actually really helped me find which ones were a little more palatable to me. So, I found out that I liked the sweeter ones. There's, like, a bread and butter pickle that can be quite sweet. Your diner pickle can be very different from a jar of, like, fancy pickles. [laughs] JOËL: Definitely. STEPHANIE: One day, she gifted me a jar of, like, Polish gherkins that were delicious. JOËL: Hmmm. STEPHANIE: I was like, wow, I can just snack on these. So, the thing that's new is that this time, I went to an Eastern European grocery store, and I bought my own jar of pickled gherkins. And that was something that Stephanie, like, two years ago, would never even do. [laughs] JOËL: That's really cool that you got a chance to sort of explore a broader range of what was available in the pickle world and then were able to find kind of your niche there and discover something new that you actually like. STEPHANIE: Yeah, it was very fun. And now I feel like my whole world has opened up to, you know, pickley and fermented things and just, like, get to enjoy even more snacks. So, to move away from pickles, recently, on my client project, I've been pairing a lot more with other client developers. And one thing that has come up is, you know, talking about our reasoning or our thought process for when we're pairing on some code. And I realized that I have built up a lot of either intuition or maybe some rules that I like to follow when I'm writing code, writing a test, or even doing a code review. And I've realized that you know, as developers, we often use these kinds of shortcuts or heuristics to help orient us as we're doing our work. JOËL: Yeah. I think that's definitely something that either comes yourself from experience or sometimes is passed along, and you get to benefit from somebody else's experience. They learned the hard way a lot of these tips and tricks, and now they kind of pass on some of these guidelines to us. Do you have any favorites that you reach for frequently? STEPHANIE: So, one way I like to approach a problem is to start messy [laughs] and to kind of see what that gets me and then where to go from there. I find that it's a little bit easier for me to draw on things that I've, you know, learned or picked up and tips once I have something in front of me to react to. So, maybe I will just go with the naive implementation and just write all of the code in one method, you know, in a class. And from there, now that it's out of my system, can I kind of come back in with a finer tooth comb and then apply more of a sustained effort to clean things up, right? And, to me, the question I find myself asking is, like, can this be extracted further? And so, you know, if I have everything in one giant method, then yes, [laughs] there is likely, you know, many opportunities to extract that, and maybe I will see something like, oh, the way that I spaced out this code that might be a signal to me that, like, these are some ideas that are grouped together, and I can pull something out there. JOËL: Do you have a heuristic around when to stop extracting? STEPHANIE: That's a good point. I think I tend to stop when I have kind of pulled out the classes that make sense to me. And, at that point, you know, like, maybe there is more extraction that can be done. But at a certain point, you know, you then get these really tiny classes that maybe don't hold their weight. And I think that's also true of methods that then call other methods, and that's the only thing that they do. Then it's like, well, is this too extracted that it's not really giving a future reader helpful information, right? I want the extraction to improve readability. And that tends to be another lens through which I am applying to this idea of, like, can I extract further? Is this extraction helpful for understanding this code? JOËL: I like the idea of looking at the code through multiple lenses. And so, sometimes you look at it through the lens of, yeah, are there enough moving parts here? Or does it feel kind of brittle and all in one place? And then sometimes completely shifting your lens and saying, you know what? Let's put myself in the seat of someone who's looking at this code for the first time. Can I understand it? So, structuring and extracting code is a big part of the work that we do. And I also happen to have a couple of heuristics that I like to use. One is separate branching code from doing code. So, if I have an if...else condition, I try not to put ten lines of logic inside each branch; instead, I have just a call out to a method so that the only thing the conditional does is to choose which path you go, and then each individual path is its own method. Similarly, if I'm writing a method, I'm not going to have a bunch of logic then a conditional mixed in together. So, my heuristic is a method gets to do one of two things. It either gets to choose a path to take or it gets to do a thing, but you can't mix and match both. STEPHANIE: Yeah, that makes a lot of sense. I really appreciate a well-named method that is, you know, determining, like, what condition needs to happen because then that helps me, yeah, like, avoid having to hold all of this information about this condition or this other condition, and this other condition in order to figure out what path I'm trying to take. JOËL: And the naming and the readability, I think, is a big part of this. Another heuristic that I like to use that kind of converges on the same result is trying to write each method at a single level of abstraction. So, if I am writing a method that has some kind of high-level terms it's using, I'm not going to also mix in a lot of low-level implementation. And then, similarly, if it's a method that's doing a lot of, like, low-level nuts and bolts things, I'm going to try not to pull in some of these higher-level domain name methods in there. And so, by separating things out so that every method reads one level of abstraction, you make it much easier for the reader to go through and figure out what's happening. Are we kind of getting that more 10,000-foot view, getting a sense of what's happening, and saying, okay, we want to process the user form, and then we want to send off an email, and then we want to, you know, write to a file? Or are we going through, okay, we're going to increment a counter so that we get exponential back off on our [inaudible 10:28] request? Those two things do not belong together in the same method. STEPHANIE: Yeah, absolutely. I really like this heuristic. And I have been applying it more and more and found it really useful for making sure that you're handling your errors correctly, especially because, at different levels of abstraction, you want to do different things with your errors, right? An implementation error that's raised because, you know, you're calling something accidentally on nil, or maybe a third-party service is down, and you get a custom error, whatever that is, those concerns are different from how you want to handle things at the controller level. And oftentimes, I see those things really mixed together, and honestly, I think leads to a lot of buggy code when you're trying to handle things that can go wrong at the wrong level of abstraction. JOËL: Yeah. Is there a good heuristic around what level you think is best to trigger an exception? Or maybe, more generally, just being aware of different levels of abstraction and knowing that catching or triggering errors at each level will have different impacts. STEPHANIE: I think more of the latter, the having an awareness of what kinds of errors might be possible and what impact that has on the user, right? The user being either an actual customer or, you know, another developer who has to read a notification from an error monitoring service. [laughs] JOËL: This is really interesting to me because I think we've now bridged the concept of heuristics into the idea of mental models. So, the heuristic is write your methods at a single level of abstraction, but that then leads into a mental model where maybe code is structured in three or four different layers. You've got a low level, a mid-level, a high level, something like that, of abstraction. And now, you can use that mental model to start thinking about what are the impacts of exceptions at each layer? And then, maybe you complete the circle by creating a heuristic that relies on that mental model, maybe, I don't know, raise in the low-level rescue at the top level or something. I'm making something absolutely arbitrary up right now. But somehow, we've gone from heuristic, which creates a mental model, which then allows us to build new heuristics on top of that, and that seems like a virtuous cycle to me. STEPHANIE: Yeah, absolutely. I think what I'm also picking up is the idea that you do need a mental model, or you do need to draw on your own ideas about something in order to apply the heuristic, right? You know, someone could tell you to separate branching code from doing code. But maybe you don't know what that means or, like, maybe you don't see why that's important. And sure, you can still apply it and try your best to follow it. But, in some ways, I think that the best heuristics are ones that you've kind of developed for yourself based on your own experience. JOËL: That's really interesting. I think once you've built from your own experience, I definitely feel like they're really impactful because you've kind of synthesized 2, 5, 10, 20 years of experience doing some of this work into, oftentimes, like, you know, a pithy one-line sentence, 5, 6 words that convey an approach that you've found works best, you know, maybe 80% or 90% of the time. The power of synthesis for your own self-learning I think it's really hard to understate. So, I'm curious if there's any other heuristics that you commonly use that you kind of created yourself based off your own experience rather than just having it be more of a broadly received idea from the community. STEPHANIE: I think, for me, it's more so that the experience has helped affirm certain heuristics and also made me feel more comfortable with letting others go. And one that I heard a lot but, like, didn't quite understand until really working through it deeper is the idea of feeling pain when you write a test, and that being a signal of opportunities to try different design with your code. And I just didn't know what that pain was at the beginning. Like, what does that even mean? [laughs] Like, how can a test cause me pain? But on my own, I realized, oh, like, actually, I get really frustrated when I need to stub out a whole method chain, right? And I find myself having to go look up how to do that or just spending a lot of time having to do something that I haven't done before. Maybe the pain comes from having to change a lot of files because, oh no, like, I also broke 20 other tests in the process. But when you're first starting out, oftentimes, you, like, don't know that that is not normal [laughs]; at least, that was true for me. And so, that was something that I had heard about, like, if you are feeling pain when writing a test, then, like, maybe reconsider your code design. But when you don't know how to identify what that pain is, and you also, like, don't know where to go from there, I find that, you know, the heuristic can only help you so much. JOËL: Yeah. Maybe that's something that's challenging with a heuristic in that they're often expressed as these pithy sentences. But if you're not familiar with some of the underlying concepts, that might make them harder to apply, which is unfortunate because, oftentimes, these heuristics that we've developed as a community are targeted to newcomers to help them kind of avoid the mistakes that we've made along the way. STEPHANIE: I think what really helped me the most in connecting a heuristic that's commonly expressed and my own experience is when I've had someone ask me about how I'm feeling when I'm, you know, making some kind of decision or when I'm reading some code. Like, what do I think of this, or what has been my experience with this? And giving me the opportunity on the spot to synthesize that information. Because otherwise, it's hard to figure out, you know, like, what is just normal? This is just life as a developer [laughs]. And what are opportunities to maybe gain some more insight about the work itself? JOËL: One thing that I've learned over time as a developer, and I'm not sure if this quite rises to the level of a heuristic, but a lot of, like, pain and frustration in development doesn't necessarily have to be that way. And it's not necessarily because I'm bad at the job or I'm too new to the technology or whatever. It can often be a sign of underlying design issues or the fact that the system was modeled with certain assumptions that are no longer true. These can often be signals that you can make things better. So, I think if I had to reduce this idea down to a clever one-liner, it'd be something along the lines of, it can be better, or it doesn't have to be this bad. You're writing a test, and it's really annoying. There might be a better way to structure the underlying code that would make the test better. You're having to do some, like, really clunky code to deal with something. Is there maybe a better object design that would make a lot of that pain go away, or at least kind of quarantine it in a certain part of the codebase? STEPHANIE: I actually think you're really onto something because what I was just hearing, I love that, like, it can be better. It's less prescribed, I guess, than some other heuristics, like, you know, do not repeat yourself, or whatever. JOËL: Classic. STEPHANIE: [laughs] It really encourages, like, the individual to think a little deeper. And it actually reminded me of another...this is actually a bit of a pithy saying, but I find it to be really useful. And I'm curious if you've heard it before. It's a systems thinking heuristic, and the phrase is, the purpose of a system is what it does. JOËL: Ooh, I have heard that, and I'm trying to remember what context. STEPHANIE: So, it was coined by a systems thinking expert. Stafford Beer, I think, is his name. And I recently learned about it from a friend. But I think the cool thing is that it can be applied to literally anything [laughs] because everything is a system, you know, or not just software. But I have found a lot of value in applying it to just, like, is this function doing what it says it does, right? Or is it actually also doing, like, a side effect? And turns out, maybe we want to bring that into alignment with what the name of the function is, or try pulling that out, or whatever. I think it can also be true of test suites. I don't know if this is a heuristic or not. But the idea that we should always be testing or all tests are good, yeah, I guess that could pass as a heuristic. By bringing this perspective of the purpose of a system is what it does, it's like, well, is the test suite also so bloated and takes so long and so flaky that it is actually hindering development? And if that is the case, then maybe there is some reevaluation necessary, right? Rather than just claiming that it's helping us have more confidence in our code when that may or may not be true. JOËL: You brought up an interesting idea here, which is that heuristics aren't always right. So, you're talking about the idea that a heuristic like good code is tested code might not be correct in 100% of the cases. Like, how accurate does a heuristic need to be in order for it to be really valuable? You know, you're hoping for something that's, like, 90% correct that you can follow most of the time, except in some edge cases, or something maybe as low as, like, 50% where it's a coin toss whether the heuristic applies in the situation or not. Are those still useful? Or are they maybe more confusing than otherwise? STEPHANIE: Oh wow. That's a really interesting way to frame it because I don't know if I've ever stored information about how well my heuristics are serving me. [laughs] But I do really like the idea that you can use a heuristic as a guiding principle just to try and that you can always back out of it, right? So, if you're wanting to take DRY to the extremist of extremes, just for fun or just to see how that might go, you can go down that path and, at any point, decide, okay, like, I like this, or I don't like this, and choose a different path. But the idea of kind of tracking, like, how well they're working for you that is really interesting to me, and not something I've tried before. JOËL: I love the idea of taking a heuristic and, like, doing a side project whose whole goal is just to kind of push that heuristic to the extreme, to the breaking point so that, that way, you get an intuition of, like, when does it work for you? When does it not? That sounds like a really fun exercise for someone to do. Is that something that you've done yourself? STEPHANIE: Not to the point of a whole side project, but just like I like to try pickles randomly every now and then to see if I like them, [laughs] will just try a new technique and see how it goes. In an episode a while back, we talked about whether we TDD or not, and, to be honest, I don't do it, you know, 100% of the time or all the time. But one day, I did decide to TDD a full-stack feature from start to finish just for fun [laughs], and I enjoyed it. I learned some things about it. And I think now I've kind of integrated the parts that I liked about it into my development flow. Like, I'm not always going to do it. But I think it also just helped me figure out, like, okay, like, what is this thing about that people claim that is the pinnacle of how we should be writing our code? And how can I decide for myself, like, whether it works for me or just pick and choose the parts of it that work for me? JOËL: Yeah. That just seems like a really valuable exercise. There are definitely too many heuristics out there to do that for everything. But I guess I've never thought of it quite so concretely. But I almost wonder if I should, like, add this to my kind of personal growth plan to say, like, once a year, I'm going to take a heuristic and kind of push it to an extreme and see what I can learn about it. STEPHANIE: I actually think what's really cool is the process of, like, any individual developer figuring out what kinds of heuristics they want to follow, as opposed to, you know, like, a mass proclamation that, like, this is the way, right? Are there any heuristics that you have maybe picked up and then let go of because you realized that, you know, they weren't working enough or frequently enough for you or that you just didn't like? JOËL: I don't know about, like, fully letting go, but definitely kind of recontextualize and sometimes even sort of rewrote them a little bit to work for me. So, a classic one would be the idea that shorter code is more readable. So, it's common to see comments on a pull request sort of like, "Hey, you could make this shorter by doing this." And that can be true to a certain extent. When you get to the point where you're playing code golf, it becomes absolutely unreadable. But also, there's a point where sometimes using some other heuristics will result in longer code but actually make it more readable on the whole. And so, packing everything into one method might be overall shorter, so it's fewer lines to read going through a class. But maybe extracting some methods or doing that separating branching code from doing code might lead to an overall longer class but an also overall more readable one. So, I think there's probably a lot of caveats that go with that idea. Oftentimes, shorter can be more readable with, you know, two or three asterisks that maybe go a little bit more into the why that is the case. STEPHANIE: Yeah. I like the contextualizing. That actually reminded me of a talk that I watched recently by Hillel Wayne. It's called Intro to Empirical Software Engineering. And he basically, like, does a deep dive into all these studies about software practices that we think are, quote, unquote, "good," like, as a community or as an industry. And it's like, well, like, how do we actually know? Like, show me the research, right? And one of the studies that he included was trying to determine if using abbreviations for variable names or using the full words made the code easier to debug or not. And so, the main example that he was using was employee number as a variable, and the abbreviation was EMP num. And it turns out that there was no difference in how easy it was to debug. But the approach that each group that was studied differed. So, the folks who had the full names, the full words for the variable names, were kind of using an approach of just scanning the code and being able to understand at a higher level the domain, right? Whereas the folks who were debugging with just abbreviations had to work at a bit of a lower level and, you know, or maybe using breakpoints and debugging the code that way. And I thought that was really cool because, first of all, I think it kind of was trying to prove that, like, we don't actually know if one is better or not. But what is important and interesting to me is the idea that, like, you can choose the method that you like better or that works for you and the human side of it, right? The impact it has on our process. JOËL: That's really cool. I'll have to go and watch that talk. Building this kind of context and nuance around a heuristic, though, takes a lot of time, takes experience. And part of the value of a heuristic is that we're collapsing down maybe our own experience or somebody else's experience into something that doesn't require you to necessarily do all that work upfront. How do you feel about sharing and kind of targeting a lot of these heuristics to newer coders who are kind of trying to get better at their craft and looking for ways to improve without necessarily having to do, you know, five years of experience digging into a particular topic? Do you think heuristics are helpful, or do they maybe mislead? STEPHANIE: I really value when they're presented as an opinion, as opposed to a true fact about code. [laughs] Because I really appreciate when someone is able to explain to me why they chose readability in this particular scenario or why they chose speed and performance. Or maybe they were making a trade-off between accessibility and, you know, something else. To just, like, tell someone, "Oh yeah, like, DRY code is better code," or to just tell someone that without the explanation with, like, offering them the opportunity to reflect themselves on, like, oh, like, where have I seen DRY code that was easier for me to read? That seems a little less helpful in terms of investing in their growth. JOËL: Yeah. Definitely, I think sharing some of the purpose behind it can often be really useful because most of these heuristics are never an end unto themselves. They're a means to some other end. So, you're not writing code that's DRY just because you want to be cool. You're writing code to be DRY because you're trying to improve readability, make it easier to change so you don't have to change it in multiple places. You want to maybe reduce the chance of certain types of bugs. These are all actual purposes of what you want to do in your code. DRY is just one way of getting there. But oftentimes, we might skip that part and just be like, hey, you should make your code DRY because DRY is the best. And it can be, but it's in service to these other goals. STEPHANIE: I think when I am sharing those types of heuristics that are more commonly held, I also do like to preface, like, some people think this, or some people like to do things this way, just to be clear that they don't have to like it or do it. In general, I always prefer injecting more nuance [laughs] into the conversation. But yeah, like, it is a really personal process, I think, and figuring out, like, how any individual makes decisions about, like, all the code they're writing. You have to make a million [laughs] decisions every time you do it. So, yes, like, those heuristics do provide a shortcut. And also, I think it's worth taking the time to think about if it's working, especially for the specific context that you're applying it, right? Because that also can change. And, I don't know, maybe I'm just skeptical of any one size fits all solution. JOËL: I think for myself, with many heuristics, as a beginner coder, I had a bit of, like, a spiral journey, or maybe kind of going up a set of stairs. So, as a brand-new developer, I would make a lot of duplication bugs in my code, where, you know, I would have the same value in multiple places, and then I'd change it in one place, and I don't remember to change it in other places, and the code breaks. And so, being introduced to the idea of DRY actually helped my code get quite a bit better. It was, like, a net positive on my experience because I was not getting burned by all these bugs quite so frequently. And so, for a while, just throwing more DRY into my code just made my life better. And then, eventually, you kind of hit that plateau where I don't run into the pain of these bugs anymore. But now I keep doing more DRY somewhat mindlessly. And I end up with this pile of abstractions that are actually really brittle or frustrating to work with. And now, I have to rethink some of the assumptions behind the heuristic. And then, at that point, yep, maybe recontextualize a little bit, learn about when it's good, when are the trade-offs not worth it. Now I have a better understanding, and I kind of go on another growth bit where it makes a lot of my code better until maybe I hit another plateau. I've kind of maxed out the benefits. I start seeing some of the pain, and then, again, I have to go through this cycle again. And maybe the approach you were talking about earlier, where you do a side project and kind of push a heuristic to its breaking point, is a way to kind of speed run that process. STEPHANIE: Yeah, that's really interesting because you're just committing to it and trying to learn everything you can from it in a very concentrated setting. I also wonder, and it's totally fine if you don't know, but if someone had told you kind of all of those reasons you listed about why DRY code, like, what that achieves, if that may have reframed how you were thinking about applying it. Or was that also something that had to come from doing it enough? JOËL: I think as a brand-new developer, a lot of that would have gone over my head. I was still really shaky on the concept of abstraction. When is it useful? When is it not? So, a lot of those more subtle pitfalls, I think, would not have been relevant to me at that point in my career, even the concept of readability, right? When I'm a brand-new programmer, I'm still getting used to reading a lot of code. And so, the idea that code might be written in a way that's unreadable or more challenging to read, it might just feel like, oh, I just need to get better, improve myself. It's not that the code is written in a hard-to-read way. It's just I don't have enough experience at reading code. And I think that's a common thing that we do as beginners at everything, right? We start by blaming ourselves when things get hard. STEPHANIE: Yeah. I was just thinking that, you know, if you are sharing heuristics with a newer developer or an early-career developer, at the end of the day, like, really, I'm not sure about the value of just dropping it on them and letting them run [laughs] with it. But I think what could be really, really effective is just having a sustained relationship with them and, like, continuing that conversation. It's, like, maybe in a code review or in a pairing session being like, "Oh yeah, like, I see you're practicing DRY. Like, what do you think about how this made this piece of code different?" And kind of baking in that process of self-discovery along the way and speeding it up in that way as well. JOËL: So, what you're really saying is the one heuristic to rule them all is code in community. STEPHANIE: I love that. I'm totally with you. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

The Bike Shed
389: Review Season

The Bike Shed

Play Episode Listen Later Jun 20, 2023 33:45


Stephanie just got back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. Joël started a new project at work. Review season is upon us. Stephanie and Joël think about growth and goals and talk about reviews: how to do them, how to write them for yourself, and how to write them for others. Blue Ridge Ruby (https://blueridgeruby.com/) Impactful Articles of 2022 (https://www.bikeshed.fm/369) Constructive vs Predicative Data by Hillel Wayne (https://www.hillelwayne.com/post/constructive/) Parse, don't validate by Alexis King (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) Working Iteratively (https://thoughtbot.com/blog/working-iteratively) thoughtbot's 20th Anniversary Live AMA (https://thoughtbot.com/events/ama-developers-20th-anniversary) 20th Anniversary e-book (https://thoughtbot.com/resources/20-for-20) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And, together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I just came back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. And I had a really great time. JOËL: Oooh, I'll bet this is a great time of year to be in Asheville. It's The Blue Ridge Mountains, right? STEPHANIE: Yeah, exactly. It was perfect weather. It was in the 70s. And yeah, it was just so beautiful there, being surrounded by mountains. And I got to meet a lot of new and old Ruby friends. That was really fun, seeing some just conference folks that I don't normally get to see otherwise. And, yeah, this was my second regional conference, and I think I am really enjoying them. I'm considering prioritizing going to more regional conferences over the ones in some of the bigger cities that Ruby Central puts on moving forward. Just because I really like visiting smaller cities in the U.S., places that I otherwise wouldn't have as strong of a reason to go to. JOËL: And you weren't just attending this conference; you were speaking. STEPHANIE: I was, yeah. I gave a talk that I had given before about pair programming and nonviolent communication. And this was my first time giving a talk a second time, which was interesting. Is that something that you've done before? JOËL: I have not, no. I've created, like, a new bespoke talk for every conference that I've been at, and that's a lot of work. So I love the idea of giving a talk you've given before somewhere else. It seems like, you know, anybody can watch it on the first time on YouTube, generally. But it's not the same as being in the room and getting a chance for someone to see you live and to give a talk, especially at something like a regional conference. It sounds like a great opportunity. What was your experience giving a talk for the second time? STEPHANIE: Well, I was very excited not to do any more work [chuckles] and thinking that I could just show up [chuckles] and be totally prepared because I'd already done this thing before. And that was not necessarily the case. I still kind of came back to my talk after a few months of not looking at it for a while and had some fresh eyes, rewrote some of the things. I was able to apply a few things that I had learned since giving it the first time around, which was good, just having more perspective and insight into the things that I was talking about. Otherwise, the content didn't really change, just polished it further. I think in the editing process, you could edit forever, really. So I imagine if I revisit it again, I'll find other things that I want to change. But this time around, I also memorized my slides because, last time, I was a little more dependent on my speaker notes. And part of what I wanted to do this time around, because I had a little more time in preparing, was trying to go from memory. And that went pretty well, I think. JOËL: How did you feel about the delivery of it? Because now you had a chance to have a practice run in front of a real audience. And, as much as you practice at home in front of the mirror, it's not the same as actually giving a talk in front of an audience. STEPHANIE: Yeah. I was surprised by how the audience is also different, and the things that they'll react to is slightly different. There were some jokes that landed similarly and others that didn't land a little bit with this crowd, but maybe other parts, there was more of a reaction. So that was surprising. And I think I had to kind of adjust those expectations on the fly as I delivered whatever, you know, line I was kind of expecting some kind of reaction to. And I also, other than memorizing my slides, you know, I think had the mental capacity to focus a little more on the delivery component that you're talking about because I wasn't, you know, up until the last minute still working on the content itself, and just being able to direct my mental energy to, I guess, the next level of performance when giving a presentation. And, yeah, I would definitely give this talk again. I really liked that it was something that feels pretty evergreen, something I care a lot about. I don't think it will be a topic that I get kind of bored of anytime soon. So those were all some of the things I was thinking about in giving a talk a second time. JOËL: When you write your speaker notes, do you give yourself directions for expected audience reactions, so something like a pause for laughter after a joke or something like that? STEPHANIE: No. I think I am too nervous about presuming [laughs] how the audience will react to put something in and then have to be, like, super surprised and figure out what to do if they don't react the way that I think they will. So it ends up being that I just kind of go forth. And if I do get a reaction out of them, that's great. But not expecting it works for me because then, at least, I can control how I am presenting and how I'm showing [chuckles] up a little bit more. JOËL: So you're really working with the energy in the room then. STEPHANIE: Yeah, I think so. JOËL: Was this talk recorded? So if people in the audience want to go and watch this talk. STEPHANIE: Yeah. The first version that I gave of it is online if you search for the title "Empathetic Pair Programming with Nonviolent Communication." And this version was recorded as well. So, eventually, it'll also be up. And, I don't know, maybe I'll watch it back and [chuckles] see the difference in presentation. I would be very curious. I've never watched any one of my conference talks fully through the recording from start to end before. But I know that that's something that I could continue to improve on. So maybe one day I'll find the confidence. My other highlight that I wanted to share about this regional conference is how well-organized it was. So it was mainly organized by Jeremy Smith, and I thought he did such an awesome job. He organized a bunch of activities in Asheville for the Saturday after the conference if folks wanted to stay a little longer and just check out the city. There was a group that went hiking, a group that did a brewery tour. And the activity I chose to do was to go tubing. JOËL: Fun. STEPHANIE: Yeah, it was my first time. So you're basically in an inner tube floating down a very calm river, just hanging out. You...we were on the group, and you could clip yourself to the rest of the group so you're all, you know, kind of floating down together. But some people would unclip themselves and just go free for a little while. And, yeah, when you get too hot, you can dip into the water to cool off. And I just had such a great time. [laughs] It was almost like being on a Disney ride but out in nature, which I just, like, is totally my jam. JOËL: I tried tubing once in Texas. And the inner tubes are black, and in the Texas sun, they get really hot. So every, I don't know, 20 minutes or so, I had to get off the inner tube. It was too hot to sit on. And I had to flip it just because it absorbed so much heat. STEPHANIE: Wow. Yeah, that does sound like it would get very hot. I think the funny thing that I wasn't expecting was how hard it would be to get back into the inner tube after you had gotten in the water, at least for me, because the inner tubes were quite large. And so I couldn't get enough leverage to pull myself [laughs] back up onto it, and ended up several times just, like, flopping belly first into the inner tube and then having to, like, flop over so that I could be on my back and be sitting in it again. And other times that I had to wait a little while until the river got shallower so I could actually stand and just sit in it. So there were times that it was kind of a struggle, but 90% of it was very chill and fun. So, Joël, what's new in your world? JOËL: I started a new project at work. I'm working with a data warehouse, pulling data in from a variety of sources, getting it all into one kind of unified schema, doing some transformations on it. And then also setting up some sort of outgoing plugins to allow different sources to access that unified data. So this is not in a Rails app, but we do have a Rails app connecting to this data warehouse. Data engineering is, at least in this style, is newer to me. So I think it's a really interesting world to get into. I don't know if, technically, this counts as big data. I don't think the term is cool anymore. But five or so years ago, everybody was all about the big data, and that was the hip term to toss around. STEPHANIE: So, is this something pretty new to you? You haven't had too much experience doing this kind of data engineering work before? JOËL: Yeah, at least not with, like, a data warehouse. I think a lot of the work around data transformations, or creating unified schemas, thinking in terms of data in different stages that are at different levels of correctness...I've done a fair amount of ETL, Extract, Transform, Load, or sometimes people shift it around and say, ELT, Extract, Load, Transform. I've done a fair amount of those because I've done a lot of integrations with third-party systems. STEPHANIE: So I've always thought of data engineering as, in some ways, a separate role or a track. And I'm really curious about you having, you know, mostly been doing software development if that gives you an interesting lens to look at these problems. JOËL: So, to get the full answer, you should probably ask me again in six months. STEPHANIE: That's fair. JOËL: Initial thoughts is that there's a shocking amount of overlap between some of these ideas, again, because I've done ETL-style projects a lot. You know, if you've got any kind of Rails app and you're integrating with a third-party API, you're often doing ETL at a very small level. To a certain extent, even if you're doing, let's say, some front-end code, and you're interacting with a back end, depending on how you want to deal with that transformation of getting data from your API, you might be doing something kind of like an ETL. Designing types in something like a TypeScript or an Elm and thinking in terms of the data that you have, the transforms that you're doing has a lot of similarities to what you would do in a data warehouse. I think a lot of the general ideas apply. I know I talked at the beginning of this year articles that were impactful for me. And one of those articles that was really impactful was Hillel Wayne's "Constructive Versus Predicative Data," which is all about structuring data and when you can enforce constraints via the data structure versus when you need to enforce it via code. Similarly, a lot of the ideas from the article "Parse, Don't Validate" by Alexis King. The articles focused on designing types. But it also, I think, applies to when you're thinking of schemas because schemas and types are, in a sense, isomorphic to each other. STEPHANIE: I like what you said there about as a software developer; you've probably done this at a much smaller scale. And, yeah, like you were saying, things that you had already learned about before or thought about before you're able to apply to this different set of problems or, like, different approach to programming. Is there anything that has been challenging for you? JOËL: Yes, and it's a weird one. Because we're working with enterprise systems, navigating the websites for these enterprise systems and the documentation for them is not a pleasant experience, trying to get a feel for how the system is made to work. It's just so different when you're used to tools and documentation written by the open-source community. Even third-party tutorials and things it's never, like, oh, here's a great article where you can scan and find the thing that you want. It's, hey, I'm a consultant guru on this thing. Sign up for my webinar, and you can have a 15-hour course on how to use this tool. And that's not what I want to do. I just want give me the five-paragraph blog post on how to do data imports, or how to set up a staging area for data, or something like that. STEPHANIE: Right. You're basically being asked to develop skills in using the enterprise software rather than more general skills for the problem or task; it sounds like. Because apparently, there are people making a business out of teaching other people how to use or navigate the software. JOËL: And I think that's fine. I love that people are making businesses of teaching these. But just the way things are structured, information is not generally as available for this large enterprise software as it is in the open-source world, and even when it is, it's just different patterns of access. So even you go to a particular technology's website, and it's all marketing copy. It's all sales funnel and not a lot of actually telling you really what the technology does. It's all, like, really vague, you know, business speak on, you know, empowering your team, and gathering insights, and all this stuff. So you really do a lot of drilling down. And what you need to find is the developer site. That's where you get the actual tech documentation. Depending on the tech, it's more or less good. But yeah, the official website of the technologies is just...it's not aimed at me as a developer. It's speaking to a different audience. STEPHANIE: That is interesting. I didn't realize that once you are, you know, working on a data warehouse, it is because you are consuming so many different external sources of data, and having to figure out how to work with each one is part of the process to get what you need. JOËL: So there's the external services but the data warehouse itself that we're using is an enterprise product. STEPHANIE: Got it. JOËL: So, just figuring out how this data warehouse works, it feels like it's a different culture, a different developer culture. STEPHANIE: That's cool. I'll definitely ask you again in a few months, and I look forward to hearing what you report back. So the other topic that I wanted to get into today is reviews, specifically self-reviews. To be honest, our review cycle is happening right now. And I have very much procrastinated [chuckles] on writing them until, you know, one or two days before. So I came into our conversation today, like, in that mind space of thinking about my growth, and my goals, and that kind of stuff. And it got me thinking that I don't hear a lot of people talk about reviews, and how to do them, how to write them for yourself, how to write them for others, how people approach them. Though I would guess that the procrastination part is pretty common, [chuckles] just based on what I'm hearing from other folks on our team too, and what they're up to for the next couple of days before they do. Joël, have you written your review yet? JOËL: So it's interesting because this review cycle has a few different components. You write a self-review. You write a review of your manager, and then you write a review of several of your peers who have nominated you to write a review. So I've done my own review. I've done my manager's review. I've not completed all of my peer reviews yet. STEPHANIE: That's pretty good. That's better than me. I've only done my own. [laughs] So, yeah, the deadline is coming up. And I'll probably get back to it right after this. I'm curious about your process, though, for writing a self-review. Do you come into it having thought about how you've been doing so far in the last six months or so? Or, when you sit down to write it, are you thinking about these things for the first time in a while? JOËL: Combination. So I think I do come in without necessarily having, like, planned for the review cycle. That being said, throughout the year, I try to build a fair amount of, like, personal self-reflection, professional self-reflection at various points throughout the year. So I'm not coming into the review cycle being like, oh, I have not thought about professional growth at all. What have I done this year? I think one thing I haven't done quite as well is when I'm doing these moments of self-reflection on my own throughout the year, writing down notes that I could then use to apply when the review cycle comes up. So I am having to rely on memory on, like, oh yeah, last month, when I kind of sat down and thought about areas that I want to improve in or areas that, like, what are my goals that I want to have? And I just commit that to memory. So, yeah, I think live in the moment; now that you've asked me this question, you've made me think that maybe I should be taking more regular notes about this. STEPHANIE: One thing I've been really liking about the software that we're using for reviews and other professional growth things is...it's called 15Five. And you can give your co-workers shout-outs using this tool. And as I was writing my review, I could actually open all of the kudos and shout-outs that I received from my peers and just remember some of the things that I worked on or a lot of the things that other people noticed. I tend to sometimes have a hard time remembering some of the smaller things that I've done that made an impact, but other people are usually better about pointing that out than I am. [chuckles] And that has been really helpful because it's, yeah, nice to see like, oh, like, you know, so and so really appreciated when I paired with them on, you know, debugging this thing. And maybe I can pull that into something that I'm writing about the kind of mentorship I've been doing in the last few months. JOËL: How do you feel about the aspect where you have to then give feedback on colleagues? STEPHANIE: I really value and enjoy this aspect because most of the time, I am just gassing my colleagues up [chuckles] and writing, you know, really encouraging things about all of the awesome work that they're doing. So, for me, it actually feels really good. And I was thinking a little bit about my approach to reviewing my peers and review culture in general. I have worked at companies where we have had a very, like, healthy and positive review culture. So it happens often enough that it's become normalized. It's not a really scary thing. And I also like to think about feedback in two types, where you have feedback that you want to give someone so that they can change behavior in a way that helps you work with them better, and then feedback you have for someone for their growth. And once I separated those two things, I realized that really, the former, if you're, you know, giving someone constructive feedback because you maybe would like them to be doing something different. That's not necessarily what you want to be writing in their annual review. Those things are usually better communicated in a more timely manner, like, right when you are noticing what you might want to be changed. And so then when you are doing reviews, like, you've hopefully already kind of gotten all of that stuff out of the way. And you can just focus on areas of growth for them, which is the fun part, I think, in reviewing peers because, yeah, you can give some suggestions to further support them in, like, where they want to go. JOËL: I like that distinction between just general growth, suggestions, and then interaction suggestions. And just to give an example, it sounds like interaction suggestions would be like, "Oh, when we pair, I would like it if you used this style of communication from, let's say, nonviolent communication. Here's a talk; go watch it." STEPHANIE: [laughs] Yeah, I did talk on this; go watch it. There used to be a framework for reviews that I've done before that I actually don't quite like. It's the Stop, Start, Continue framework where you answer questions about, okay, what should this person stopped doing? What should they continue doing? And what should they start doing? And the things that you would put in stop, I think, are probably what you would want to have communicated in a more timely manner, like, not necessarily it happening, you know, really divorced from whatever behavior you might be asking. And, in general, I think focusing on what you would like others to be doing instead is usually a better approach to handling that kind of feedback just because it avoids making someone feel bad about having done something wrong and, instead, kind of redirecting them into what you would like them to be doing. JOËL: So you're saying if you have something in the stop category, let's say stop interrupting me all the time when we're in meetings, you're saying this is something you prefer not to bring up at all or something that you prefer to bring up one on one and not in the context of review? STEPHANIE: Something to bring up one on one. Ideally, pretty soon after, that might have happened. It's a little more top of mind. And then you don't end up in that position of maybe misremembering or having the other person misremember and having to figure out, like, who was in the right or in the wrong in understanding how that interaction went. Especially if you're able to do it a little sooner after it happened, you can point out, like, hey, this happened. And instead of framing it as please stop interrupting me, you could say, "Could you please make some space for some folks who've been a little more quiet in the meetings to make sure that they've been able to share?" Still, I think once you've made more space to give that kind of constructive feedback when you are writing reviews, you can then, like, focus on the growth aspect and not the redirection of how others are doing their work. JOËL: That makes sense. So, what would be an example of the kind of feedback that you like to give to other people in the context of a review? STEPHANIE: Yeah, I think especially if I know what someone is wanting to focus on, right? If I'm working with someone, hopefully, we've kind of gotten to talk about what they like to work on, what they don't like to work on, what they are hoping to spend more time doing, or yeah, just their hopes and dreams for their professional [chuckles] development, being able to point out some things that they maybe haven't thought about trying it I really like to do. I was thinking about a time when I gave a co-worker some feedback as a mentee of theirs where they had been really awesome at providing information to me about things that I was unfamiliar with. But one thing that I was really hoping for was more tools to figure things out on my own. So instead of sending me a link to some documentation, maybe helping me figure out how to search for the documentation that I'm looking for. And that was something that I could share with them because I knew that they wanted to work on their mentorship skills and an opportunity, I think, for them to take it to a level where it's closer to coaching and not just providing information. JOËL: That makes a lot of sense. Maybe flipping it around, is there a point in time where you've received a review feedback that has been really valuable to you or really helped you hit the next level in your career? STEPHANIE: I really appreciate feedback that encourages me when I'm maybe a little bit too timid to go seek the things out myself. So there were times when I received some feedback about how great of a leader I could be before I thought I was ready to be a leader. And they pointed out the qualities of leadership that I had demonstrated that led them to believe that I would be ready for a role like that. And that was really helpful because I don't think that was even necessarily a short-term goal of mine. And it took someone else saying, "I think you're ready," that made me feel a lot more confident about opening that door. I guess this is all to say that I really love review season because of, you know, all of the support I get from my co-workers. And, yeah, just remembering that it's not just a journey I have to take all by myself, that the point of working with other people is for all of us to help each other grow. JOËL: I think something that you mentioned earlier really connected with me, the idea of trying to give feedback in the...even, like, feedback that's about changing or improving, phrasing it in a more positive way, or at least framing it in a more positive way. So here's an opportunity for growth rather than here's the thing you're doing wrong. Because that reminds me of two pieces of review that I got when I was a fairly junior developer that have stuck with me ever since. And one of them was really a catalyst for growth, and the other one kind of haunted me. So this first one I got, someone in a review just mentioned that they thought that I was just generally a slow developer, just not fast at writing code. Not a whole lot of context; just that's who I was. And, in a sense, it was almost like I'd been given this identity, like, oh, I am now Joël, the slow developer. And I didn't want that identity. So I'm kind of like, I want to refuse to accept it. But at the same time, there's always that self-doubt in the back. And now, anytime I'm on a project with someone else, I'm comparing, oh, am I shipping stories quite as fast as someone else? And if not, why? Is it because I'm a slow developer? Or if I'm having a rough day and I'm not getting the ticket done that I was hoping to get done by the end of the day, you know, you just get that voice in the back of your head that's like, oh, it's because you're a slow developer. Someone called that out last year, and they were right. So, in a sense, it kind of haunted me. On the flip side, I once got some feedback talking about an opportunity for growth. If I focused on working in more iterative, incremental chunks, it would help have a smoother workflow and probably help me work faster as well. And that was really kind of an exciting opportunity. It's also stuck with me for years but not in the sort of haunting sort of way or this, like, bring in self-doubt but more in terms of opportunity. Because now I'm always like, oh, can I break this down into even smaller chunks? Would that help me move faster? Would that help me be less blocked on other people? Would that be easier for our QA team? Would this be easier for review for my colleagues? Just a lot of different opportunities for benefits with working in smaller iterative chunks. And, for years, I've just been kind of honing that skill. And now, looking back over, you know, a decade of doing this, I think it's one of the best skills that I have. And so, in a sense, I feel like both of these people that left me that review, in a sense, they're trying to get me to maybe have a slightly higher velocity. But they're different approaches, radically different in terms of how it impacted me as a person. STEPHANIE: Yeah, I am really glad you brought that up. Because I definitely have also received, quote, unquote, "constructive feedback," but maybe wasn't phrased in the right way, that also haunted me. And it doesn't feel good. I think that that sucks. That person wasn't really able to frame it in a way that pushed you to progress in the positive way that you mentioned with learning to work incrementally. And in fact, I almost think that the difference in those two phrasings is encapsulated by a framework for giving feedback that's actionable, specific, and kind. So suggesting you to work incrementally is all of those things, especially if they know that you do want to increase your velocity. But you're being supported in doing it in a way that is positive and growth-oriented as opposed to, like, out of fear that other people think that you are a slow developer. And, you know, that's certainly a way that people are motivated. But I would say that that's not the way that we want to be motivated. [laughs] JOËL: I'm glad we're having this conversation because I think it just reinforces to me just the value of good communication skills for developers. And, you know, you can see that when developers have to write documentation, or even things like comments or commit messages. You see it when developers write blog posts. So it's really valuable to work on your communication skills in a lot of these technical areas. But reviews are a very particular area where it's easy to maybe have not the impact that you wanted because you communicated a core idea that's probably right, but just the way it was communicated was not going to have the impact that you're hoping for. And so getting good at communicating specifically in the area of reviews, which I assume most of us in the software industry are doing on a semi-regular basis, is probably a good tool to have in your professional tool belt. STEPHANIE: Absolutely. JOËL: We recently hit a big milestone at thoughtbot, where thoughtbot turned 20 years old in early June. And so, throughout June, we've been doing a lot of fun internal things and some external things to celebrate turning 20. And one of those is we're hosting a live AMA with a variety of thoughtbot devs. That's going to be on Friday, June 23rd, so a couple of days after this podcast goes live. So, to our listeners, if you're listening to this, in the first few days after it goes live, you get a chance to join in on the live AMA and ask your questions of our team as we celebrate 20 years. There's a blog post with all the details, and we'll link to that in the show notes. STEPHANIE: One other thing that I think we're doing that's really cool for our 20th anniversary is we published a short ebook with a curated collection of 20 hits from our blog, the thoughtbot blog, over the course of its history, some of the more popular and impactful blog posts that we've ever published. So I highly recommend checking that out. You know, the thoughtbot blog is such an awesome resource. And I discovered a few things that I hadn't read before on the blog from this ebook. So that will also be linked in the show notes. JOËL: I mentioned earlier how one of my opportunities for growth through review was getting better at working iteratively. And, a couple of years ago, I took a lot of the lessons that I'd learned over the years of getting better at working iteratively, and I put them in a blog post, and that blog post made it into that 20th Anniversary ebook. So we can probably link the blog post itself in the show notes. But also, if you're picking up that ebook, you'll get a chance to see that article on my lessons learned on how to work iteratively. STEPHANIE: Awesome. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.

The Changelog
News: Stack Overflow's architecture, Lobsters' killer libraries, Linux is ready for modern Macs, what to expect from your framework & GoatCounter web analytics

The Changelog

Play Episode Listen Later Feb 27, 2023 7:21


Sahn Lam details Stack Overflow's monolith/on-prem architecture, Hillel Wayne asks the Lobsters community for killer libraries, Linux 6.2 is ready to run on M1 Macs thanks to Asahi Linux, Johan Halse writes up what to expect from your web framework & Eli Bendersky on using GoatCounter for blog analytics.

Changelog News
Stack Overflow's architecture, Lobsters' killer libraries, Linux is ready for modern Macs, what to expect from your framework & GoatCounter web analytics

Changelog News

Play Episode Listen Later Feb 27, 2023 7:21 Transcription Available


Sahn Lam details Stack Overflow's monolith/on-prem architecture, Hillel Wayne asks the Lobsters community for killer libraries, Linux 6.2 is ready to run on M1 Macs thanks to Asahi Linux, Johan Halse writes up what to expect from your web framework & Eli Bendersky on using GoatCounter for blog analytics.

Changelog Master Feed
Stack Overflow's architecture, Lobsters' killer libraries, Linux is ready for modern Macs, what to expect from your framework & GoatCounter web analytics (Changelog News #33)

Changelog Master Feed

Play Episode Listen Later Feb 27, 2023 7:21 Transcription Available


Sahn Lam details Stack Overflow's monolith/on-prem architecture, Hillel Wayne asks the Lobsters community for killer libraries, Linux 6.2 is ready to run on M1 Macs thanks to Asahi Linux, Johan Halse writes up what to expect from your web framework & Eli Bendersky on using GoatCounter for blog analytics.

The Bike Shed
364: Constructive vs Predicative Data

The Bike Shed

Play Episode Listen Later Dec 6, 2022 34:07


Stephanie and Joël attended RubyConf Mini, and both spoke there. They discuss takeaways and highlights from the conference. The core idea for this episode is explained in this article: Constructive vs. Predicative Data (https://www.hillelwayne.com/post/constructive/). This came up recently in a conversation at thoughtbot about designing a database schema and what constraints could be encoded in the schema directly versus needing some kind of trigger or Rails validation to cover it. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. RubyConf Mini (https://www.rubyconfmini.com/) Episode on CFP - The Bike Shed 352: Case Expressions (https://www.bikeshed.fm/352) Podcast panel: The Ruby on Rails Podcast Episode 446: I'm Giving A Talk on Thursday (https://www.therubyonrailspodcast.com/446) Slides for FP talk: Functional Programming for Fun and Profit!! (https://speakerdeck.com/jennyshih/functional-programming-for-fun-and-profit?slide=107) Episode on language: The Bike Shed - 356: The Value of Specialized Vocabulary (https://www.bikeshed.fm/356) Constructive vs. Predicative data (https://www.hillelwayne.com/post/constructive/) Avoid the Three-state Boolean Problem (https://thoughtbot.com/blog/avoid-the-threestate-boolean-problem) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So something that's very recent in both of our worlds has been that both you and I, Stephanie, attended RubyConf Mini, and we both spoke there. What are some of your takeaways or highlights from the conference? STEPHANIE: Seeing you in person was definitely a highlight. I really enjoyed that. Because we're working remotely, I don't, you know, get to be in an office with you day to day. And it was really awesome to hang out with you, I think, for the first time as co-hosts of the podcast. And we both, I think, met some people at the conference too that were listeners. And it was really awesome to share that experience with you. JOËL: I had the interesting experience of several people who told me they recognized me by my voice, which I think is a common thing for podcasters, but as a new host, I was surprised by that. STEPHANIE: Yeah, that's weird. As a podcast listener, too, I definitely know exactly what you're talking about where it's like, oh yeah, I can identify someone by their voice. But to then be that person that people can recognize is pretty weird. I also really enjoyed being an audience member of the podcast panel that you are on at the conference with other podcast folks. It was moderated by Brittany Martin. And yeah, I just thought you represented The Bike Shed really well and spoke for both of us about podcasting in a way that I really appreciated. JOËL: And for any of our listeners who were not able to be there in person, Brittany has published that episode as a podcast, and we will link to it in the show notes. STEPHANIE: Another thing I really liked about RubyConf Mini was the smaller scale. I think it was about 150 or so attendees, which felt very different from traditional Ruby Central conferences with several hundreds of people. I heard a lot from other folks there that they really liked the regional aspect of it, the intimacy of the smaller conference. I think I got more of an opportunity to run into people that I'd met at the conference over the next few days. And there was, yeah, definitely a sense of tighter knit community there, you know, when you meet someone, and then you bump into them on the way into a talk, and then you can ask how their day was going and any highlights that they had. And yeah, I guess I haven't really attended a conference that size before, and so that felt like a very special experience for me. JOËL: I 100% agree. I think the smaller format definitely makes it a little bit more intimate, makes it much easier, I think, to build some of those social connections, to meet with people, and to have some good conversations. I think the format of the conference as well favored that. There were, I think, larger breaks between talks that encouraged people to hang out and talk. And, as you said, because it's smaller, you also get to see the same people over the course of a few different breaks instead of being like, oh, I met a stranger on the morning of day one, and then in the afternoon, I met another stranger. And it's just constantly introducing yourself. One thing that was really interesting to me is the experience of being a speaker is very different than just attending. As a speaker, you get to go to the speaker dinner and connect with a lot of the other speakers there. Some of them might be quote, unquote "famous people" that you're not quite comfortable just walking up to and introducing yourself. But in the smaller dinner, you just find yourself sitting next to them and enjoying some food or a drink and getting conversations. It's also much easier to have people come up to you during the conference. Because you're a speaker, people will come and talk to you. So if you tend to be a little bit more introverted, as long as you can get over your fear of being on stage and public speaking, it actually makes social connection interaction much easier to be a speaker. I would recommend to any of our listeners who were wondering how can I get more out of a conference? How can I get better connections, better conversations? Consider being a speaker. STEPHANIE: Yeah, absolutely. We've talked about this before; I think when we chatted about writing our CFPs for this conference that speaking doesn't have to be a really big, scary thing, but everyone has something to say. I think we had mentioned in previous episodes that your talk topic came out of just a discussion that you had internally, and you were like, wow, enumerables are so cool, like, let me dig deeper into them and just share what I learned. So I totally recommend it. And this conference was my first in real-life speaking opportunity as well, and that felt super different from my experience last time doing it virtually, you know, talking about how much I love that sense of community all the time. But it really felt true for me this time around, where I could see the audience react to the things I was saying, like, maybe go off the cuff a little bit. And then yeah, at the end, having people come up to me was really awesome to just talk about pairing, which is what I spoke about, and just share our experiences. And they asked what I thought about some things, and it was really cool to just be able to spread that knowledge around. And one thing I noticed you did a lot was come up to speakers after they wrapped up their talks. You were almost always the first person to get up and congratulate them and just get the ball rolling on following up on the things they talked about. Is that something that you really enjoy doing or find particularly valuable as an audience member or speaker? JOËL: Yes, both. I think, as a speaker, it's really validating to have people come up to you after the talk and either just tell you they liked the talk or ask a question. I generally don't like to do just open questions after a talk from the audience because then you get the classic; this is more of a comment than a question or people who will tell you that you had a typo on one of your code slides. Like, none of that is useful to anyone. So, if you're really interested, come talk to me afterwards. And then that actually makes me feel like my talk connected with people, and people were paying attention, people enjoyed it, people were learning. So I try to pay that forward as well for talks that I listened to, go up to the speaker, and tell them one thing that I appreciated about the talk or a thing that I learned, or something that got me excited in their content. STEPHANIE: Yeah, I'm sure that it's very appreciated. And it also breaks the awkward silence at the end when the speaker finishes and people aren't sure if it's okay for them to get up and start moving around. Yeah, I thought that was a really good way to kind of just encourage people to start chatting with each other and moving into those break times that we mentioned earlier, those opportunities to socialize. JOËL: Another thing that I think is really fun that you can do at in-person conferences, and I know you were doing it a lot, is going to see the talks of friends and colleagues and sitting in the front row and just being there to cheer them on and encourage them. Again, I think that makes a big difference when you are on stage, and you see these people who are your friends and colleagues there to support you. It gives you that boost of confidence. And when you're there in the audience, it's fun to cheer on somebody else. STEPHANIE: Oh yeah. You gave me a lot of thumbs-ups during my talk, and I really appreciated that. [laughs] So I'm curious if there were any talks that stood out to you that you got to see. JOËL: And I was really inspired by your talk, pair programming. I think there are a lot of things that I can take from that to improve the way I pair. I was also inspired by Aji's talk, Aji Slater, on automating manual tasks that you have to do in an iterative way. That one really hit home because, on my current project, I have been doing a lot of manual things. And I just have random snippets of code, like, some shell script lines or Ruby console lines, that I copy-paste out of Slack conversations because I've shared them with other people who are doing similar work. And I realized that a lot of his advice would apply to the work that I'm doing and how that could really make things better. So that was one of those talks I was listening to, and I was like, oh, you know what? Monday morning, when I go back to my project, this is something that I'm going to start doing. This is something I'm going to change in the way I do my day-to-day work. STEPHANIE: Yeah, absolutely. I have so many tasks that I would like to get automated, and think that one day I will magically have more time in my schedule to get to it. But I liked that his talk gave pretty concrete strategies for baking it into your regular, like you said, day-to-day workflow, and that lowers the activation energy to getting them done. And then those things can be iterated on and could eventually become, in an ideal world, a fully-fledged feature that you put together from doing those repetitive tasks. And yeah, they provide a lot of value not just to you but can eventually provide value to your co-workers and then even your users in the future. JOËL: Were there any talks that stood out for you? STEPHANIE: One talk that I really enjoyed was Jenny Shih's about Functional Programming for Fun and Profit. I have attended a lot of functional programming talks within the Ruby realm, at least to try to get a better sense of how it can apply to my work and the languages and paradigms that I use. And honestly, what I liked about it was that it didn't get too in the weeds about functional programming. What she did was provide mental models for understanding the paradigm that I think was a good vehicle for understanding things very generally. And, for me, like,¬¬ a talk, it's really hard to pay attention to lines of code and to read code on the fly while people are presenting. For me, that is just not how I like to consume that information. And so she provided themes and, like I said, those mental models, which I know you really like to use a lot too in teaching people new concepts. For me, I didn't fully learn what a monad was, once again, but at least having that repeated exposure to those foundational aspects, I think, will eventually lead me to be able to grok those things a little more comprehensively the next time I see it or whenever I decide to dig deeper. JOËL: What was a mental model that was shared that connected with you particularly? STEPHANIE: So one of the main mental models that she shared was thinking about a program in terms of these three dimensions: value, behavior, and time. She had a nice slide that showed the difference between the object-oriented paradigm, where value and behavior are contained by objects, where time is kind of inherently wrapped up in those objects that hold information about the state through values and behavior. Whereas in her functional programming example, those three dimensions were a bit separate. And I found that distinction to be really helpful in separating things that felt very implicit before, but it was nice to see them broken out into very clear concepts in terms of building blocks of a program. JOËL: So it's helpful then when thinking...when you look at code, if you can think about it in those three different dimensions to help think about, am I taking a functional or other approach in this particular dimension when working with this code? STEPHANIE: Yeah, exactly. I think it also gave me more of a vocabulary to describe the pros and cons of each and a lens of thinking about which I might want to choose for the particular problem at hand. JOËL: So you mentioned there's a visual for these three dimensions from the slides. Are those slides publicly available? STEPHANIE: They are. I will link to them in the show notes. JOËL: So all of these talks were recorded. They're not yet available to the public, but I think the plan is to publish them on YouTube sometime in the new year, so that means probably January 2023. And a big shout out to the AV team and everyone who is involved in recording these. STEPHANIE: Yeah, I am definitely looking out for a link to my talk so I can send it to my mom. I also wanted to give a little shout-out to the organizers of RubyConf Mini: Jemma Issroff, Emily Samp, and Andy Croll. JOËL: Woo! STEPHANIE: They put on just a really awesome conference, and I feel very grateful that I got a chance to attend with you, Joël. JOËL: It was definitely a delightful experience. STEPHANIE: Delightful. That's a reference to Joël's talk for those of you who are listening. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: Coming back from the conference, I recently had a really interesting conversation with some other colleagues at thoughtbot. We were looking at a database schema for a new application and talking about some of the trade-offs involved in how that schema is structured, so what tables we want to have. Do we want to have indexes? Things like that. And particularly around some of the assumptions are business rules that would come into play. So we're looking at...we'd drawn out this Entity Relationship Diagram (ERD). In it, we're looking at all the tables, and something that comes up immediately is like, oh, it's possible to have some bad data that could show up in these columns. Or it's possible that this relationship could exist where this table has a foreign key on this table, but really, that should never happen in this particular way of working. And so then the question became, how do we try to prevent these things that currently the schema allows but that are not valid in this particular business domain? Do we want to change the schema somehow and make that stricter or find some way to prevent it? Do we want to add some kind of validation that will check some business rules first before inserting or updating a record? I'm curious, have you ever been in a situation like that where you had to balance those two approaches to enforcing business rules on your database? A classic small example of this is a situation where let's say, you have a users' table and you have a name column on there. And you want to ensure that that name must always be present; all users must have names. Do you try to enforce that via the schema with a NOT NULL constraint? Or maybe you try to enforce that with a validation, maybe a presence validation at the Rails level. Or if you're really into SQL, maybe some fancy trigger, but do it in a validation style rather than trying to force this using the schema. And our particular scenario was a little bit more complex than just one column; it was more to do with associations. But I think this sort of problem shows up even in constraints as small as a required field. STEPHANIE: That's really interesting. I think that, in my experience, when we are spinning up new tables, at that point, we do try to put some intentional thought into what the schema should look like and what requirements we might need to encode at the database level. But things that are more complex might need a little more code, like Ruby code. I have then pushed to an ActiveRecord validation. One thing that I think is important to know is that when you do set those things on the schema, it's harder to change. And so you usually have to feel pretty confident that that's what you want. Otherwise, you'll run into issues later if that does have to change and making changes to whatever existing data you might have. But it's also pretty common to just do your best when you are deciding on a database schema and then having to make adjustments down the line as you know more about your domain. JOËL: This conversation reminds me a little bit of the idea of database normalization. I think that might almost fit as a subset of general tactics of using the schema to ensure your data is more correct. When you are generating new tables, let's say you're creating a greenfield app and you need to create four or five tables; how much emphasis do you put on database normalization when you're initially designing those? STEPHANIE: I think for a greenfield project when you are setting everything up and creating tables for your main domain models, there is an aspect of it that should be considered because you're in this unique position where nothing really is in existence yet. And you do want to try to set yourself up to be successful and hopefully have information about your main use case for this app and can kind of make decisions about the schema then. At least in my experience, that has been part of the conversation, though, to be fair, because it's so early, you do have the opportunity to change things without as much effort or pain. But I think it's worth considering when you're just sitting down and working through what those models are going to look like. JOËL: And for our listeners who may not have heard the term normalization before, it's a series of...you can think of them as rules that you apply to your database design to try to avoid data redundancies in your tables. There are different levels of this; they're typically referred to as normal forms. So you'll see things like first normal form, second normal form, third normal form; those are kind of the fancy terms for them. But they generally involve breaking out other tables so that you don't have data redundancies. And in many ways, this is similar to principles such as the single-responsibility principle that we apply to objects when we're designing our objects in an OO system. But this is more at the table level for databases. STEPHANIE: I do think that it is so hard, maybe even impossible, to plan something out, to not have any of those redundancies, to begin with. And I do think sometimes they are a bit inevitable. But I also have had the experience of having to figure out what the heck I'm looking at when I am querying data and see all these things that are duplicated or maybe slightly different. And yeah, I think when you are in that position of starting a greenfield application, it is really interesting to see how you make those decisions about what needs to be enforced and where. Where did you end up landing, or what did you discuss in this conversation with the co-worker? JOËL: I think we went with a bit of a hybrid approach. Some things, we can use the schema to prevent bad data, and then some things either cannot be represented with a schema, or it's possible, but it's really cumbersome and painful. And so, we chose to try to enforce it with a validation. To me, this feels very similar to a problem in typed languages. So some communities that use a lot of types try to use those types to only allow data to come through that's in a valid shape. And so you'll hear things like make impossible states impossible or make illegal states unrepresentable. And that works for many things, but it's not always possible to enforce all of your business constraints through a schema. Or sometimes it's possible but just not practical. And so, I think there is a balance of finding when you can use the schema or when it's better to use the validation.¬ STEPHANIE: Yeah, I think my general rule of thumb is, like I mentioned earlier, things I feel really confident about that we want to make sure that we have in our database or in our data for sure. I do lean towards requiring those in a schema, and it also communicates that confidence or communicates that intent that it's something that at one point was decided is important. And so, if a future developer comes in, it would take a lot of work for them to write a migration, to remove some database constraint. Whereas I think sometimes validations at the Rails level are potentially a little more open to change and then even more so if you get to validating on the client side. JOËL: That can get to be a really, like, it's a useful tool, but one that you can really hurt yourself with. If you modify your validations at the Rails level or at the front-end level, but then you don't backfill those changes on your data in the database, then you might have records in your database that if you were to load them into memory and hit save on them again, would refuse to save because they no longer match the validations. And on longer-lived applications, I've seen that happen sometimes where not all rows in the database pass the Rails validations. STEPHANIE: Yeah, I think I've seen that be a problem either for developers who then have to backfill that data or write some migration to change some of the data to meet the new requirements, or just unexpected bugs on the users who discover something new but like you said, have been there long enough before those things were implemented. JOËL: The more I think of this, I think maybe constraints that are enforced at a validation level might still require changing the data in your database. So if you had a constraint enforced via a schema, you don't have a choice. You have to write some way to migrate that data so that it fits the new schema. You can kind of lie to yourself with validation and not change the historic data, and sometimes that is the case; you want to keep the old data and only prevent new data from being written in the old format. But if you need consistency, then you probably need a data migration regardless of which approach you take. STEPHANIE: Yeah, that definitely sounds like the more robust way to go about it for sure. JOËL: I have an article that I like to reference a lot by Hillel Wayne on Constructive Versus Predicative Data, which is basically looking at these two general approaches to enforcing data correctness and formalizing them a little bit. So do you try to enforce them based on the construction or the shape of the entity that you're creating, be that a database table, an object, a type, something like that? Or do you enforce it via some kind of predicate? So that could be a validation or other similar logic that runs kind of at runtime to enforce your constraints. STEPHANIE: That's interesting. I hadn't heard of those terms before, but I think they provide a lens through which you can look at the problem. Did the article end up suggesting different strategies for solving that problem, or was it more theoretical in different ways to look at it? JOËL: I think the article does two things. First, like you said, it gives us the words to talk about those approaches. And having those labels now, I start seeing them everywhere. I see them in databases, I see them in objects, I see them when doing types across a variety of languages. So that's already a huge win for me. I think you and I had done an episode a couple of months back where we talked about the value of having labels to put to ideas. And I think for me reading that article gave me those two labels. And all of a sudden, it really helped to make connections that I wasn't seeing before. The second thing that the article does is, I think, explore some of the limitations that each approach has and when you might want to use one versus another. The constructive approach, so using a schema, is more consistent because you know it is impossible for the program to create data that's in the wrong shape. That being said, not all constraints can be represented in a constructive manner, or it might be possible but really cumbersome. Also, sometimes it's not really invalid data; it's just sort of undesirable data. So you might want a looser schema. And let's say that you're storing some kind of intermediate state or some kind of raw input from another system that you might want to layer validations on top of, but you don't want to reject that data out of your database. You want that sort of incomplete or imperfect data in your system. Something that I find myself doing more and more these days when I create new tables is to really lock down the schema as much as possible. I think that might be contrary to maybe the way a lot of people in the community like to work. Some people might prefer to start with a very loose schema with no constraints and then work towards making things stricter as they explore the domain, and that's kind of the default that Rails has. If you're creating a new table, all columns, for example, are nullable by default. Personally, I will put a null false on every column and every migration that I make unless somebody can make a convincing case otherwise, and even then, I might try to think of is there any possible way that we could avoid that scenario and put that null false. Part of the reason for that is that it is much easier to loosen constraints on existing data than to tighten them afterwards. So if I have a column where no value is allowed to be null, and then later on we decide, you know what? It is okay for some of them to be null, I can change the requirement on that column, and I don't need to make any changes to the existing data. It just works. If the reverse happens, if I have a column that allows a bunch of nulls and then I want to make that column required, now I have to go and find a way to backfill all the empty spots in that column. And that could be a very challenging process. It might even be impossible. There might be some values there that it's just like, the user did not supply them at the time because we didn't ask for them. And now there's nothing we can put in there. So do you put in, like, unknown or not available? Then you have to ask yourself some really difficult questions about your data. STEPHANIE: Yeah, absolutely. I think I agree with you there. Another thing I like to do is provide default values for columns, especially ones where they can't be null, because, like you were saying, that helps me have a better understanding of just what is going on in the database. An issue I have seen come up involves a Boolean column where if a default value of false, for example, if that's what we're going with, is not encoded in the schema, you end up with potentially three values for a Boolean, which would be true, false, and null, and that I think has been -- JOËL: The infamous three-state Boolean. STEPHANIE: Yeah, exactly, the three-state problem, which is just inherently contradictory to what a Boolean is, to begin with. And I've definitely run into issues with that where you have to decide, or figure out, or write code to determine is null false? Is that what we mean here? It's not clear. But if you, like you said, locked it down at the beginning, provided those default values, that puts in those guardrails to prevent things from getting out of hand. JOËL: It also makes it easier for users of your database, application, whatever to interact with your code. I've run into this a lot when working with GraphQL APIs. And the default in many GraphQL server implementations is to make all fields nullable by default. When you build your schema, you have to add some extra things there to say, "This field is non-nullable," which means that a client that's now consuming it, anytime they deal with the data they need to check, is it present or not? You can't have the confidence that that data is there. And so it can force a lot of extra checks on the client. Or I guess you could just take it on faith and hope nothing breaks. STEPHANIE: Yeah, it's funny you mention that because I definitely think there's like spheres of impact. So as a developer, you maybe start having to write code that checks those kinds of things, like if it's null or not in your code. Then that can even extend to, like you said, your users or consumers of the API, who then have to contend with data that they have no control over. And I've been there too, and that can be frustrating as well. JOËL: We've talked a lot about data correctness and different ways to achieve it, different strategies. Why is this something that we care so much about? STEPHANIE: I think data correctness is really important from a developer experience perspective. And it's way easier to fix a bug in your code than it is to wrangle a lot of accumulated bad data. JOËL: Yeah, sometimes bad data is not fixable at all, and those are situations where you have a really bad day as a developer. STEPHANIE: Agreed. JOËL: Well, on that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Oddly Influenced
Interview: Glenn Vanderburg on engineering

Oddly Influenced

Play Episode Listen Later Nov 14, 2022 29:36


MentionedOne of Glenn's talks on engineering.The first part of Hillel Wayne's interviews of people who've "crossed over" to software from "real" engineering. It's really good.Herbert Simon, The Sciences of the Artificial, 1969Fredrick Brooks, Jr., The Design of Design: Essays from a Computer Scientist, 2010David L. Parnas and Paul C. Clements, "A Rational Design Process: How and Why to Fake It", 1986. The Neal Ford talk about constraints was taken down from YouTube because Protecting Intellectual Property by removing a whole talk that uses a short clip is far more important than Mr. Ford's ideas.Glenn's other recommendations:What Engineers Know and How They Know It: Analytical Studies from Aeronautical History, by Walter VincentiEngineering and the Mind's Eye, by Eugene S. FergusonDefinition of the Engineering Method, by Billy Vaughn KoenA number of Henry Petroski's books shed valuable light on the actual practice of engineering:To Engineer Is Human: The Role of Failure in Successful DesignInvention by Design: How Engineers Get from Thought to ThingDesign Paradigms: Case Histories of Error and Judgment in EngineeringSuccess through Failure: The Paradox of DesignTo Forgive Design: Understanding FailureEngineers of Dreams: Great Bridge Builders and the Spanning of America (this is quite different from the others, but by telling the real, non-idealized tale of how so many great bridges were built — including several disastrous failures and many other near failures — this book was instrumental in helping me understand how inaccurate the common stereotype of engineering really is)CreditsImage of double effect distillation chemical plant via Wikimedia Commons. User:Luigi Chiesa, CC BY 3.0. Cropped by Brian Marick.

The Bike Shed
355: Test Performance

The Bike Shed

Play Episode Listen Later Sep 20, 2022 42:44


Guest Geoff Harcourt, CTO of CommonLit, joins Joël to talk about a thing that comes up with a lot with clients: the performance of their test suite. It's often a concern because with test suites, until it becomes a problem, people tend to not treat it very well, and people ask for help on making their test suites faster. Geoff shares how he handles a scenario like this at CommonLit. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Geoff Harcourt (https://twitter.com/geoffharcourt) Common Lit (https://www.commonlit.org/) Cuprite driver (https://cuprite.rubycdp.com/) Chrome DevTools Protocol (CDP) (https://chromedevtools.github.io/devtools-protocol/) Factory Doctor (https://test-prof.evilmartians.io/#/profilers/factory_doctor) Joël's RailsConf talk (https://www.youtube.com/watch?v=LOlG4kqfwcg) Formal Methods (https://www.hillelwayne.com/post/formally-modeling-migrations/) Rails multi-database support (https://guides.rubyonrails.org/active_record_multiple_databases.html) Knapsack pro (https://knapsackpro.com/) Prior episode with Eebs (https://www.bikeshed.fm/353) Shopify article on skipping specs (https://shopify.engineering/spark-joy-by-running-fewer-tests) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Geoff Harcourt, who is the CTO of CommonLit. GEOFF: Hi, Joël. JOËL: And together, we're here to share a little bit of what we've learned along the way. Geoff, can you briefly tell us what is CommonLit? What do you do? GEOFF: CommonLit is a 501(c)(3) non-profit that delivers a literacy curriculum in English and Spanish to millions of students around the world. Most of our tools are free. So we take a lot of pride in delivering great tools to teachers and students who need them the most. JOËL: And what does your role as CTO look like there? GEOFF: So we have a small engineering team. There are nine of us, and we run a Rails monolith. I'd say a fair amount of the time; I'm hands down in the code. But I also do the things that an engineering head has to do, so working with vendors, and figuring out infrastructure, and hiring, and things like that. JOËL: So that's quite a variety of things that you have to do. What is new in your world? What's something that you've encountered recently that's been fun or interesting? GEOFF: It's the start of the school year in America, so traffic has gone from a very tiny amount over the summer to almost the highest load that we'll encounter all year. So we're at a new hosting provider this fall. So we're watching our infrastructure and keeping an eye on it. The analogy that we've been using to describe this is like when you set up a bunch of plumbing, it looks like it all works, but until you really pump water through it, you don't see if there are any leaks. So things are in good shape right now, but it's a very exciting time of year for us. JOËL: Have you ever done some actual plumbing yourself? GEOFF: I am very, very bad at home repair. But I have fixed a toilet or two. I've installed a water filter but nothing else. What about you? JOËL: I've done a little bit of it when I was younger with my dad. Like, I actually welded copper pipes and that kind of thing. GEOFF: Oh, that's amazing. That's cool. Nice. JOËL: So I've definitely felt that thing where you turn the water source back on, and it's like, huh, let's see, is this joint going to leak, or are we good? GEOFF: Yeah, they don't have CI for plumbing, right? JOËL: [laughs] You know, test it in production, right? GEOFF: Yeah. [laughs] So we're really watching right now traffic starting to rise as students and teachers are coming back. And we're also figuring out all kinds of things that we want to do to do better monitoring of our application, so some of this is watching metrics to see if things happen. But some of this is also doing some simulated user activity after we do deploys. So we're using some automated browsers with Cypress to log into our application and do some user flows, and then report back on the results. JOËL: So is this kind of like a feature test in CI, except that you're running it in production? GEOFF: Yeah. Smoke test is the word that we've settled on for it, but we run it against our production server every time we deploy. And it's a small suite. It's nowhere as big as our big Capybara suite that we run in CI, but we're trying to get feedback in less than six minutes. That's sort of the goal. In addition to running tests, we also take screenshots with a tool called Percy, and that's a visual regression testing tool. So we get to see the screenshots, and if they differ by more than one pixel, we get a ping that lets us know that maybe our CSS has moved around or something like that. JOËL: Has that caught some visual bugs for you? GEOFF: Definitely. The state of CSS at CommonLit was very messy when I arrived, and it's gotten better, but it still definitely needs some love. There are some false positives, but it's been really, really nice to be able to see visual changes on our production pages and then be able to approve them or know that there's something we have to go back and fix. JOËL: I'm curious, for this smoke test suite, how long does it take to run? GEOFF: We run it in parallel. It runs on Buildkite, which is the same tool that we use to orchestrate our CI, and the longest test takes about five minutes. It signs in as a teacher, creates an account. It creates a class; it invites the student to that class. It then logs out, logs in as that student creates the student account, signs in as the student, joins the class. It then assigns a lesson to the student then the student goes and takes the lesson. And then, when the student submits the lesson, then the test is over. And that confirms all of the most critical flows that we would want someone to drop what they were doing if it's broken, you know, account creation, class creation, lesson creation, and students taking a lesson. JOËL: So you're compressing the first few weeks of school into five minutes. GEOFF: Yes. And I pity the school that has thousands of fake teachers, all named Aaron McCarronson at the school. JOËL: [laughs] GEOFF: But we go through and delete that data every once in a while. But we have a marketer who just started at CommonLit maybe a few weeks ago, and she thought that someone was spamming our signup form because she said, "I see hundreds of teachers named Aaron McCarronson in our user list." JOËL: You had to admit that you were the spammer? GEOFF: Yes, I did. [laughs] We now have some controls to filter those people out of reports. But it's always funny when you look at the list, and you see all these fake people there. JOËL: Do you have any rate limiting on your site? GEOFF: Yeah, we do quite a bit of it, actually. Some of it we do through Cloudflare. We have tools that limit a certain flow, like people trying to credential stuffing our password, our user sign-in forms. But we also do some further stuff to prevent people from hitting key endpoints. We use Rack::Attack, which is a really nice framework. Have you had to do that in client work with clients setting that stuff up? JOËL: I've used Rack:Attack before. GEOFF: Yeah, it's got a reasonably nice interface that you can work with. And I always worry about accidentally setting those things up to be too sensitive, and then you get lots of stuff back. One issue that we sometimes find is that lots of kids at the same school are sharing an IP address. So that's not the thing that we want to use for rate limiting. We want to use some other criteria for rate limiting. JOËL: Right, right. Do you ever find that you rate limit your smoke tests? Or have you had to bypass the rate limiting in the smoke tests? GEOFF: Our smoke tests bypass our rate limiting and our bot detection. So they've got some fingerprints they use to bypass that. JOËL: That must have been an interesting day at the office. GEOFF: Yes. [laughter] With all of these things, I think it's a big challenge to figure out, and it's similar when you're making tests for development, how to make tests that are high signal. So if a test is failing really frequently, even if it's testing something that's worthwhile, if people start ignoring it, then it stops having value as a piece of signal. So we've invested a ton of time in making our test suite as reliable as possible, but you sometimes do have these things that just require a change. I've become a really big fan of...there's a Ruby driver for Capybara called Cuprite, and it doesn't control chrome with Chrome Driver or with Selenium. It controls it with the Chrome DevTools protocol, so it's like a direct connection into the browser. And we find that it's very, very fast and very, very reliable. So we saw that our Capybara specs got significantly more reliable when we started using this as our driver. JOËL: Is this because it's not actually moving the mouse around and clicking but instead issuing commands in the background? GEOFF: Yeah. My understanding of this is a little bit hazy. But I think that Selenium and ChromeDriver are communicating over a network pipe, and sometimes that network pipe is a little bit lossy. And so it results in asynchronous commands where maybe you don't get the feedback back after something happens. And CDP is what Chrome's team and I think what Puppeteer uses to control things directly. So it's great. And you can even do things with it. Like, you can simulate different time zone for a user almost natively. You can speed up or slow down the traveling of time and the direction of time in the browser and all kinds of things like that. You can flip it into mobile mode so that the device reports that it's a touch browser, even though it's not. We have a set of mobile specs where we flip it with CDP into mobile mode, and that's been really good too. Do you find when you're doing client work that you have a demand to build mobile-specific specs for system tests? JOËL: Generally not, no. GEOFF: You've managed to escape it. JOËL: For something that's specific to mobile, maybe one or two tests that have a weird interaction that we know is different on mobile. But in general, we're not doing the whole suite under mobile and the whole suite under desktop. GEOFF: When you hand off a project...it's been a while since you and I have worked together. JOËL: For those who don't know, Geoff used to be with us at thoughtbot. We were colleagues. GEOFF: Yeah, for a while. I remember my very first thoughtbot Summer Summit; you gave a really cool lightning talk about Eleanor of Aquitaine. JOËL: [laughs] GEOFF: That was great. So when you're handing a project off to a client after your ending, do you find that there's a transition period where you're educating them about the norms of the test suite before you leave it in their hands? JOËL: It depends a lot on the client. With many clients, we're working alongside an existing dev team. And so it's not so much one big handoff at the end as it is just building that in the day-to-day, making sure that we are integrating with the team from the outset of the engagement. So one thing that does come up a lot with clients is the performance of their test suite. That's often a concern because the test suite until it becomes a problem, people tend to not treat it very well. And by the time that you're bringing on an external consultant to help, generally, that's one of the areas of the code that's been a little bit neglected. And so people ask for help on making their test suite faster. Is that something that you've had to deal with at CommonLit as well? GEOFF: Yeah, that's a great question. We have struggled a lot with the speed that our test suite...the time it takes for our test suite to run. We've done a few things to improve it. The first is that we have quite a bit of caching that we do in our CI suite around dependencies. So gems get cached separately from NPM packages and browser assets. So all three of those things are independently cached. And then, we run our suites in parallel. Our Jest specs get split up into eight containers. Our Ruby non-system tests...I'd like to say unit tests, but we all know that some of those are actually integration tests. JOËL: [laughs] GEOFF: But those tests run in 15 containers, and they start the moment gems are built. So they don't wait for NPM packages. They don't wait for assets. They immediately start going. And then our system specs as soon as the assets are built kick off and start running. And we actually run that in 40 parallel containers so we can get everything finished. So our CI suite can finish...if there are no dependency bumps and no asset bumps, our specs suite you can finish in just under five minutes. But if you add up all of that time, cumulatively, it's something like 75 minutes is the total execution as it goes. Have you tried FactoryDoctor before for speeding up test suites? JOËL: This is the gem from Evil Martians? GEOFF: Yeah, it's part of TestProf, which is their really, really unbelievable toolkit for improving specs, and they have a whole bunch of things. But one of them will tell you how many invocations of FactoryBot factories each factory got. So you can see a user factory was fired 13,000 times in the test suite. It can even do some tagging where it can go in and add metadata to your specs to show which ones might be candidates for optimization. JOËL: I gave a talk at RailsConf this year titled Your Tests Are Making Too Many Database Calls. GEOFF: Nice. JOËL: And one of the things I talked about was creating a lot more data via factories than you think that you are. And I should give a shout-out to FactoryProf for finding those. GEOFF: Yeah, it's kind of a silent killer with the test suite, and you really don't think that you're doing a whole lot with it, and then you see how many associations. How do you fight that tension between creating enough data that things are realistic versus the streamlining of not creating extraneous things or having maybe mystery guests via associations and things like that? JOËL: I try to have my base factories be as minimal as possible. So if there's a line in there that I can remove, and the factory or the model still saves, then it should be removed. Some associations, you can't do that if there's a foreign key constraint, and so then I'll leave it in. But I am a very hardcore minimalist, at least with the base factory. GEOFF: I think that makes a lot of sense. We use foreign keys all over the place because we're always worried about somehow inserting student data that we can't recover with a bug. So we'd rather blow up than think we recorded it. And as a result, sometimes setting up specs for things like a student answering a multiple choice question on a quiz ends up being this sort of if you give a mouse a cookie thing where it's you need the answer options. You need the question. You need the quiz. You need the activity. You need the roster, the students to be in the roster. There has to be a teacher for the roster. It just balloons out because everything has a foreign key. JOËL: The database requires it, but the test doesn't really care. It's just like, give me a student and make it valid. GEOFF: Yes, yeah. And I find that that challenge is really hard. And sometimes, you don't see how hard it is to enforce things like database integrity until you have a lot of concurrency going on in your application. It was a very rude surprise to me to find out that browser requests if you have multiple servers going on might not necessarily be served in the order that they were made. JOËL: [laughs] So you're talking about a scenario where you're running multiple instances of your app. You make two requests from, say, two browser tabs, and somehow they get served from two different instances? GEOFF: Or not even two browser tabs. Imagine you have a situation where you're auto-saving. JOËL: Oooh, background requests. GEOFF: Yeah. So one of the coolest features we have at CommonLit is that students can annotate and highlight a text. And then, the teachers can see the annotations and highlights they've made, and it's actually part of their assignment often to highlight key evidence in a passage. And those things all fire in the background asynchronously so that it doesn't block the student from doing more stuff. But it also means that potentially if they make two changes to a highlight really quickly that they might arrive out of order. So we've had to do some things to make sure that we're receiving in the right order and that we're not blowing away data that was supposed to be there. Just think about in a Heroku environment, for example, which is where we used to be, you'd have four dynos running. If dyno one takes too long to serve the thing for dyno two, request one may finish after request two. That was a very, very rude surprise to learn that the world was not as clean and neat as I thought. JOËL: I've had to do something similar where I'm making a bunch of background requests to a server. And even with a single dyno, it is possible for your request to come back out of order just because of how TCP works. So if it's waiting for a packet and you have two of these requests that went out not too long before each other, there's no guarantee that all the packets for request one come back before all the packets from request two. GEOFF: Yeah, what are the strategies for on the client side for dealing with that kind of out-of-order response? JOËL: Find some way to effectively version the requests that you make. Timestamp is an easy one. Whenever a request comes in, you take the response from the latest timestamp, and that wins out. GEOFF: Yeah, we've started doing some unique IDs. And part of the unique ID is the browser's timestamp. We figure that no one would try to hack themselves and intentionally screw up their own data by submitting out of order. JOËL: Right, right. GEOFF: It's funny how you have to pick something to trust. [laughs] JOËL: I'd imagine, in this case, if somebody did mess around with it, they would really only just be screwing up their own UI. It's not like that's going to then potentially crash the server because of something, and then you've got a potential vector for a denial of service. GEOFF: Yeah, yeah, that's always what we're worried about, and we have to figure out how to trust these sorts of requests as what's a valid thing and what is, as you're saying, is just the user hurting themselves as opposed to hurting someone else's stuff? MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! GEOFF: You were talking about test suites. What are some things that you have found are consistently problems in real-world apps, but they're really, really hard to test in a test suite? JOËL: Difficult to test or difficult to optimize for performance? GEOFF: Maybe difficult to test. JOËL: Third-party integrations. Anything that's over the network that's going to be difficult. Complex interactions that involve some heavy frontend but then also need a lot of backend processing potentially with asynchronous workers or something like that, there are a lot of techniques that we can use to make all those play together, but that means there's a lot of complexity in that test. GEOFF: Yeah, definitely. I've taken a deep interest in what I'm sure there's a better technical term for this, but what I call network hostile environments or bandwidth hostile environments. And we see this a lot with kids. Especially during the pandemic, kids would often be trying to do their assignments from home. And maybe there are five kids in the house, and they're all trying to do their homework at the same time. And they're all sharing a home internet connection. Maybe they're in the basement because they're trying to get some peace and quiet so they can do their assignment or something like that. And maybe they're not strongly connected. And the challenge of dealing with intermittent connectivity is such an interesting problem, very frustrating but very interesting to deal with. JOËL: Have you explored at all the concept of Formal Methods to model or verify situations like that? GEOFF: No, but I'm intrigued. Tell me more. JOËL: I've not tried it myself. But I've read some articles on the topic. Hillel Wayne is a good person to follow for this. GEOFF: Oh yeah. JOËL: But it's really fascinating when you'll see, okay, here are some invariants and things. And then here are some things that you set up some basic properties for a system. And then some of these modeling languages will then poke holes and say, hey, it's possible for this 10-step sequence of events to happen that will then crash your server. Because you didn't think that it's possible for five people to be making concurrent requests, and then one of them fails and retries, whatever the steps are. So it's really good at modeling situations that, as developers, we don't always have great intuition, things like parallelism. GEOFF: Yeah, that sounds so interesting. I'm going to add that to my list of reading for the fall. Once the school year calms down, I feel like I can dig into some technical topics again. I've got this book sitting right next to my desk, Designing Data-Intensive Applications. I saw it referenced somewhere on Twitter, and I did the thing where I got really excited about the book, bought it, and then didn't have time to read it. So it's just sitting there unopened next to my desk, taunting me. JOËL: What's the 30-second spiel for what is a data-intensive app, and why should we design for it differently? GEOFF: You know, that's a great question. I'd probably find out if I'd dug further into the book. JOËL: [laughs] GEOFF: I have found at CommonLit that we...I had a couple of clients at thoughtbot that dealt with data at the scale that we deal with here. And I'm sure there are bigger teams doing, quote, "bigger data" than we're doing. But it really does seem like one of our key challenges is making sure that we just move data around fast enough that nothing becomes a bottleneck. We made a really key optimization in our application last year where we changed the way that we autosave students' answers as they go. And it resulted in a massive increase in throughput for us because we went from trying to store updated versions of the students' final answers to just storing essentially a draft and often storing that draft in local storage in the browser and then updating it on the server when we could. And then, as a result of this, we're making key updates to the table where we store a student's answers much less frequently. And that has a huge impact because, in addition to being one of the biggest tables at CommonLit...it's got almost a billion recorded answers that we've gotten from students over the years. But because we're not writing to it as often, it also means that reads that are made from the table, like when the teacher is getting a report for how the students are doing in a class or when a principal is looking at how a school is doing, now, those queries are seeing less contention from ongoing writes. And so we've seen a nice improvement. JOËL: One strategy I've seen for that sort of problem, especially when you have a very write-heavy table but that also has a different set of users that needs to read from it, is to set up a read replica. So you have your main that is being written to, and then the read replica is used for reports and people who need to look at the data without being in contention with the table being written. GEOFF: Yeah, Rails multi-DB support now that it's native to the framework is excellent. It's so nice to be able to just drop that in and fire it up and have it work. We used to use a solution that Instacart had built. It was great for our needs, but it wasn't native to the framework. So every single time we upgraded Rails, we had to cross our fingers and hope that it didn't, you know, whatever private APIs of ActiveRecord it was using hadn't broken. So now that that stuff, which I think was open sourced from GitHub's multi-database implementation, so now that that's all native in Rails, it's really, really nice to be able to use that. JOËL: So these kinds of database tricks can help make the application much more performant. You'd mentioned earlier that when you were trying to make your test performant that you had introduced parallelism, and I feel like that's maybe a bit of an intimidating thing for a lot of people. How would you go about converting a test suite that's just vanilla RSpec, single-threaded, and then moving it in a direction of being more parallel? GEOFF: There's a really, really nice tool called Knapsack, which has a free version. But the pro version, I feel like if you're spending any money at all on CI, it's immediately worth the cost. I think it's something like $75 a month for each suite that you run on it. And Knapsack does this dynamic allocation of tests across containers. And it interfaces with several of the popular CI providers so that it looks at environment variables and can tell how many containers you're splitting across. It'll do some things, like if some of your containers start early and some of them start late, it will distribute the work so that they all end at the same time, which is really nice. We've preferred CI providers that charge by the minute. So rather than just paying for a service that we might not be using, we've used services like Semaphore, and right now, we're on Buildkite, which charge by the minute, which means that you can decide to do as much parallelism as you want. You're just paying for the compute time as you run things. JOËL: So that would mean that two minutes of sequential build time costs just the same as splitting it up in parallel and doing two simultaneous minutes of build time. GEOFF: Yeah, that is almost true. There's a little bit of setup time when a container spins up. And that's one of the key things that we optimize. I guess if we ran 200 containers if we were like Shopify or something like that, we could technically make our CI suite finish faster, but it might cost us three times as much. Because if it takes a container 30 seconds to spin up and to get ready, that's 30 seconds of dead time when you're not testing, but you're paying for the compute. So that's one of the key optimizations that we make is figuring out how many containers do we need to finish fast when we're not just blowing time on starting and finishing? JOËL: Right, because there is a startup cost for each container. GEOFF: Yeah, and during the work day when our engineers are working along, we spin up 200 EC2 machines or 150 EC2 machines, and they're there in the fleet, and they're ready to go to run CI jobs for us. But if you don't have enough machines, then you have jobs that sit around waiting to start, that sort of thing. So there's definitely a tension between figuring out how much parallelism you're going to do. But I feel like to start; you could always break your test suite into four pieces or two pieces and just see if you get some benefit to running a smaller number of tests in parallel. JOËL: So, manually splitting up the test suite. GEOFF: No, no, using something like Knapsack Pro where you're feeding it the suite, and then it's dividing up the tests for you. I think manually splitting up the suite is probably not a good practice overall because I'm guessing you'll probably spend more engineering time on fiddling with which tests go where such that it wouldn't be cost-effective. JOËL: So I've spent a lot of time recently working to improve a parallel test suite. And one of the big problems that you have is trying to make sure that all of your parallel surfaces are being used efficiently, so you have to split the work evenly. So if you said you have 70 minutes worth of work, if you give 50 minutes to one worker and 20 minutes to the other, that means that your total test suite is still 50 minutes, and that's not good. So ideally, you split it as evenly as possible. So I think there are three evolutionary steps on the path here. So you start off, and you're going to manually split things out. So you're going to say our biggest chunk of tests by time are the feature specs. We'll make them almost like a separate suite. Then we'll make the models and controllers and views their own thing, and that's roughly half and half, and run those. And maybe you're off by a little bit, but it's still better than putting them all in one. It becomes difficult, though, to balance all of these because then one might get significantly longer than the other then, you have to manually rebalance it. It works okay if you're only splitting it among two workers. But if you're having to split it among 4, 8, 16, and more, it's not manageable to do this, at least not by hand. If you want to get fancy, you can try to automate that process and record a timing file of how long every file takes. And then when you kick off the build process, look at that timing file and say, okay, we have 70 minutes, and then we'll just split the file so that we have roughly 70 divided by number of workers' files or minutes of work in each process. And that's what gems like parallel_tests do. And Knapsack's Classic mode works like this as well. That's decently good. But the problem is you're working off of past information. And so if the test has changed or just if it's highly variable, you might not get a balanced set of workers. And as you mentioned, there's a startup cost, and so not all of your workers boot up at the same time. And so you might still have a very uneven amount of work done by each worker by statically determining the work to be done via a timing file. So the third evolution here is a dynamic or a self-balancing approach where you just put all of the tests or the files in a queue and then just have every worker pull one or two tests when it's ready to work. So that way, if something takes a lot longer than expected, well, it's just not pulling more from the queue. And everybody else still pulls, and they end up all balancing each other out. And then ideally, every worker finishes work at exactly the same time. And that's how you know you got the most value you could out of your parallel processes. GEOFF: Yeah, there's something about watching all the jobs finish in almost exactly, you know, within 10 seconds of each other. It just feels very, very satisfying. I think in addition to getting this dynamic splitting where you're getting either per file or per example split across to get things finishing at the same time, we've really valued getting fast feedback. So I mentioned before that our Jest specs start the moment NPM packages get built. So as soon as there's JavaScripts that can be executed in test, those kick-off. As soon as our gems are ready, the RSpec non-system tests go off, and they start running specs immediately. So we get that really, really fast feedback. Unfortunately, the browser tests take the longest because they have to wait for the most setup. They have the most dependencies. And then they also run the slowest because they run in the browser and everything. But I think when things are really well-oiled, you watch all of those containers end at roughly the same time, and it feels very satisfying. JOËL: So, a few weeks ago, on an episode of The Bike Shed, I talked with Eebs Kobeissi about dependency graphs and how I'm super excited about it. And I think I see a dependency graph in what you're describing here in that some things only depend on the gem file, and so they can start working. But other things also depend on the NPM packages. And so your build pipeline is not one linear process or one linear process that forks into other linear processes; it's actually a dependency graph. GEOFF: That is very true. And the CI tool we used to use called Semaphore actually does a nice job of drawing the dependency graph between all of your steps. Buildkite does not have that, but we do have a bunch of steps that have to wait for other steps to finish. And we do it in our wiki. On our repo, we do have a diagram of how all of this works. We found that one of the things that was most wasteful for us in CI was rebuilding gems, reinstalling NPM packages (We use Yarn but same thing.), and then rebuilding browser assets. So at the very start of every CI run, we build hashes of a bunch of files in the repository. And then, we use those hashes to name Docker images that contain the outputs of those files so that we are able to skip huge parts of our CI suite if things have already happened. So I'll give an example if Ruby gems have not changed, which we would know by the Gemfile.lock not having changed, then we know that we can reuse a previously built gems image that has the gems that just gets melted in, same thing with yarn.lock. If yarn.lock hasn't changed, then we don't have to build NPM packages. We know that that already exists somewhere in our Docker registry. In addition to skipping steps by not redoing work, we also have started to experiment...actually, in response to a comment that Chris Toomey made in a prior Bike Shed episode, we've started to experiment with skipping irrelevant steps. So I'll give an example of this if no Ruby files have changed in our repository, we don't run our RSpec unit tests. We just know that those are valid. There's nothing that needs to be rerun. Similarly, if no JavaScript has changed, we don't run our Jest tests because we assume that everything is good. We don't lint our views with erb-lint if our view files haven't changed. We don't lint our factories if the model or the database hasn't changed. So we've got all these things to skip key types of processing. I always try to err on the side of not having a false pass. So I'm sure we could shave this even tighter and do even less work and sometimes finish the build even faster. But I don't want to ever have a thing where the build passes and we get false confidence. JOËL: Right. Right. So you're using a heuristic that eliminates the really obvious tests that don't need to be run but the ones that maybe are a little bit more borderline, you keep them in. Shaving two seconds is not worth missing a failure. GEOFF: Yeah. And I've read things about big enterprises doing very sophisticated versions of this where they're guessing at which CI specs might be most relevant and things like that. We're nowhere near that level of sophistication right now. But I do think that once you get your test suite parallelized and you're not doing wasted work in the form of rebuilding dependencies or rebuilding assets that don't need to be rebuilt, there is some maybe not low, maybe medium hanging fruit that you can use to get some extra oomph out of your test suite. JOËL: I really like that you brought up this idea of infrastructure and skipping. I think in my own way of thinking about improving test suites, there are three broad categories of approaches you can take. One variable you get to work with is that total number of time single-threaded, so you mentioned 70 minutes. You can make that 70 minutes shorter by avoiding database writes where you don't need them, all the common tricks that we would do to actually change the test themselves. Then we can change...as another variable; we get to work with parallelism, we talked about that. And then finally, there's all that other stuff that's not actually executing RSpec like you said, loading the gems, installing NPM packages, Docker images. All of those, if we can skip work running migrations, setting up a database, if there are situations where we can improve the speed there, that also improves the total time. GEOFF: Yeah, there are so many little things that you can pick at to...like, one of the slowest things for us is Elasticsearch. And so we really try to limit the number of specs that use Elasticsearch if we can. You actually have to opt-in to using Elasticsearch on a spec, or else we silently mock and disable all of the things that happen there. When you're looking at that first variable that you were talking about, just sort of the overall time, beyond using FactoryDoctor and FactoryProf, is there anything else that you've used to just identify the most egregious offenders in a test suite and then figure out if they're worth it? JOËL: One thing you can do is hook into Active Support notification to try to find database writes. And so you can find, oh, here's where all of the...this test is making way too many database writes for some reason, or it's making a lot, maybe I should take a look at it; it's a hotspot. GEOFF: Oh, that's really nice. There's one that I've always found is like a big offender, which is people doing negative expectations in system specs. JOËL: Oh, for their Capybara wait time. GEOFF: Yeah. So there's a really cool gem, and the name of it is eluding me right now. But there's a gem that raises a special exception if Capybara waits the full time for something to happen. So it lets you know that those things exist. And so we've done a lot of like hunting for...Knapsack will report the slowest examples in your test suite. So we've done some stuff to look for the slowest files and then look to see if there are examples of these negative expectations that are waiting 10 seconds or waiting 8 seconds before they fail. JOËL: Right. Some files are slow, but they're slow for a reason. Like, a feature spec is going to be much slower than a model test. But the model tests might be very wasteful and because you have so many of them, if you're doing the same pattern in a bunch of them or if it's a factory that's reused across a lot of them, then a small fix there can have some pretty big ripple effects. GEOFF: Yeah, I think that's true. Have you ever done any evaluation of test suite to see what files or examples you could throw away? JOËL: Not holistically. I think it's more on an ad hoc basis. You find a place, and you're like, oh, these tests we probably don't need them. We can throw them out. I have found dead tests, tests that are not executed but still committed to the repo. GEOFF: [laughs] JOËL: It's just like, hey, I'm going to get a lot of red in my diff today. GEOFF: That always feels good to have that diff-y check-in, and it's 250 lines or 1,000 lines of red and 1 line of green. JOËL: So that's been a pretty good overview of a lot of different areas related to performance and infrastructure around tests. Thank you so much, Geoff, for joining us today on The Bike Shed to talk about your experience at CommonLit doing this. Do you have any final words for our listeners? GEOFF: Yeah. CommonLit is hiring a senior full-stack engineer, so if you'd like to work on Rails and TypeScript in a place with a great test suite and a great team. I've been here for five years, and it's a really, really excellent place to work. And also, it's been really a pleasure to catch up with you again, Joël. JOËL: And, Geoff, where can people find you online? GEOFF: I'm Geoff with a G, G-E-O-F-F Harcourt, @geoffharcourt. And that's my name on Twitter, and it's my name on GitHub, so you can find me there. JOËL: And we'll make sure to include a link to your Twitter profile in the show notes. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

The Bike Shed
354: The History of Computing

The Bike Shed

Play Episode Listen Later Sep 13, 2022 31:16


Why does the history of computing matter? Joël and Developer at thoughtbot Sara Jackson, ponder this and share some cool stories (and trivia!!) behind the tools we use in the industry. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Frictionless error monitoring and performance insight for your app stack. Sara on Twitter (https://twitter.com/csarajackson) UNIX philosophy (https://en.wikipedia.org/wiki/Unix_philosophy) Hillel Wayne on why we ask linked list questions (https://www.hillelwayne.com/post/linked-lists/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter, Team Lead, and Developer Sara Jackson. SARA: Hello, happy to be here. JOËL: Together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world? SARA: Well, Joël, you might know that recently our team had a small get-together in Toronto. JOËL: And our team, for those who are not aware, is fully remote distributed across multiple countries. So this was a chance to get together in person. SARA: Yes, correct. This was a chance for those on the Boost team to get together and work together as if we had a physical office. JOËL: Was this your first time meeting some members of the team? SARA: It was my second, for the most part. So I joined thoughtbot, but after thoughtbot had already gotten remote. Fortunately, I was able to meet many other thoughtboters in May at our summit. JOËL: Had you worked at a remote company before coming to thoughtbot? SARA: Yes, I actually started working remotely in 2019, but even then, that wasn't my first time working remotely. I actually had a full year of internship in college that was remote. JOËL: So you were a pro at this long before the pandemic made us all try it out. SARA: I don't know about that, but I've certainly dealt with the idiosyncrasies that come with remote work for longer. JOËL: What do you think are some of the challenges of remote work as opposed to working in person in an office? SARA: I think definitely growing and maintaining a culture. When you're in an office, it's easy to create ad hoc conversations and have events that are small that build on the culture. But when you're remote, it has to be a lot more intentional. JOËL: That definitely rings true for me. One of the things that I really appreciated about in-person office culture was the serendipity that you have those sort of random meetings at the water cooler, those conversations, waiting for coffee with people who are not necessarily on the same team or the same project as you are. SARA: I also really miss being able to have lunch in person with folks where I can casually gripe about an issue I might be having, and almost certainly, someone would have the answer. Now, if I'm having an issue, I have to intentionally seek help. [chuckles] JOËL: One of the funny things that often happened, at least the office where I worked at, was that lunches would often devolve into taxonomy conversations. SARA: I wish I had been there for that. [laughter] JOËL: Well, we do have a taxonomy channel on Slack to somewhat continue that legacy. SARA: Do you have a favorite taxonomy lunch discussion that you recall? JOËL: I definitely got to the point where I hated the classifying a sandwich. That one has been way overdone. SARA: Absolutely. JOËL: There was an interesting one about motorcycles, and mopeds, and bicycles, and e-bikes, and trying to see how do you distinguish one from the other. Is it an electric motor? Is it the power of the engine that you have? Is it the size? SARA: My brain is already turning on those thoughts. I feel like I could get lost down that rabbit hole very easily. [laughter] JOËL: Maybe that should be like a special anniversary episode for The Bike Shed, just one long taxonomy ramble. SARA: Where we talk about bikes. JOËL: Ooh, that's so perfect. I love it. One thing that I really appreciated during our time in Toronto was that we actually got to have lunch in person again. SARA: Yeah, that was so wonderful. Having folks coming together that had maybe never worked together directly on clients just getting to sit down and talk about our day. JOËL: Yeah, and talk about maybe it's work-related, maybe it's not. There's a lot of power to having some amount of deeper interpersonal connection with your co-workers beyond just the we work on a project together. SARA: Yeah, it's like camaraderie beyond the shared mission of the company. It's the shared interpersonal mission, like you say. Did you have any in-person pairing sessions in Toronto? JOËL: I did. It was actually kind of serendipitous. Someone was stuck with a weird failing test because somehow the order factories were getting created in was not behaving in the expected way, and we herd on it, dug into it, found some weird thing with composite primary keys, and solved the issue. SARA: That's wonderful. I love that. I wonder if that interaction would have happened or gotten solved as quickly if we hadn't been in person. JOËL: I don't know about you, but I feel like I sometimes struggle to ask for help or ask for a pair more when I'm online. SARA: Yeah, I agree. It's easier to feel like you're not as big of an impediment when you're in person. You tap someone on the shoulder, "Hey, can you take a look at this?" JOËL: Especially when they're on the same team as you, they're sitting at the next desk over. I don't know; it just felt easier. Even though it's literally one button press to get Tuple to make a call, somehow, I feel like I'm interrupting more. SARA: To combat that, I've been trying to pair more frequently and consistently regardless of if I'm struggling with a problem. JOËL: Has that worked pretty well? SARA: It's been wonderful. The only downside has been pairing fatigue. JOËL: Pairing fatigue is real. SARA: But other than that, problems have gotten solved quickly. We've all learned something for those that I've paired with. It goes faster. JOËL: So it was really great that we had this experience of doing our daily work but co-located in person; we have these experiences of working together. What would you say has been one of the highlights for you of that time? SARA: 100% karaoke. JOËL: [laughs] SARA: Only two folks did not attend. Many of the folks that did attend told me they weren't going to sing, but they were just going to watch. By the end of the night, everyone had sung. We were there for nearly three and a half hours. [laughs] JOËL: It was a good time all around. SARA: I saw a different side to Chad. JOËL: [laughs] SARA: And everyone, honestly. Were there any musical choices that surprised you? JOËL: Not particularly. Karaoke is always fun when you have a group of people that you trust to be a little bit foolish in front of to put yourself out there. I really appreciated the style that we went for, where we have a private room for just the people who were there as opposed to a stage in a bar somewhere. I think that makes it a little bit more accessible to pick up the mic and try to sing a song. SARA: I agree. That style of karaoke is a lot more popular in Asia, having your private room. Sometimes you can find it in major cities. But I also prefer it for that reason. JOËL: One of my highlights of this trip was this very sort of serendipitous moment that happened. Someone was asking a question about the difference between a Mac and Linux operating systems. And then just an impromptu gathering happened. And you pulled up a chair, and you're like, gather around, everyone. In the beginning, there was Multics. It was amazing. SARA: I felt like some kind of historian or librarian coming out from the deep. Let me tell you about this random operating system knowledge that I have. [laughs] JOËL: The ancient lore. SARA: The ancient lore in the year 1969. JOËL: [laughs] And then yeah, we had a conversation walking the history of operating systems, and why we have macOS and Linux, and why they're different, and why Windows is a totally different kind of family there. SARA: Yeah, macOS and Linux are sort of like cousins coming from the same tree. JOËL: Is that because they're both related through Unix? SARA: Yes. Linux and macOS are both built based off of different versions of Unix. Over the years, there's almost like a family tree of these different Nix operating systems as they're called. JOËL: I've sometimes seen asterisk N-I-X. This is what you're referring to as Nix. SARA: Yes, where the asterisk is like the RegEx catch-all. JOËL: So this might be Unix. It might be Linux. It might be... SARA: Minix. JOËL: All of those. SARA: Do you know the origin of the name Unix? JOËL: I do not. SARA: It's kind of a fun trivia piece. So, in the beginning, there was Multics spelled M-U-L-T-I-C-S, standing for the Multiplexed Information and Computing Service. Dennis Ritchie and Ken Thompson of Bell Labs famous for the C programming language... JOËL: You may have heard of it. SARA: You may have heard of it maybe on a different podcast. They were employees at Bell Labs when Multics was being created. They felt that Multics was very bulky and heavy. It was trying to do too many things at once. It did have a few good concepts. So they developed their own smaller Unix originally, Unics, the Uniplexed Information and Computing Service, Uniplexed versus Multiplexed. We do one thing really well. JOËL: And that's the Unix philosophy. SARA: It absolutely is. The Unix philosophy developed out of the creation of Unix and C. Do you know the four main points? JOËL: No, is it small sharp tools? It's the main one I hear. SARA: Yes, that is the kind of quippy version that has come out for sure. JOËL: But there is a formal four-point manifesto. SARA: I believe it's evolved over the years. But it's interesting looking at the Unix philosophy and seeing how relevant it is today in web development. The four points being make each program do one thing well. To this end, don't add features; make a new program. I feel like we have this a lot in encapsulation. JOËL: Hmm, maybe even the open-closed principle. SARA: Absolutely. JOËL: Similar idea. SARA: Another part of the philosophy is expecting output of your program to become input of another program that is yet unknown. The key being don't clutter your output; don't have extraneous text. This feels very similar to how we develop APIs. JOËL: With a focus on composability. SARA: Absolutely. Being able to chain commands together like you see in Ruby all the time. JOËL: I love being able to do this, for example, the enumerable API in Ruby and just being able to chain all these methods together to just very nicely do some pretty big transformations on an array or some other data structure. SARA: 100% agree there. That ability almost certainly came out of following the tenets of this philosophy, maybe not knowingly so but maybe knowingly so. [chuckles] JOËL: So is that three or four? SARA: So that was two. The third being what we know as agile. JOËL: Really? SARA: Yeah, right? The '70s brought us agile. Design and build software to be tried early, and don't hesitate to throw away clumsy parts and rebuild. JOËL: Hmmm. SARA: Even in those days, despite waterfall style still coming on the horizon. It was known for those writing software that it was important to iterate quickly. JOËL: Wow, I would never have known. SARA: It's neat having this history available to us. It's sort of like a lens at where we came from. Another piece of this history that might seem like a more modern concept but was a very big part of the movement in the '70s and the '80s was using tools rather than unskilled help or trying to struggle through something yourself when you're lightening a programming task. We see this all the time at thoughtbot. Folks do this many times there is an issue on a client code. We are able to generalize the solution, extract into a tool that can then be reused. JOËL: So that's the same kind of genesis as a lot of thoughtbot's open-source gems, so I'm thinking of FactoryBot, Clearance, Paperclip, the old-timey file upload gem, Suspenders, the Rails app generator, and the list goes on. SARA: I love that in this last point of the Unix philosophy, they specifically call out that you should create a new tool, even if it means detouring, even if it means throwing the tools out later. JOËL: What impact do you think that has had on the way that tooling in the Unix, or maybe I should say *Nix, ecosystem has developed? SARA: It was a major aspect of the Nix environment community because Unix was available, not free, but very inexpensively to educational institutions. And because of how lightweight it was and its focus on single-use programs, programs that were designed to do one thing, and also the way the shell was allowing you to use commands directly and having it be the same language as the shell scripting language, users, students, amateurs, and I say that in a loving way, were able to create their own tools very quickly. It was almost like a renaissance of Homebrew. JOËL: Not Homebrew as in the macOS package manager. SARA: [laughs] And also not Homebrew as in the alcoholic beverage. JOËL: [laughs] So, this kind of history is fun trivia to know. Is it really something valuable for us as a jobbing developer in 2022? SARA: I would say it's a difficult question. If you are someone that doesn't dive into the why of something, especially when something goes wrong, maybe it wouldn't be important or useful. But what sparked the conversation in Toronto was trying to determine why we as thoughtbot tend to prefer using Macs to develop on versus Linux or Windows. There is a reason, and the reason is in the history. Knowing that can clarify decisions and can give meaning where it feels like an arbitrary decision. JOËL: Right. We're not just picking Macs because they're shiny. SARA: They are certainly shiny. And the first thing I did was to put a matte case on it. JOËL: [laughs] So no shiny in your office. SARA: If there were too many shiny things in my office, boy, I would never get work done. The cats would be all over me. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So we've talked a little bit about Unix or *Nix, this evolution of systems. I've also heard the term POSIX thrown around when talking about things that seem to encompass both macOS and Linux. How does that fit into this history? SARA: POSIX is sort of an umbrella of standards around operating systems that was based on Unix and the things that were standard in Unix. It stands for the Portable Operating System Interface. This allowed for compatibility between OSs, very similar to USB being the standard for peripherals. JOËL: So, if I was implementing my own Unix-like operating system in the '80s, I would try to conform to the POSIX standard. SARA: Absolutely. Now, not every Nix operating system is POSIX-compliant, but most are or at least 90% of the way there. JOËL: Are any of the big ones that people tend to think about not compliant? SARA: A major player in the operating system space that is not generally considered POSIX-compliant is Microsoft Windows. JOËL: [laughs] It doesn't even try to be Unix-like, right? It's just its own thing, SARA: It is completely its own thing. I don't think it even has a standard necessarily that it conforms to. JOËL: It is its own standard, its own branch of the family tree. SARA: And that's what happens when your operating system is very proprietary. This has caused folks pain, I'm sure, in the past that may have tried to develop software on their computers using languages that are more readily compatible with POSIX operating systems. JOËL: So would you say that a language like Ruby is more compatible with one of the POSIX-compatible operating systems? SARA: 100% yes. In fact, to even use Ruby as a development tool in Windows, prior to Windows 10, you needed an additional tool. You needed something like Cygwin or MinGW, which were POSIX-compliant programs that it was almost like a shell in your Windows computer that would allow you to run those commands. JOËL: Really? For some reason, I thought that they had some executables that you could run just on Windows by itself. SARA: Now they do, fortunately, to the benefit of Ruby developers everywhere. As of Windows 10, we now have WSL, the Windows Subsystem for Linux that's built-in. You don't have to worry about installing or configuring some third-party software. JOËL: I guess that kind of almost cheats by just having a POSIX system embedded in your non-POSIX system. SARA: It does feel like a cheat, but I think it was born out of demand. The Windows NT kernel, for example, is mostly POSIX-compliant. JOËL: Really? SARA: As a result of it being used primarily for servers. JOËL: So you mentioned the Ruby tends and the Rails ecosystem tends to run better and much more frequently on the various Nix systems. Did it have to be that way? Or is it just kind of an accident of history that we happen to end up with Ruby and Rails in this ecosystem, but just as easily, it could have evolved in the Windows world? SARA: I think it is an amalgam of things. For example, Unix and Nix operating systems being developed earlier, being widely spread due to being license-free oftentimes, and being widely used in the education space. Also, because it is so lightweight, it is the operating system of choice. For most servers in the world, they're running some form of Unix, Linux, or macOS. JOËL: I don't think I've ever seen a server that runs macOS; exclusively seen it on dev machines. SARA: If you go to an animation company, they have server farms of macOS machines because they're really good at rendering. This might not be the case anymore, but it was at one point. JOËL: That's a whole other world that I've not interacted with a whole lot. SARA: [chuckles] JOËL: It's a fun intersection between software, and design, and storytelling. That is an important part for the software field. SARA: Yeah, it's definitely an aspect that deserves its own deep dive of sorts. If you have a server that's running a Windows-based operating system like NT and you have a website or a program that's designed to be served under a Unix-based server, it can easily be hosted on the Windows server; it's not an issue. The reverse is not true. JOËL: Oh. SARA: And this is why programming on a Nix system is the better choice. JOËL: It's more broadly compatible. SARA: Absolutely. Significantly more compatible with more things. JOËL: So today, when I develop, a lot of the tooling that I use is open source. The open-source movement has created a lot of the languages that we know and love, including Ruby, including Rails. Do you think there's some connection between a lot of that tooling being open source and maybe some of the Unix family of operating systems and movements that came out of that branch of the operating system family tree? SARA: I think that there is a lot of tie-in with today's open-source culture and the computing history that we've been talking about, for example, people finding something that they dislike about the tools that are available and then rolling their own. That's what Ken Thompson and Dennis Ritchie did. Unix was not an official Bell development. It was a side project for them. JOËL: I love that. SARA: You see this happen a lot in the software world where a program gets shared widely, and due to this, it gains traction and gains buy-in from the community. If your software is easily accessible to students, folks that are learning, and breaking things, and rebuilding, and trying, and inventing, it's going to persist. And we saw that with Unix. JOËL: I feel like this background on where a lot of these operating systems came but then also the ecosystems, the values that evolved with them has given me a deeper appreciation of the tooling, the systems that we work with today. Are there any other advantages, do you think, to trying to learn a little bit of computing history? SARA: I think the main benefit that I mentioned before of if you're a person that wants to know why, then there is a great benefit in knowing some of these details. That being said, you don't need to deep dive or read multiple books or write papers on it. You can get enough information from reading or skimming some Wikipedia pages. But it's interesting to know where we came from and how it still affects us today. Ruby was written in C, for example. Unix was written in C as well, originally Assembly Language, but it got rewritten in C. And understanding the underlying tooling that goes into that that when things go wrong, you know where to look. JOËL: I guess that that is the next question is where do you look if you're kind of interested? Is Wikipedia good enough? You just sort of look up operating system, and it tells you where to go? Or do you have other sources you like to search for or start pulling at those threads to understand history? SARA: That's a great question. And Wikipedia is a wonderful starting point for sure. It has a lot of the abbreviated history and links to better references. I don't have them off the top of my head. So I will find them for you for the show notes. But there are some old esoteric websites with some of this history more thoroughly documented by the people that lived it. JOËL: I feel like those websites always end up being in HTML 2; your very basic text, horizontal rules, no CSS. SARA: Mm-hmm. And those are the sites that have many wonderful kernels of knowledge. JOËL: Uh-huh! Great pun. SARA: [chuckles] Thank you. JOËL: Do you read any content by Hillel Wayne? SARA: I have not. JOËL: So Hillel produces a lot of deep dives into computing history, oftentimes trying to answer very particular questions such as when and why did we start using reversing a linked list as the canonical interview question? And there are often urban legends around like, oh, it's because of this. And then Hillel will do some research and go through actual archives of messages on message boards or...what is that protocol? SARA: BBS. JOËL: Yes. And then find the real answer, like, do actual historical methodology, and I love that. SARA: I had not heard of this before. I don't know how. And that is all I'm going to be doing this weekend is reading these. That kind of history speaks to my heart. I have a random fun fact along those lines that I wanted to bring to the show, which was that the echo command that we know and love in the terminal was first introduced by the Multics operating system. JOËL: Wow. So that's like the most common piece of Multics that as an everyday user of a modern operating system that we would still touch a little bit of that history every day when we work. SARA: Yeah, it's one of those things that we don't think about too much. Where did it come from? How long has it been around? I'm sure the implementation today is very different. But it's like etymology, and like taxonomy, pulling those threads. JOËL: Two fantastic topics. On that wonderful little nugget of knowledge, let's wrap up. Sara, where can people find you online? SARA: You can find me on Twitter at @csarajackson. JOËL: And we will include a link to that in the show notes. SARA: Thank you so much for having me on the show and letting me nerd out about operating system history. JOËL: It's been a pleasure. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Software Unscripted
Clickbait with Hillel Wayne

Software Unscripted

Play Episode Listen Later Apr 1, 2022 56:32


It's about time Software Unscripted had a real clickbait episode.

clickbait hillel wayne
SoftwareArchitektur im Stream
Hillel Wayne & Laurent Bossavit - Is It All Built on Sand - What Do We Actually Know About Software Development?

SoftwareArchitektur im Stream

Play Episode Listen Later Oct 26, 2021 65:07


We all have some ideas about what works in software engineering and what doesn't. But without real evidence and data that is just an opinion. Empirical software engineering tries to answer the question of what can be proven to work in software development. In this episode, Hillel Wayne and Laurent Bossavit will talk about what we know about software development, what we don't know - and the myths about it i.e. what we think we know but really don't. Links Laurent's Book “The Leprechauns of Software Development” Derek M. Jones: Evidence-based Software Engineering: based on the publicly available data Hillel's talk “What We Know We Don't Know” Hillel's consulting Additional Links How students learn Reframing the Liskov substitution principle through the lens of testing Executable Examples for Programming Problem Comprehension What we know It Will Never Work in Theory: Short summaries of recent results in empirical software engineering research Fixing Faults in C and Java Source Code: Abbreviated vs. Full-word Identifier Names Recurring opinions or productive improvements—what agile teams actually discuss in retrospectives Andy Oram, Greg Wilson: Making Software Code Reviews Expectations, Outcomes, and Challenges Of Modern Code Review Characteristics of Useful Code Reviews: An Empirical Study at Microsoft Criticism of existing reasearch Hillel about “This is How Science Happens” - criticism of a code minin paper Hillel's newsletter: I **ing hate Science Hillel about “Are We Really Engineers?”

The Swyx Mixtape
[Weekend Drop] What We Know We Don't Know — Hillel Wayne

The Swyx Mixtape

Play Episode Listen Later Aug 8, 2021 41:27


An exploration of Empirical Software Engineering. We know far less than we care to admit, and the stuff we do know is boring but definitely worth investing in.See talk slides, referenced papers, and video: https://hillelwayne.com/talks/what-we-know-we-dont-know/

Blog Cast
Ep 11: Hillel Wayne: I f****ing hate science"

Blog Cast

Play Episode Listen Later Aug 8, 2021 16:58


Today's post is "I f****ing hate science" by Hillel Wayne, which was first published in July 2021. The original post can be found at https://buttondown.email/hillelwayne/archive/i-ing-hate-science We're always looking for great blog post from the wonderful world of tech and data. Follow us on Twitter @blogcastpod to suggest your favorite posts, or email blogcastpod@gmail.com. We'd also love to hear from folks who want to read for us! Hear ya later :)

science fing hillel wayne
Adventures in DevOps
DevOps 064: Software Dependencies: Do you Know What’s Lurking in your Software?

Adventures in DevOps

Play Episode Listen Later Feb 23, 2021 45:09


Charles is joined by Caleb Fornari and Jeffrey Groman as we discuss the challenges of public versus private package managers and the security implications of using public repositories. Panel Caleb Fornari Charles Max Wood Jeffrey Groman Sponsors Dev Heroes Accelerator Links Adventures in DevOps - Devchat.tv Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies Devchat.tv | JSJ 357: Event-Stream & Package Vulnerabilities with Richard Feldman and Hillel Wayne Malicious code found in npm package event-stream downloaded 8 million times in the past 2.5 months GitHub | The Node Security Platform Picks Caleb- Have a plan to mitigate damage if someone is able to get inside your network. Don’t just secure the public side of your technical infrastructure, make sure your internal security as just as strong as your external security. Charles- Dev Heroes Accelerator | Devchat.tv Charles- The Umbrella Academy | Netflix Charles- Personal Retreat Jeffrey- Asset management: Know and document where all of your digital assets reside. Whether servers, VMs, EC2 instances, and all of your structured and unstructured data. Jeffrey- You can’t secure what you don’t know about  

Devchat.tv Master Feed
DevOps 064: Software Dependencies: Do you Know What’s Lurking in your Software?

Devchat.tv Master Feed

Play Episode Listen Later Feb 23, 2021 45:09


Charles is joined by Caleb Fornari and Jeffrey Groman as we discuss the challenges of public versus private package managers and the security implications of using public repositories. Panel Caleb Fornari Charles Max Wood Jeffrey Groman Sponsors Dev Heroes Accelerator Links Adventures in DevOps - Devchat.tv Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Other Companies Devchat.tv | JSJ 357: Event-Stream & Package Vulnerabilities with Richard Feldman and Hillel Wayne Malicious code found in npm package event-stream downloaded 8 million times in the past 2.5 months GitHub | The Node Security Platform Picks Caleb- Have a plan to mitigate damage if someone is able to get inside your network. Don’t just secure the public side of your technical infrastructure, make sure your internal security as just as strong as your external security. Charles- Dev Heroes Accelerator | Devchat.tv Charles- The Umbrella Academy | Netflix Charles- Personal Retreat Jeffrey- Asset management: Know and document where all of your digital assets reside. Whether servers, VMs, EC2 instances, and all of your structured and unstructured data. Jeffrey- You can’t secure what you don’t know about  

Josh on Narro
What engineering can teach (and learn from) us • Hillel Wayne

Josh on Narro

Play Episode Listen Later Jan 23, 2021 16:35


This is part three of the crossover project. Part 1 is here and part 2 is here. I met William at Deconstruct 2019.1 We were walking back from the pre-party—too loud for my comfort level—and I took the chance to interview him. He knew about my project and wanted to share his memories of mechanical engineering. “Most of my skills transferred seamlessly. There’s one book, Sketching User Experiences, that’s aimed at software engineers. https://www.hillelwayne.com/post/crossover-project/what-we-can-learn/ herehereDeconstruct 2019Sketching User ExperiencesThe First Snap-Fit HandbookRebase 2020DeconstructPycon!!conStrangeLoopprogramming language theoristsenterprise engineersorigami artistsdiff CAD fileslockout-tagoutweekly newsletterTwitterChelsea TroyWill CraftDan Luu

Josh on Narro
We Are Not Special • Hillel Wayne

Josh on Narro

Play Episode Listen Later Jan 22, 2021 22:09


This is part two of the crossover project. Part 1 is here. No one thinks about moving the starting or ending point of the bridge midway through construction. -Justin Cave I had to move a bridge. -Anonymous1 Carl worked as a mechanical verification engineer: he tested oil rigs to see how much they vibrated. Humans work and live on oil rigs for long stretches of time, and if they vibrate too much it can be impossible to sleep. https://www.hillelwayne.com/post/crossover-project/we-are-not-special/ herelast essayNoEstimates movementAgile ManifestoSpiral ModelV Modelfull clay replicaAustrian TunnelingNick Coghlanbridge plansFastenal labs pageOne of their pagesaerodynamic profilescrew conveyorRSSjoin my newslettertwitterChelsea TroyWill CraftDan Luu

Josh on Narro
Are We Really Engineers? • Hillel Wayne

Josh on Narro

Play Episode Listen Later Jan 18, 2021 25:22


I sat in front of Mat, idly chatting about tech and cuisine. Before now, I had known him mostly for his cooking pictures on Twitter, the kind that made me envious of suburbanites and their 75,000 BTU woks. But now he was the test subject for my new project, to see if it was going to be fruitful or a waste of time. “What’s your job?” “Right now I’m working on microservices for a social media management platform. https://www.hillelwayne.com/post/crossover-project/are-we-really-engineers/ The Coming Software ApocalypseA Bridge to NowhereCodeless Code storiesGlenn Vanderburgin the Intrado softwareunfairly sent people to jailWyomingSoftware Craftsmanship: The New ImperativeSoftware CraftsmanshipRSSjoin my newslettertwitterChelsea TroyWill CraftDan Luu

Marianne Writes a Programming Language
Certainty is a Programming Bug (featuring Hillel Wayne)

Marianne Writes a Programming Language

Play Episode Listen Later Nov 25, 2020 19:52


What kind of programming language is Marianne trying to write? Before we go any deeper into the guts of language design, Marianne and friend Hillel Wayne debate the shortcomings of various approaches to specifying and modeling program behavior. From first order logic verification to system visualizations, nothing Marianne has used before has quite fit the bill. She's beginning to get philosophical about the nature of abstraction and wants to rebel from the goal of certainty. - Want more programming history/culture/analysis? Sign Up for Hillel's Newsletter. - Hillel's Tutorials on TLA+ and his book Practical TLA+ - Mario Livio's comments at the 2010 World Science Festival

Last Week in .NET
A Magic String that takes down your system

Last Week in .NET

Play Episode Listen Later Sep 28, 2020 5:42


Microsoft Ignite was the 22nd - 24th of September and the news is hereLots of Azure, and lots of releases that large enterprises and governments would love.Top Ten APIs in .NET 5.0Good info here, and lots you may not have known about.How to build a Database application in Blazor Part 3Everything old is new again. Angular is the new Webforms, Blazor is the new Angular. Here we are, partying like it's 2009.Visual Studio for Mac now supports iOS 14 and XCode 12The magic phrase is redactedApparently if you make that text in the above tweet your password you can find out if anyone stores your password in plain text.Microsoft talks about what's new with the Windows SubSystem for Linux in September 2020.If we ever get to the year of Linux on the Desktop it will be through WSL.Ginny Caughey talks about Project ReunionI haven't quite figured out what they want it to do but as long as it's "Simplify the sheer number of ways you can develop for Windows", I'm in.Microsoft releases Microsoft.Data.Sqlite 5.0 RC1 for Entity Framework Core.Jetbrains Resharper 2020.3 EAP is out, with support for C# 9 features like top-level statements.The Desktop community Standup happened on September 24th, and this hour long standup dives into winforms and OSS.In case it isn't apparent, Microsoft uses the word 'standup' loosely.Microsoft's Channel 9 goes into Microsoft Identity and how to get started with it in this 15 minute video.Fifteen minutes. Around the time a standup should take. We see you, Channel 9.Matthew Leibowitz writes a deep dive into System.CommandLine.If you want to write a command line app in .NET, check out System.CommandLine. Finally there's a way to deal with parsing command line arguments that doesn't involve reinventing the wheel or using a go-clone library.Nat Friedman, CEO of Github, unfollows everyone in an attempt to hear less about Github putting children in cages.On September 24th, Nat tweeted "Github Stories

Josh on Narro
8LU Keynote w/ Hillel Wayne: Are we *really* engineers?

Josh on Narro

Play Episode Listen Later Jun 30, 2020 32:47


What makes software engineering different from “traditional” engineering? To find out, I interviewed 17 “crossovers”: people who have worked professionally as both a software and a traditional engineer. In aggregate, we learn three things: we are in fact engineers, we’re not actually that different as a field, and there’s a lot we can both teach and learn. https://www.youtube.com/watch?v=3018ABlET1Y

engineers keynote hillel wayne
Future of Coding
#38 - The Case for Formal Methods: Hillel Wayne

Future of Coding

Play Episode Listen Later Apr 10, 2019 93:43


Hillel Wayne is a technical writer and consultant on a variety of formal methods, including TLA+ and Alloy. In this episode, Hillel gives a whirlwind tour of the 4 main flavors of formal methods, and explains which are practical today and which we may have to wait patiently for. The episode begins with a very silly joke from Steve (about a radioactive Leslie Lamport) and if you make it to the end you're in store for a few fun tales from Twitter. https://futureofcoding.org/episodes/038

methods formal hillel alloys tla leslie lamport hillel wayne
Devchat.tv Master Feed
JSJ 357: Event-Stream & Package Vulnerabilities with Richard Feldman and Hillel Wayne

Devchat.tv Master Feed

Play Episode Listen Later Mar 26, 2019 70:16


Sponsors Triplebyte Sentry use the code “devchat” for $100 credit Clubhouse CacheFly Panel Aaron Frost AJ O’Neal Chris Ferdinandi Joe Eames Aimee Knight Charles Max Wood Joined by special guests: Hillel Wayne and Richard Feldman Episode Summary In this episode of JavaScript Jabber, Hillel Wayne kicks off the podcast by giving a short background about his work, explains the concepts of formal methods and the popular npm package - event-stream, in brief. The panelists then dive into the recent event-stream attack and discuss it at length, focusing on different package managers and their vulnerabilities, as well as the security issues associated with them. They debate on whether paying open source developers for their work, thereby leading to an increase in contribution, would eventually help in improving security or not. They finally talk about what can be done to fix certain dependencies and susceptibilities to prevent further attacks and if there are any solutions that can make things both convenient and secure for users. Links STAMP model in accident investigation Hillel’s Twitter Hillel’s website Richard’s Twitter Stamping on Event-Stream Picks Joe Eames: Stuffed Fables Aimee Knight: SRE book - Google Lululemon leggings DVSR - Band Aaron Frost: JSConf US Chris Ferdinandi: Paws New England Vanilla JS Guides Charles Max Wood: Sony Noise Cancelling Headphones KSL Classifieds Upwork Richard Feldman: Elm in Action Sentinels of the Multiverse Hillel Wayne: Elm in the Spring Practical TLA+ Nina Chicago - Knitting Tomb Trader

All JavaScript Podcasts by Devchat.tv
JSJ 357: Event-Stream & Package Vulnerabilities with Richard Feldman and Hillel Wayne

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Mar 26, 2019 70:16


Sponsors Triplebyte Sentry use the code “devchat” for $100 credit Clubhouse CacheFly Panel Aaron Frost AJ O’Neal Chris Ferdinandi Joe Eames Aimee Knight Charles Max Wood Joined by special guests: Hillel Wayne and Richard Feldman Episode Summary In this episode of JavaScript Jabber, Hillel Wayne kicks off the podcast by giving a short background about his work, explains the concepts of formal methods and the popular npm package - event-stream, in brief. The panelists then dive into the recent event-stream attack and discuss it at length, focusing on different package managers and their vulnerabilities, as well as the security issues associated with them. They debate on whether paying open source developers for their work, thereby leading to an increase in contribution, would eventually help in improving security or not. They finally talk about what can be done to fix certain dependencies and susceptibilities to prevent further attacks and if there are any solutions that can make things both convenient and secure for users. Links STAMP model in accident investigation Hillel’s Twitter Hillel’s website Richard’s Twitter Stamping on Event-Stream Picks Joe Eames: Stuffed Fables Aimee Knight: SRE book - Google Lululemon leggings DVSR - Band Aaron Frost: JSConf US Chris Ferdinandi: Paws New England Vanilla JS Guides Charles Max Wood: Sony Noise Cancelling Headphones KSL Classifieds Upwork Richard Feldman: Elm in Action Sentinels of the Multiverse Hillel Wayne: Elm in the Spring Practical TLA+ Nina Chicago - Knitting Tomb Trader

JavaScript Jabber
JSJ 357: Event-Stream & Package Vulnerabilities with Richard Feldman and Hillel Wayne

JavaScript Jabber

Play Episode Listen Later Mar 26, 2019 70:16


Sponsors Triplebyte Sentry use the code “devchat” for $100 credit Clubhouse CacheFly Panel Aaron Frost AJ O’Neal Chris Ferdinandi Joe Eames Aimee Knight Charles Max Wood Joined by special guests: Hillel Wayne and Richard Feldman Episode Summary In this episode of JavaScript Jabber, Hillel Wayne kicks off the podcast by giving a short background about his work, explains the concepts of formal methods and the popular npm package - event-stream, in brief. The panelists then dive into the recent event-stream attack and discuss it at length, focusing on different package managers and their vulnerabilities, as well as the security issues associated with them. They debate on whether paying open source developers for their work, thereby leading to an increase in contribution, would eventually help in improving security or not. They finally talk about what can be done to fix certain dependencies and susceptibilities to prevent further attacks and if there are any solutions that can make things both convenient and secure for users. Links STAMP model in accident investigation Hillel’s Twitter Hillel’s website Richard’s Twitter Stamping on Event-Stream Picks Joe Eames: Stuffed Fables Aimee Knight: SRE book - Google Lululemon leggings DVSR - Band Aaron Frost: JSConf US Chris Ferdinandi: Paws New England Vanilla JS Guides Charles Max Wood: Sony Noise Cancelling Headphones KSL Classifieds Upwork Richard Feldman: Elm in Action Sentinels of the Multiverse Hillel Wayne: Elm in the Spring Practical TLA+ Nina Chicago - Knitting Tomb Trader

React Round Up
RRU 043: Testing React Apps Without Testing Implementation Details with Kent C. Dodds

React Round Up

Play Episode Listen Later Dec 25, 2018 75:55


Panel: Lucas Reis Justin Bennett Charles Max Wood Special Guest: Kent C. Dodds In this episode, the panelist talk with today’s guest, Kent C. Dodds who works for PayPal, is an instructor, and works through open source! Kent lives in Utah with his wife and four children. Kent and the panel talk today about testing – check it out! Show Topics: 0:00 – Kendo UI 0:32 – Chuck: Hello! My new show is TheDevRev – please go check it out! 1:35 – Panel: I want all of it! 1:43 – Chuck: Our guest is Kent C. Dodds! You were on the show for a while and then you got busy. 2:06 – Guest.  3:09 – Panel: The kid part is impressive. 3:20 – Guest: Yeah it’s awesome, but the kid part is my wife!  4:09 – Panel: 10 years ago we weren’t having any tests and then now we are thinking about how to write better tests. It’s the next step on that subject. What is your story with tests and what sparked these ideas? 4:50 – Guest. 7:25 – Panel: We have a bunch of tests at my work. “There is no such thing as too many tests” are being said a lot! Then we started talking about unit tests and there was this shift. The tests, for me, felt cumbersome. How do I know that this suite of tests are actually helping me and not hurting me? 8:32 – Guest: I think that is a valuable insight. 11:03 – Panel: What is the make-up of a good test? 11:13 – Guest: Test every line – everything! No. 11:19 – Chuck: “Look at everything!” I don’t know where to start, man! 11:30 – Guest: How do you avoid those false negatives and false positives. 15:38 – Panel: The end user is going to be like more of integration test, and the developer user will be more like a unit tester? 16:01 – Guest: I don’t care too much of the distinction between unit and integration tests. 18:36 – Panel: I have worked in testing in the past. One of the big things that fall on the users’ flow is that it’s difficult b/c maybe a tool like Selenium: when will things render? Are you still testing things in isolation? 19:33 – Guest: It depends. When I talk about UI integration testing I am still mocking the backend. 23:10 – Chuck: I am curious, where do you decide these are expensive (so I don’t want to do too many of them), but at what point is it worth it to do it? 23:30 – Guest mentions the testing pyramid. 28:14 – Chuck: Why do you care about confidence? What is confidence and what does it matter? 28:35 – FreshBooks! 29:50 – Guest. 32:20 – Panel: I have something to add about the testing pyramid. Lucas talks about tooling, Mocha, JS Dong, and more! 33:44 – Guest: I think the testing pyramid is outdated and I have created my own. Guest talks about static testing, LINT, Cypress, and more! 35:32 – Chuck: When I was a new developer, people talked about using tests to track down bugs. What if it’s a hairy bug? 36:07 – Guest: If you can, you can use this methodical approach... 39:46 – Panel: Let’s talk about the React library for a little bit? Panel: Part of the confidence of the tests we write we ask ourselves “will it stand the test of time?” How does the React Testing library go about to solve that? 41:05 – Guest. 47:51 – Panel: A few more questions. When you are getting something and testing and grabbing the label by its text have you found that to be fragile? Is it reasonably reliable? 48:57 – Guest: Yeah this is a concern and it relies on content. 53:06 – Panel: I like this idea of having a different library. Sometimes we think that a powerful tool is better, but after spending some time with other tools that’s not always the case. 54:16 – Guest: “You tie your hands to free your mind.” It does less but what it does less it does better. 55:42 – Panel: I think that with Cypress, too? 55:51 – Guest: Yeah that’s why Cypress is great to use. 57:17 – Panel: I wrote a small library here at work and it deals with metrics. I automated all of those small clicks – write a bit – click a bit – and it was really good. I felt quite efficient. Those became the tests. 57:58 – Panel: One more question: What about react Native? That comes up a lot. At looking at testing libraries we try to keep parody between the two. Do you have any thoughts on that? 58:34 – Guest talks about React Native. 1:00:22 – Panel: Anything else? It’s fascinating to talk about and dive-into these topics. When we talk about confidence that is very powerful, too. 1:01:02 – Panelist asks the last question! 1:01:38 – Guest: You could show them the coverage support. Links: Ruby on Rails Angular JavaScript Elm Phoenix GitHub Get A Coder Job Enzyme React Testing Library Cypress.io Hillel Wayne Testing JavaScript with Kent C. Dodds Kent Dodds’ News Kent Dodds’ Blog Egghead.io – Kent C. Dodds Ready to Write a Novel? Practical TLA+ GitHub: Circleci-queue GitHub: sstephenson / bats Todoist Discord Kent’s Twitter Sponsors: Get a Coder Job Cache Fly Fresh Books Kendo UI   Picks: Lucas Hillel Wayne Practical TLA+ Justin Circle CI Queue Bats Todoists Charles MFCEO Project Podcast The DevRev Kent Discord Devs Who Write Finding your Why! TestingJavaScript.com kcd.im/news kcd.i./hooks-and-suspense NaNoWriMo

Devchat.tv Master Feed
RRU 043: Testing React Apps Without Testing Implementation Details with Kent C. Dodds

Devchat.tv Master Feed

Play Episode Listen Later Dec 25, 2018 75:55


Panel: Lucas Reis Justin Bennett Charles Max Wood Special Guest: Kent C. Dodds In this episode, the panelist talk with today’s guest, Kent C. Dodds who works for PayPal, is an instructor, and works through open source! Kent lives in Utah with his wife and four children. Kent and the panel talk today about testing – check it out! Show Topics: 0:00 – Kendo UI 0:32 – Chuck: Hello! My new show is TheDevRev – please go check it out! 1:35 – Panel: I want all of it! 1:43 – Chuck: Our guest is Kent C. Dodds! You were on the show for a while and then you got busy. 2:06 – Guest.  3:09 – Panel: The kid part is impressive. 3:20 – Guest: Yeah it’s awesome, but the kid part is my wife!  4:09 – Panel: 10 years ago we weren’t having any tests and then now we are thinking about how to write better tests. It’s the next step on that subject. What is your story with tests and what sparked these ideas? 4:50 – Guest. 7:25 – Panel: We have a bunch of tests at my work. “There is no such thing as too many tests” are being said a lot! Then we started talking about unit tests and there was this shift. The tests, for me, felt cumbersome. How do I know that this suite of tests are actually helping me and not hurting me? 8:32 – Guest: I think that is a valuable insight. 11:03 – Panel: What is the make-up of a good test? 11:13 – Guest: Test every line – everything! No. 11:19 – Chuck: “Look at everything!” I don’t know where to start, man! 11:30 – Guest: How do you avoid those false negatives and false positives. 15:38 – Panel: The end user is going to be like more of integration test, and the developer user will be more like a unit tester? 16:01 – Guest: I don’t care too much of the distinction between unit and integration tests. 18:36 – Panel: I have worked in testing in the past. One of the big things that fall on the users’ flow is that it’s difficult b/c maybe a tool like Selenium: when will things render? Are you still testing things in isolation? 19:33 – Guest: It depends. When I talk about UI integration testing I am still mocking the backend. 23:10 – Chuck: I am curious, where do you decide these are expensive (so I don’t want to do too many of them), but at what point is it worth it to do it? 23:30 – Guest mentions the testing pyramid. 28:14 – Chuck: Why do you care about confidence? What is confidence and what does it matter? 28:35 – FreshBooks! 29:50 – Guest. 32:20 – Panel: I have something to add about the testing pyramid. Lucas talks about tooling, Mocha, JS Dong, and more! 33:44 – Guest: I think the testing pyramid is outdated and I have created my own. Guest talks about static testing, LINT, Cypress, and more! 35:32 – Chuck: When I was a new developer, people talked about using tests to track down bugs. What if it’s a hairy bug? 36:07 – Guest: If you can, you can use this methodical approach... 39:46 – Panel: Let’s talk about the React library for a little bit? Panel: Part of the confidence of the tests we write we ask ourselves “will it stand the test of time?” How does the React Testing library go about to solve that? 41:05 – Guest. 47:51 – Panel: A few more questions. When you are getting something and testing and grabbing the label by its text have you found that to be fragile? Is it reasonably reliable? 48:57 – Guest: Yeah this is a concern and it relies on content. 53:06 – Panel: I like this idea of having a different library. Sometimes we think that a powerful tool is better, but after spending some time with other tools that’s not always the case. 54:16 – Guest: “You tie your hands to free your mind.” It does less but what it does less it does better. 55:42 – Panel: I think that with Cypress, too? 55:51 – Guest: Yeah that’s why Cypress is great to use. 57:17 – Panel: I wrote a small library here at work and it deals with metrics. I automated all of those small clicks – write a bit – click a bit – and it was really good. I felt quite efficient. Those became the tests. 57:58 – Panel: One more question: What about react Native? That comes up a lot. At looking at testing libraries we try to keep parody between the two. Do you have any thoughts on that? 58:34 – Guest talks about React Native. 1:00:22 – Panel: Anything else? It’s fascinating to talk about and dive-into these topics. When we talk about confidence that is very powerful, too. 1:01:02 – Panelist asks the last question! 1:01:38 – Guest: You could show them the coverage support. Links: Ruby on Rails Angular JavaScript Elm Phoenix GitHub Get A Coder Job Enzyme React Testing Library Cypress.io Hillel Wayne Testing JavaScript with Kent C. Dodds Kent Dodds’ News Kent Dodds’ Blog Egghead.io – Kent C. Dodds Ready to Write a Novel? Practical TLA+ GitHub: Circleci-queue GitHub: sstephenson / bats Todoist Discord Kent’s Twitter Sponsors: Get a Coder Job Cache Fly Fresh Books Kendo UI   Picks: Lucas Hillel Wayne Practical TLA+ Justin Circle CI Queue Bats Todoists Charles MFCEO Project Podcast The DevRev Kent Discord Devs Who Write Finding your Why! TestingJavaScript.com kcd.im/news kcd.i./hooks-and-suspense NaNoWriMo

The After Disaster Broadcast
Ep. 19 - Blackout

The After Disaster Broadcast

Play Episode Listen Later Sep 1, 2018 23:54


CW: gunshots (17:14), violence, threats of violence, death  Jo faces off with the leader of the Mors Tua Vita Men.  Please listen to the end of the credits for a very special thank you to everyone who helped and all the fans. WE FARKING LOVE YOU! Created by J.J. Ranvier and edited by Rory Strahn-Mauk. Jo Prendergast was J.J. Ranvier.  The voice of Connor is Jared Sanders. The voice of Park Ranger Dave is Firas Alexander. The outro person is Caitlin Robb. Thanks to Hillel Wayne, Andrew Leamon, Eric Garneau and Joe Gennaro for help with the writing process.  Special thanks, as always, for all those over at The Nerdologues for their mentorship.  We will be on hiatus until November 2018, which means miniepisodes because we love giving you things.  Follow us @AfterDisasterBC for updates on our hiatus and, more importantly, survival tips.  Find us on tumblr at https://theafterdisasterbroadcast.tumblr.com/ for pictures of the apocalypse. Like us a lot? Please leave us a review on iTunes and support us at https://www.patreon.com/TheAfterDisasterBroadcast. We're still sending stickers!

created cw blackout nerdologues hillel wayne jared sanders
The After Disaster Broadcast
Ep. 18 - Eerie Lights in the Distance

The After Disaster Broadcast

Play Episode Listen Later Aug 17, 2018 21:35


Jo comes face to face with an old acquaintance. Then Jo and Co run into what might be a trap -- or a rare stroke of luck -- or both.  Created by J.J. Ranvier and edited by Rory Strahn-Mauk. Jo Prendergast was J.J. Ranvier.  The outro person is Caitlin Robb. Thanks to Hillel Wayne for help with the writing process.  Special thanks, as always, for all those over at The Nerdologues for their mentorship.  Follow us @AfterDisasterBC for updates and, more importantly, survival tips.  Find us on tumblr at https://theafterdisasterbroadcast.tumblr.com/ for pictures of the apocalypse. Like us a lot? Please leave us a review on iTunes and support us at https://www.patreon.com/TheAfterDisasterBroadcast. We're still sending stickers!

created lights distance eerie nerdologues hillel wayne
The Accidental Engineer
Does your software do what it is supposed to? Hillel Wayne teaches TLA+

The Accidental Engineer

Play Episode Listen Later Mar 12, 2018 51:00


Today Hillel Wayne joined us to discuss TLA+ and software verification methodology:

The After Disaster Broadcast
Ep. 8 - The Last Rest Stop on the Left

The After Disaster Broadcast

Play Episode Listen Later Feb 15, 2018 22:51


After finding an amazingly full warehouse, Jo keeps getting the creeping sense that someone or something found her... Created by J.J. Ranvier and edited by Colin Votier. Help writing this episode comes from Hillel Wayne, Andrew Rostan and Firas Alexander.  Jo Prendergast was J.J. Ranvier. The voice of Park Ranger Dave is Firas Alexander. Follow us @AfterDisasterBC for updates and, more importantly, survival tips.  Like us a lot? Please leave us a review on iTunes and support us at https://www.patreon.com/TheAfterDisasterBroadcast And thanks to supporters thus far, like Alan and Luis.

The After Disaster Broadcast
Ep. 5 - The Suburbs of Apathy

The After Disaster Broadcast

Play Episode Listen Later Jan 1, 2018 26:00


Jo, Jee-Hyun and Scout are stuck in the suburbs. Forced by weather and fate, they confront their past inside a inhabited furniture store.  Created by J.J. Ranvier and edited by Colin Votier. Help writing this episode comes from Hillel Wayne and Firas Alexander.  The poem this episode was written by Colin Kibler.   Jo Prendergast was J.J. Ranvier. The voice of Park Ranger Dave is Firas Alexander. The voice of the Artist is Pearl Paramadilok. The outro person is Caitlin Robb.  Follow us @AfterDisasterBC for updates and, more importantly, survival tips.  Like us a lot? Please leave us a review on iTunes and support us at https://www.patreon.com/TheAfterDisasterBroadcast And thank you to all who have supported us so far, particularly some of our first patrons like Sonny and Jane.