Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes. The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions. Transcript: AD: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like, hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky. And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error. STEPHANIE: That sounds reasonable to me. JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate: the idea that you use a parser like this to sort of transform data from more loose to more strict values, values where you can have more assumptions. And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the contract, create this class for me."
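[Editor's note: a minimal sketch of the parser object Joël describes, pairing a dry-rb contract with construction. The contract, the value object, and the tagged-array return value are illustrative assumptions; the version discussed in the episode returns dry-rb monads instead.]

```ruby
require "dry/validation"

# Illustrative contract -- the field names are assumptions, not from the episode.
class AddressContract < Dry::Validation::Contract
  params do
    required(:street).filled(:string)
    required(:city).filled(:string)
    required(:zip).filled(:string)
  end
end

Address = Struct.new(:street, :city, :zip, keyword_init: true)

# The parser object: validate the loose input and, in the same step,
# construct the richer value object. Callers never touch the contract
# or the monad machinery directly.
class Parser
  def initialize(contract:, target:)
    @contract = contract
    @target = target
  end

  # Returns [:ok, object] on success or [:error, messages] on failure.
  def call(attributes)
    result = @contract.call(attributes)

    if result.success?
      [:ok, @target.new(**result.to_h)]
    else
      [:error, result.errors.to_h]
    end
  end
end

parser = Parser.new(contract: AddressContract.new, target: Address)
parser.call(street: "123 Main St", city: "Boston", zip: "02101")
# => [:ok, #<struct Address street="123 Main St", ...>]
```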
And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice. STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that. JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate. Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern. STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain? JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that. STEPHANIE: Gotcha. JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors. But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that. STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that? JOËL: This is more about the developer experience. STEPHANIE: Got it. JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. 
So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape. And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that. So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated. STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense. JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on. STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts. JOËL: Yeah. STEPHANIE: Cool. JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system. STEPHANIE: Absolutely. That's very exciting. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet. JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor? STEPHANIE: Yeah, it's the latter. JOËL: Okay. 
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs]. JOËL: Oh no! STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears. JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me. STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky. And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder to and a lot more effort to [laughs] do in practice, I'm finding. JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk. STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective. JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell? STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it. JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th. 
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else. JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top? STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support, product, you know, the other users of your app. So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing. JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right? STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right? JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming. And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one.
If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior. STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that. JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work? STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally. JOËL: But it's an optional test failure, so you're allowed to let that test fail. STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically. JOËL: I see. STEPHANIE: Or an ignore list, yeah. JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync. STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs]. JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you. STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods. 
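[Editor's note: a sketch of the unit-test-level check Joël describes above -- asserting that every item in one place has a matching entry in another. All names here are illustrative assumptions, not the client codebase from the episode.]

```ruby
# spec/lints/admin_presenter_coverage_spec.rb
RSpec.describe "admin presenter coverage" do
  it "accounts for every User association in the admin presenter" do
    associations = User.reflect_on_all_associations.map(&:name)

    # The presenter keeps an allow list and an ignore list, as described.
    handled = AdminUserPresenter::PRESENTED_ASSOCIATIONS +
              AdminUserPresenter::IGNORED_ASSOCIATIONS

    expect(handled).to include(*associations),
      "Add new associations to PRESENTED_ASSOCIATIONS or IGNORED_ASSOCIATIONS"
  end
end
```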
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different? STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done. JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant. STEPHANIE: Ooh. JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app. STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here? JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime? Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match. So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it. STEPHANIE: Hmm. That is very compelling to me. JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that? STEPHANIE: Yeah, I have. 
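[Editor's note: the ad hoc boot-time check Joël sketches verbally might look something like the following; the directory, constant, and file layout are all assumptions for illustration.]

```ruby
# config/initializers/check_admin_registry.rb
# Enforce the invariant at boot: every file in the directory must have a
# matching entry in the registry array, and vice versa.
files = Dir[Rails.root.join("app/admin_entries/*.rb")]
          .map { |path| File.basename(path, ".rb") }
          .sort

registered = AdminRegistry::ENTRIES.map(&:to_s).sort

if files != registered
  raise "Admin registry out of sync with app/admin_entries. " \
        "Missing from registry: #{(files - registered).inspect}; " \
        "stale entries: #{(registered - files).inspect}"
end
```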
And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous. When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check. JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool. STEPHANIE: Ooh. JOËL: And what makes sense in a linter. STEPHANIE: What was the argument for the linter being a worse tool by doing that? JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article. STEPHANIE: I'll have to revisit it after the show [laughs]. JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes. STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team? JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you." STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen." I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking. JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. 
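[Editor's note: for reference, the per-method escape hatch being debated here is RuboCop's inline disable comment. The comment syntax is real RuboCop; the method itself is a made-up example.]

```ruby
# An inline disable switches a rule off for a method or a span of lines.
def import_legacy_rows(rows) # rubocop:disable Metrics/MethodLength
  # ...a long method you have decided is justified and can defend in review
end
```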
Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go. STEPHANIE: Interesting. It's kind of like the honor system, then [laughs]. JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there are invariants you want to hold. Maybe there's certain things we're, like, this should never get to production. Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide. STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum. JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariants you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool. STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics, which measure code complexity and things like that. How would you feel if that were a check that was blocking your work? JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively Flog and other, fancier checks on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build. You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up." STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion.
I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice. JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that. STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers? JOËL: I'm going to throw a fancy phrase at you. STEPHANIE: Ooh, I'm ready. JOËL: Signal-to-noise ratio. STEPHANIE: Whoa, uh-huh. JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds. So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise, I think is useful—to go back to that word again, heuristic for managing whether a tool is still helpful. STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process. JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check. And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. 
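[Editor's note: a sketch of what that split can look like in a Dangerfile. Danger's warn/fail API and git helpers are real; the specific checks and paths are invented for illustration.]

```ruby
# Dangerfile

# Non-blocking nudge: a PR comment, not a failed build.
new_models = git.added_files.grep(%r{^app/models/})
if new_models.any?
  warn("New models added (#{new_models.join(', ')}) -- consider registering " \
       "them in the admin presenter.")
end

# Blocking check: things you should never ship go through `fail`.
if git.modified_files.include?("Gemfile.lock") &&
   !git.modified_files.include?("Gemfile")
  fail("Gemfile.lock changed without a matching Gemfile change.")
end
```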
So, splitting that out and saying, "Look, we've got blocking and non-blocking checks because we recognize these are two classes of problems," can be a helpful solution to this problem. STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system? [laughter] JOËL: I may be too much of a developer. STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations. JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up. STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
On episode #35, Jimmy speaks with Hillary Nussbaum from Code Climate. They discuss:
- How her start in reality prepped her for marketing to technical buyers
- The fundamental marketing principles that power Code Climate's strategy
- How (and why) she's so careful about voice, tone, and messaging
David Beisel is a Co-founder and Partner at NextView Ventures, where he invests in technology-driven companies that are redesigning the Everyday Economy. Some of his past portfolio companies include Attentive, TripleLift, Code Climate, Parsec (acquired by Unity for $320M), BookBub, thredUP (IPO'ed), MealPal, The Nudge, Hatch, and TapCommerce (acquired by Twitter for $100M), among others. Prior to NextView, he served as a Vice President at Venrock (an early-stage VC firm initially established as the venture capital arm of the Rockefeller family). You can learn more about: 1. What to look for when investing in early-stage startups 2. Raising funding as a startup and as a venture fund 3. How to attract top startups and build a high-quality network ===================== YouTube: @GraceGongCEO Newsletter: @SmartVenture LinkedIn: @GraceGong TikTok: @GraceGongCEO IG: @GraceGongCEO Twitter: @GraceGongGG Join the SVP fam with your host Grace Gong. In each episode, we are going to have conversations with some of the top investors, superstar founders, as well as well-known tech executives in Silicon Valley. We will have a coffee chat with them to learn their ways of thinking and actionable tips on how to build or invest in a successful company. =====================
Forecasting is essential for every team, and it involves tracking key performance indicators and financial metrics to measure each team's success. But many people don't discuss best practices, especially when you begin to add headcount into the mix. Our host Joe Michalowski welcomes Brian Weisberg, the CFO at Tidelift, back to The Role Forward podcast. While they work through departmental headcount planning, Joe and Brian also discuss the importance of understanding the business's goals, why you need to be realistic with expectations, and why companies should invest early in a financial position.
Links Referenced in This Episode
- Dave Kellogg's Blog
- Lauren Kelley and Tom Huntington on the R&D Magic Number
- Guest-at-a-Glance
Are you working on Ruby on Rails applications that are constantly on fire, overwhelmed by technical debt? What if you were building technical wealth instead? Learn which tools & strategies help you work with legacy code effectively, remove dead code, and leave tech debt behind. Listen to and watch our conversation with M. Scott Ford and learn how to build technical wealth, enjoy working with legacy code, use tools and strategies to remove dead code, and thrive as a mender in a world of makers.
About our guest: M. Scott Ford is the Co-Founder & Chief Code Whisperer of Corgibytes, where he has quietly led a software maintenance revolution for the past decade. Where most people find nothing but frustration, shame, and bugs in legacy code, Scott has centered his work around his genuine love of software modernization and helping others use joy, empathy, and technical excellence to make their systems more stable, scalable, and secure. Scott's ideas have been featured in books such as The Innovation Delusion and as a guest lecturer at Harvard University. Scott is the author of three courses on LinkedIn Learning: Dealing With Legacy Code And Technical Debt, Code Quality, and Clean Coding Practices. He is the host of the podcast Legacy Code Rocks and enjoys helping other menders find a sense of belonging in a world dominated by makers.
Episode Links
- Watch the interview on YouTube
Episode Notes and Links
- Legacy Code Rocks
- Legacy Code Rocks Slack Group (weekly meetups at 1pm EST on Wednesdays)
- MenderCon (May 10th, 2023)
- CorgiBytes
- Scott's LinkedIn profile
- Scott's Twitter profile
- Scott's GitHub profile
- How to Improve Code Quality on a Ruby on Rails Application
- Ruby Code Quality with Ernesto Tagwerker
- Get to Senior
Chapters
00:00 intro
01:57 makers vs menders
03:43 menders love improving legacy codebases
05:06 greenfield projects are glamorized
06:30 greenfield-legacy projects
09:07 working with legacy code: tools & strategies
09:53 building technical wealth vs tech debt
14:29 "the big rewrite" never works
18:54 removing redundant code
22:56 features not used very often
25:41 static code analysis tools
27:23 charge extra for features used by fewer customers
30:52 find code that is never used
34:09 code audit with feature flags
36:07 enforce code quality with tests and CI
39:26 measure code quality over time
41:09 churn, complexity, and CodeClimate score
42:43 bus factor
45:59 working with makers
51:24 hanging out with other menders
53:27 follow hexdevs
Inspired by a Slack thread, Joël invites fellow thoughtbotter Aji Slater on the show to talk about when you should use class methods and when you should avoid them. Are there particular anti-patterns to look out for? How does this fit in with good object-oriented programming? What about Rails? What is an "alternate constructor"? What about service objects? So many questions, and friends: Aji and Joël deliver answers! Backbone.js collections (https://backbonejs.org/#Model-Collections) Query object (https://thoughtbot.com/blog/a-case-for-query-objects-in-rails) Rails is a dialect (https://solnic.codes/2022/02/02/rails-and-its-ruby-dialect/) Meditations on a Class Method (https://thoughtbot.com/blog/meditations-on-a-class-method) Why Ruby Class Methods Resist Refactoring (https://codeclimate.com/blog/why-ruby-class-methods-resist-refactoring/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter Aji Slater. AJI: Howdy. JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Aji, what's new in your world? AJI: Yeah, well, I just joined a new project, so that's kind of the newest thing in my day-to-day work world. I say just joined, but I guess it was about a month ago now. I'm on the Liftoff team at thoughtbot, which is different than the team that you're on. We do more closer to greenfield ideas and things like that. So there's actually not much to speak about there in that project just yet. Rails new is still just over the horizon for us. So I've been putting a lot of unused brain cycles toward a side project that is sort of a personal knowledge base concept, and that's a whole thing that I could probably host an entire podcast about. So we don't have to go too deep into my theories about that. But suffice it to say I've talked to some other ADHDers like myself who find that that space is not really conducive to the way that we think and have to organize ourselves and our personal knowledge stores. So sort of writing an app that can lend itself to our fast brains a little bit better. JOËL: Nice. I just recently recorded an episode of this podcast talking a little bit about note-taking approaches and knowledge-base systems. So, yeah, it's a topic that's very much top of mind for me right now. AJI: Yeah, what else is going on in your world? JOËL: I'm based in New England in the U.S. East Coast, and it is fall here. I feel like it happened kind of all of a sudden. And the traditional fall thing to do here is to go to an orchard and pick apples. It's a fun activity to do, and so I'm in the middle of planning that. Yeah, it's fun to go out into nature, very artificial space. AJI: [laughs] JOËL: But it's a fun thing to do every fall. AJI: Yeah, we do that here too. There's an orchard up north of us where my wife and I live in Chicago that we try to visit. And Apple Fest in Lincoln Square is this weekend, and we've been really looking forward to that. Try another time at making homemade hard cider this season, I think, and see how that goes. JOËL: Fun. When you say another time, does that mean there was a previous unsuccessful attempt? AJI: Yes. Did the sort of naive approach to it, and there is apparently a lot more subtlety to cidermaking than there is home-brew beer. And we got some real strong funk in that cider that did not make it necessarily an enjoyable experience. 
Like, it worked but wasn't the tastiest. JOËL: So it got alcoholic. It was just terrible to drink. AJI: Yeah, I would back that up. JOËL: So recently, at thoughtbot, we had a conversation among different team members about the use of Ruby class methods, when they make sense, when they are to be avoided. What is their use case? And different people had different opinions. So I'm curious what your take on class methods are. When do you like to use them? AJI: Yeah, I remember those conversations coming up. I think I might have even started one of those threads because this is something that comes up to me a lot. I'm a long-time listener, first-time caller to The Bike Shed. [laughs] I can remember awaiting new episodes from Sage and Derek to listen to on my way to and from my first dev job. And at one point, Sage had said, "Never put your business logic in something that you can't call .new on." And being a young, impressionable developer at the time, I took that to heart, and that seems something that just has been baked in and stayed very truthful to me. And I think one of the times that I asked that and got some conversation started was I was trying to figure out why did I feel that, and like, why did they say that? And I think, yeah, I try to avoid them. I like making instances of things. What is your stance on the Class Method, capital C, capital M? JOËL: I also generally avoid them. I have sort of two main scenarios that I like to use class methods, first is as an alternate constructor. So new is effectively a class method that's built into Ruby's object model. But sometimes, you want variations on your constructor that maybe sets values by default or that construct things with some slightly different inputs, things like that. And so those almost always make sense as class methods. The other thing that I sometimes use a class method for is as an alias for newing up an instance and then immediately calling an instance method on it. So it's just a slightly shorthand way to call some code. AJI: That's usually been my first line defense of when there's someone who might feel more comfortable doing class methods that sees me making an instance and says, "Well, you don't need an instance, just make a class method here because it'll get too long if you have to .new and then dot this other thing." And so I'll throw in that magic little trick and be like, here you go. You can call it a class method, and you still get all the benefits of your instance. I love that one. JOËL: Do you feel like that maybe defeats the purpose? In terms of the interface that people are using, if you're calling it a class method, do you lose the benefits of trying to do things at the instance level instead? Or is it more in the implementation that the benefits are not at the caller level? AJI: I think that's more true that the benefits are at the instance level, and you're getting all of that that goes along with it. And you're not carrying along a lot of what I see as baggage of the class method version, but you're picking up a little bit of that syntactic sugar. And sometimes it's even easier just to conceptualize, especially in the Rails space because we have all of these different class methods like, you know, Find is one I'm sure that we use all the time to call it on a class, and we get back an instance. And so that feels very natural in the Rails world. JOËL: I think you could make an argument that that is a form of alternate constructor. It's a class method you call to get an instance back. 
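[Editor's note: a small sketch of both uses Joël lists -- an alternate constructor, and a class-method shorthand that delegates to an instance. The domain and the Entry model are invented for illustration.]

```ruby
class Report
  # Alternate constructor: a class method that returns a configured instance,
  # just like .new (or ActiveRecord's .find) does.
  def self.for_week(date)
    new(Entry.where(created_at: date.all_week))
  end

  # Shorthand: new up an instance and immediately call the instance method,
  # so callers can write Report.render(entries) in one step.
  def self.render(entries)
    new(entries).render
  end

  def initialize(entries)
    @entries = entries
  end

  def render
    @entries.map { |entry| "#{entry.id}: #{entry.title}" }.join("\n")
  end
end
```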
AJI: Yeah, absolutely. JOËL: The fact that it makes a background request to the database is an implementation detail. AJI: For sure. I agree with that. I had a similar need in a recent project where the data was kept on a third-party API. So I treated it the same way as, instead of going out to the database like ActiveRecord does, made a class method that went off to the API and then came back and made the object that was the representation of that idea in our application. So, yeah, I wholeheartedly agree with that. JOËL: So in Rails, we have the scope keyword, which will run some query to get a collection of records. But another way that they're often implemented is as class methods, and they're more or less interchangeable. How do you feel about that kind of use of class methods on an ActiveRecord object? Does that violate some of the ideas that we've been talking about? Does it sort of fit in? AJI: I think when reaching for that sort of need, I sort of fall into the camp of making a class method rather than using a scope. It feels a little less like extending some basic Rails functionality or implying that it's part of the inherent framework and makes it a little more like behavior that's been added that's specific to this domain. And I think that distinction comes into my thinking there. I'm sure there are other reasons. What are your thoughts there? Maybe it'll spark an idea for me. JOËL: For me, I think I also generally prefer to write them as class methods rather than using the scope keyword, even though they're more or less the same thing. What is interesting is that, in a way, they kind of feel like alternate constructors in that they don't give you an instance; they give you back a collection of instances back. So if we bend the rules a little bit...these are not hard and fast rules but the guidelines. If we bend the guidelines a little bit, they kind of fit under the general categories for best uses of class method that we discussed earlier. AJI: Yeah, I can definitely see that. I tend to think, or at least I think when you had first brought up the term of alternate constructors, my first thought was of one instance; you ask for a thing, and it gives you this thing back. But it's the same sort of idea with that collection because you're not getting just one instance; you're getting many instances. But it's the same kind of idea. You've asked the larger concept of the thing, the class, to give you back individuals of that class. So that totally falls in line with how I think about acceptable uses of these class methods the way that we've been talking about them. JOËL: Rails is something really interesting where a lot of the logic that pertains to a single item will live at the instance level. And then logic that pertains to a group of items will live at the class level. So you almost have like two categories of operations that you can run that semantically live either at the class or the instance level. Have you ever noticed that separation before? AJI: I think that separation feels natural to me because I came into programming through Rails. And I might have been colored in my thinking about this by the framework. The way that I conceptualize what a class is being sort of this blueprint or platonic ideal of what an individual might be and sort of describing the potential behaviors of such an individual. Having that kind of larger concept be able to work across multiple instances feels, yeah, it feels sort of natural. 
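[Editor's note: the scope-versus-class-method equivalence discussed above, shown side by side. In practice you would define only one of the two.]

```ruby
class Order < ApplicationRecord
  # As a scope...
  scope :recent, -> { where(created_at: 1.week.ago..) }

  # ...or as a class method; the two are interchangeable for callers,
  # and both return a collection, like an alternate constructor.
  def self.recent
    where(created_at: 1.week.ago..)
  end
end
```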
Like, if you were to think about this idea of a chair, then if you went in and modified what a chair is to mean, then any chair that you asked for later on would kind of come with that behavior along with it. Or if you ask for several chairs, they would all sort of have that idea. JOËL: I think, similar to you, I had that outlook that it's almost like a natural structuring of things. And then, years ago, I got into the hot, new JavaScript framework that was Backbone.js. And it actually separates...it has like a model for individual instances, and then a separate kind of model thing for collections. And that kind of blew my mind. But what was interesting, then, is that you effectively have instance methods that can deal with all things collection-related, any sort of filtering, any sort of transformations. All of those are done when you have an instance of a collection, basically, that you act on. And I guess if we were trying to translate that into Rails, that's almost like the concept of a query object. AJI: Hmm, it's sort of an interesting way to think about that. And Backbone, I feel like I did a day of that in bootcamp. But it has been some time, so I'm not sure that I've worked with that pattern specifically. But it does sort of bring up the idea of how much do you want to be in one model class? And do you want it to contain both of these concepts? If you have a lot of complex logic that is going to be dealing with a collection, rather than putting that in your model, I think I would probably reach for something like a service object that is going to be specifically doing that and sort of more along that Backboney approach, maybe like a query object or something like that. JOËL: Interesting. When you use the term service object, do you mean something that's not a Rails model, just in general? Or are you talking specifically about one of these objects that can respond to call and is... I've heard them sometimes called Command objects or method objects. AJI: Yeah, that's an overloaded term certainly in the Rails space, isn't it? Service object, and what does that mean? I think generally, when I say it, I'm meaning just a plain, old Ruby object like something that is doing its one thing. You're going to use it to do its implementation details. They're all kind of hidden behind private methods and return you something useful that you can then plug into what you were doing or what you need going on in some other place in your app. So it, to me, doesn't imply any specific implementation of, like, do you have call? Do you use it this way? Do you use it that way? But it's something that's kind of outside of a model, a view, or a controller, and it encapsulates some kind of behavior. So whether that, like we're saying, is a filtering or, you know, it's going to wrap that up. JOËL: I see. So, for you, a query object would be a service object. AJI: Yeah, I think so. You know, maybe this is one of the reasons why I generally don't like the overuse of the term service object in our space. I don't know if that's a hot take, and I'm going to get emails for this. But -- JOËL: Everybody send your angry tweets @Aji. AJI: Yeah, do it to @Aji on Twitter because I've been trying to get that three-letter handle for years. No, but if you want to talk to me, I'm @DoodlingDev. But, yeah, certainly, it does feel sometimes like an overloaded term, and I just want to go back to talking about plain, old Ruby objects. JOËL: So, service object is definitely an overloaded term.
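[Editor's note: the query object Aji alludes to is typically a plain old Ruby object wrapping collection logic. A minimal sketch with invented names:]

```ruby
class OverdueInvoicesQuery
  def initialize(relation = Invoice.all)
    @relation = relation
  end

  def call
    @relation.where(paid: false).where("due_on < ?", Date.current)
  end
end

# Usage: OverdueInvoicesQuery.new.call, or pass in an existing scope to compose.
```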
It's used for a lot of things. One thing that I've often seen it referring to are objects that respond to call. And just to keep away the confusion, maybe let's call them Command objects for the purposes of this conversation. AJI: Sounds good. JOËL: I commonly see them done where the implementation is done with a class method named call. Sometimes it delegates to an instance that also has call. Sometimes it's all implemented as a class method. How do you feel about that pattern? AJI: I don't mind the idea of a thing that responds to call. It, in a way, sort of implies that the class is sort of named as an action, which I don't like. It has an er name, and that kind of has a class named as a pattern. And that always sort of bugs me a little bit. But what I hope for when I open up one of those sorts of classes or objects is that it's going to delegate to an instance because then you're, again, picking up all of those wonderful benefits of the instance-level programming. JOËL: You keep mentioning the wonderful benefits of instance-level programming. What are some of those benefits? AJI: One of the ones that sort of strikes me most visibly or kind of viscerally when I see it is that they're very easy to understand. You can extract methods pretty easily that don't turn into kind of clumsy code of a bunch of different class methods that all have four arguments passed in because they're all operating on the same context. And when you're all operating on the same context, you have really a shared state. And if you're just passing that shared state around, it just gets super confusing. And you get into the order of your arguments, making a big impact on how you are interacting with these different things. And so I think that's sort of the first thing that comes to mind is just visually noisy, which for me is super hard to get my head around, like, well, how am I supposed to use this thing? Can I extend it? JOËL: Yeah, I would definitely say that if you have a group of class methods that all take, commonly, it's the first argument, the same piece of data and tries to operate on it, that's probably a code smell that points to the fact that these things want to be an instance that lives around it. This could be a form of primitive obsession if you're passing around, let's say, a hash, all of these, and maybe what you really want is to sort of reify that hash into an object. And then all these class methods that used to operate on the hash can now become instance methods on your richer domain object. AJI: Yeah. What do you say to the folks that come from maybe a more functional mindset or are kind of picking up on the wave of functional programming that's out there in the ethos that say that you've got a bunch of side effects when you don't have everything that your method is operating on, being passed on or passed in? JOËL: I think side effect is a broad term. You could refer to it as modifying the internal state of an object. Technically, mutation is a side effect. And then you have things like doing effects out in the outside world, like making an HTTP query, printing to the screen, things like that. I think those are probably two separate concepts. Functional programming is great. I love writing functional code. When you're writing Ruby, Ruby is primarily an object-oriented language with some functional aspects brought in. In my opinion, it's very, you know, a great combination of the two. I think they've gotten the balance well so that the two paradigms play nicely together rather than competing. 
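[Editor's note: the Command object shape described a moment ago, with the class-level call delegating to an instance so the work keeps instance-level state. The names are invented.]

```ruby
class SyncSubscriptions
  # The conventional entry point: delegate to an instance's #call.
  def self.call(...)
    new(...).call
  end

  def initialize(account)
    @account = account
  end

  def call
    stale_subscriptions.each { |subscription| refresh(subscription) }
  end

  private

  # Instance state means helpers don't need the account threaded through
  # as an argument to every method.
  def stale_subscriptions
    @account.subscriptions.select(&:stale?)
  end

  def refresh(subscription)
    subscription.touch
  end
end
```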
But I think it's an object-oriented language first with some functional added in. And so, I mean, I guess you could; there is a way to write Ruby where everything is a lambda or where everything is a class method that is pure and takes in inputs. But that's not the idiomatic way to write Ruby. Generally, you're creating objects that have some state. That being said, if an object is mutating a lot of global state, that's going to become problematic. With regards to its internal state, though, because it is very much localized and it's private, nobody else gets to see it; in many ways, an object can mutate itself, and that change stays pretty local. AJI: Yeah, absolutely. You've tripped onto another one of my favorite rabbit holes of idiomatic code, and, like, what does that mean, and why should we strive for that? But I absolutely agree that when Ruby is written to conform to other paradigms that aren't mostly object-oriented, it starts to get hard to use. It starts to feel a little off. Maybe it has code smells around it. It's going to give me the heebie-jeebies, whatever that might mean for you or for different developers. I think we all have our things that make us say this doesn't feel right. And you kind of dig into it, and you can sort of back that up. And whenever Ruby starts to look like something that isn't lots of little objects sending messages is when I start to get a little on edge, maybe. JOËL: It is worth, I think, calling out the fact that Ruby is a very expressive language. And there are effectively many...you could call them dialects of it. You have your sort of pure OO approach. You have what's typically written in Rails, which has some OO things. But Rails is also, in many ways, very DSL-heavy and, in some ways, very class method-heavy. So writing Rails is sort of its own twist on Ruby. And then, some people will try to completely retrofit a functional approach onto Ruby, and that's also a way that some people like to write their code. And some of these, you can't necessarily say they're not valid, but they're not what you'll mostly see in the wild. And they're not necessarily the approach that I would recommend. AJI: Yeah, that's the blessing and the curse of both programming in general and such an expressive language as Ruby: there are many different valid ways to do it. And what are your trade-offs going to be when you make those choices? I think that falls kind of smack dab into that idiomatic conversation. And it comes up for me, too, as a consultant because I tend towards the idiomatic, those common patterns and practices, because I'm not going to live with this code forever. I need to hand it off. And the closer it is to what you might see out there in the wild more commonly, the easier it will be for the next Ruby developer to come pick it up and extend it. JOËL: So, you'd mentioned earlier some of the benefits of instance programming. One of the things that I find is maybe a little bit weird when you go heavily into the class method approach is that there is only one instance of the class, and it is globally available. AJI: Are you talking about a singleton there? JOËL: Yes. And, in fact, your class is effectively a singleton, potentially with globally mutable state. I hope not, but potentially with all of the gotchas and warnings that that entails. And so, if you think of your user instance, you need a reference to it, and there can be multiple of them, and you can call methods on them.
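A small sketch of that global availability, with made-up names: class-level state is one shared copy, reachable and mutable from anywhere the constant is visible, while each instance keeps its own state behind a reference.

```ruby
# One shared copy of @count, mutable from anywhere the constant is visible.
class GlobalCounter
  @count = 0

  class << self
    attr_reader :count

    def increment
      @count += 1
    end
  end
end

# Each instance carries its own @count; callers need a reference to it.
class Counter
  attr_reader :count

  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end
end

GlobalCounter.increment # any code in the app mutates the same state
GlobalCounter.increment
puts GlobalCounter.count # => 2

a = Counter.new
b = Counter.new
a.increment
puts a.count # => 1
puts b.count # => 0 (b is unaffected; state stays local)
```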
If everything is happening at the class level, there is a single user class in memory, shared by anyone who wants to use it. It's globally accessible. Everyone can call methods on it. Yeah, in many ways, it does act like a singleton. AJI: And let's not even get into the Ruby chestnut that everything's an object, so the class is itself an instance of a class. JOËL: Yes. AJI: But, absolutely, it can start to act that way. But the singleton is enshrined in the Gang of Four book of patterns. So, what's wrong with a singleton? I hope you can understand over the airwaves the devil's advocate that I'm playing here. [laughs] JOËL: Yes. There are little horns that have sprouted on your head right now. I think part of the problem with singletons is that, generally, they are globally accessible. There's the problem of global mutable state again. There was a time, I think, when the OO community went pretty wild with singletons, and people realized that this was not great. And so, over time, a consensus evolved that singletons are a pattern that, while useful, should be used rarely and in moderation. And a lot of warnings have been shared in the community, like, be careful not to overuse the singleton pattern, or don't build your system out of singletons. And maybe that's what feels so weird to me about a system that's built primarily in terms of class methods: it feels like it's built out of singletons. AJI: Yeah. When I think of object-oriented programming, I kind of fall back to one of its ideals, which is that it represents the world more accurately, or maybe more understandably. And that sort of idea doesn't fit that paradigm, does it? If you're a factory that is making widgets, there's not one canonical widget that all of your customers are going to be talking to and using. They are each going to have their own individual widgets. And those customers can be thought of as the consumers of your methods, your objects. JOËL: The idea being that in the real-world thing you're simulating, there are normally multiple actors of every type rather than a single sort of generic one that stands in for everybody. AJI: If this singleton is going to be your interface, the way that you interact with each of these things that are conceptually different, like a user, then differentiating between users becomes a lot harder to do. It takes a lot more setup and a more involved process to refer to this user here and that user there, whereas with little instances you've got a direct reference to a single concept, a single individual. JOËL: So what you've described is a very classic OO mindset. You find the data and the behaviors that go together. You try, oftentimes, to simulate the world, to model it in terms of actors that give and receive messages. In many ways, though, I think when you're building a system out of class methods, you're thinking about the world in an almost different paradigm. In many ways, it feels almost procedural. What are the behaviors that need to happen in my app? What are the things that need to be done? You'd mentioned earlier that oftentimes these classes or the methods on them will end up with -er names; they're all verbs. You have a thing-doer, a thing-executor, a thing-manager. They all do things rather than having domain concepts extracted and pulled out. Would you say that that feels somewhat procedural to you as well? AJI: Yeah.
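As a quick, hypothetical contrast between the two mindsets being described: the behavior-first version collects a verb into a doer class of class methods, while the object-first version hangs the same behavior off the domain concept itself.

```ruby
# Behavior-first, procedural flavor: a verb-named class of class methods
# operating on loose data.
class InvoiceFinalizer
  def self.finalize(invoice_hash)
    invoice_hash.merge(finalized: true)
  end
end

# Object-first flavor: the domain concept owns its data and its behavior.
class Invoice
  attr_reader :amount, :finalized

  def initialize(amount)
    @amount = amount
    @finalized = false
  end

  def finalize
    @finalized = true
    self
  end
end

InvoiceFinalizer.finalize({ amount: 100 }) # => { amount: 100, finalized: true }
Invoice.new(100).finalize.finalized        # => true
```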
I think a great way to divide it is the way that you have right there; it's these sorts of mindsets. Do you have collections of things that have behaviors, or do you have collections of behaviors that might refer to things? Whether you approach the design of a system from that behavior side or from that object side is going to be a different mindset. Procedural is more focused on that kind of behavior, on telling it what to do, rather than putting...I think this is probably a butchered Sandi Metz example, but putting your roommate who hates cats and a cat that doesn't want its tail stepped on in one room, and eventually, things will happen accordingly. And those two mindsets are going to end up with very different architectures, very different designs, very different ways of building these applications that we make. And, again, does that come back to the fact that Ruby, potentially to a lesser extent than some languages but still in the same camp, is an object-oriented language, and it functions best when considered and then constructed in that mindset? And I often wonder if language developers and language designers make anti-patterns sort of purposefully awkward to use. Like, if you want to hide a lot of class methods, you can do the class << self version of things or have private_class_method littered all the way through your file. And it seems to me like that might be a little bit of a flag that, like, hey, you're working against the system here. You're trying to make it do a thing that it doesn't naturally want to do. JOËL: Yeah, you'd mentioned this private_class_method thing because, by default, it's hard to get class methods to be private. You have to use a special keyword. You can't just write private in the class and then assume that the methods below it are going to be private because that does not apply to class methods. AJI: Exactly. And that friction to making an object that has a smaller interface, that kind of hides its implementation, seems as though it's a purposeful way that Ruby itself was designed to maybe nudge us developers into a certain way of working, or to suggest a certain mindset. JOËL: There's a classic Code Climate article titled Class Methods Resist Refactoring. And it mentions different ways that, when you're relying heavily on class methods, it's harder to do some of the traditional refactors, things like extract method, because it's clunkier: you can't have private methods as easily. You can't share state, so you have to thread variables through. I guess, technically, you can share state with things like class variables and class instance variables, but if you do that, you will probably be very sad. AJI: [laughs] Yeah, you're opening yourself up to a whole world of hurt there with that, aren't you? Sort of sharing data so dangerously around your app. JOËL: So, I'm a big fan of test-driven development. And one of the things that TDD believes in is that test pain should help guide the design of your system and that, generally, things that are easier to test are better designed. AJI: Yeah. JOËL: It's often easier to test class methods because they are globally available singletons. I can easily stub a class. Whereas if I need to stub an instance, I need to do uglier things, like stub any instance of, or stub the constructor to return a double, or some other kind of dirty trick like that.
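A sketch of that testing asymmetry using RSpec, with invented classes: stubbing the globally reachable class is one line, while stubbing an instance the code creates internally pushes you toward allow_any_instance_of, which is widely considered a smell.

```ruby
require "rspec/autorun"

# Hypothetical collaborator that a Checkout instantiates internally.
class PriceFetcher
  def self.call
    new.call
  end

  def call
    42 # imagine a slow network call here
  end
end

class Checkout
  def total
    PriceFetcher.new.call + 5 # collaborator is hard-coded inside the method
  end
end

RSpec.describe Checkout do
  it "stubs the class method in one line, because the class is global" do
    allow(PriceFetcher).to receive(:call).and_return(100)
    expect(PriceFetcher.call).to eq(100)
  end

  it "needs any_instance_of to stub the internally created instance" do
    allow_any_instance_of(PriceFetcher).to receive(:call).and_return(100)
    expect(Checkout.new.total).to eq(105)
  end
end
```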
Does that mean that TDD would prefer a class method-based approach to writing code? AJI: I think that a surface-level reading might say that it does. And if you're thinking, I want to get this thing done that's right in front of me right now and just move forward, the first pass might kind of imply that, too. But when that behavior grows or changes, or you come back to something that was implemented that way, the number of backflips that you have to do becomes a lot higher with class methods. Because I find that, yes, you might have to stub out or pass in a created object or something like that. But if you've got a class method, especially if it is calling other class methods inside it, then all of a sudden, you have in your test this setup that looks completely unrelated to anything that you're running and testing. You have to have all of this insight or knowledge of what those classes are doing just to set up your test framework before you can even run it. Another thing that is looked to as an axiom when writing tests, and that can imply this class approach, is that you shouldn't change your code just for the test. If you're doing dependency injection or something like that, passing around little objects, then you're making your code more complicated just to make your tests look a certain way. JOËL: That's interesting. So maybe I'm reacting to some test pain by trying to change my tests first. I'm trying to deal with some collaborators, and it is tricky to do. And so I decide, well, the thing I want to do is reach for stubbing. But then that's hard to do because they're instances. So, in order to make that already-compromised test work better, now I change the code to be nicer for the test to use, mostly classes, because those are global. Whereas maybe the correct path to take initially is to say, oh, there is test pain here because I'm trying to isolate an object from its collaborators. Maybe we need to pass an object in as an argument rather than hard-coding it inside the class. AJI: Yeah, absolutely. JOËL: So I guess you follow the test pain, but maybe the problem is that you've already kind of gone down a path that might not be the best before you got to the point where you decided that you needed a class method. AJI: And I think that idea of following the test pain can be, again, there are only shades of gray; there is no black and white. It can be taken in a lot of different ways. And the way that I think about it is that test pain is also an early warning sign that there's going to be pain if you want to reuse this class or these behaviors somewhere else. And if it was useful somewhere, it's likely it's going to be useful in another place. And there are many different kinds of test pain. The testing is a little easier with a class method because you're not stubbing out any instance of; you're just stubbing. But really, what's the difference between stubbing out any instance of and stubbing out the class? Is that just a semantic difference? Is that -- JOËL: Because someone on the internet said that stubbing any instance of is bad. AJI: Ooh, right, the internet. I should have read that one. The thing that you can do with passing around instances, sending messages to instances as you do when you're calling a method, is that you can easily swap in a different object if you need to stub it.
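Here's how the "pass an object in" fix might look, again with hypothetical names: inject the collaborator with a sensible default, and the test hands in a tiny hand-rolled fake with no stubbing library at all.

```ruby
# Hypothetical real collaborator.
class PriceFetcher
  def call
    42 # imagine a slow network call here
  end
end

class Checkout
  # Dependency injection: production code uses the default, while tests
  # can swap in anything that responds to #call.
  def initialize(price_fetcher: PriceFetcher.new)
    @price_fetcher = price_fetcher
  end

  def total
    @price_fetcher.call + 5
  end
end

# In a test, a tiny fake stands in for the real thing:
class FakeFetcher
  def call
    100
  end
end

puts Checkout.new(price_fetcher: FakeFetcher.new).total # => 105
puts Checkout.new.total                                 # => 47
```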
It's similar to how you can change the implementation under the hood of an object and pass in an object that responds to the same messages and kind of keep moving forward with your duck typing. You can go into your tests and pass in sort of an object that's always going to return a certain thing...because we're not testing what that object does; we just need a certain response so that we can move forward with the pathway that is under test. You can do that in so many different ways. You could have FactoryBot, for instance, give you a certain shape of a thing. You can create a tiny, little class right there in your test that does something specific, where it's easily understood what's going on under the hood. And instead of having to stub out or create all of these pathways that overwrite logic happening in different class methods or elsewhere in the application, you can just pass in this one simplified thing to keep your tests smaller and easier to wrap your head around in one go. JOËL: I think what I'm getting here is that when you design your code around instances, you're more likely to build it in a modular way where you pass objects to other objects. And when you build your code using class methods, you're more likely to write it in a hard-coded way. Because you have that globally available class, you just hard-code it and then call it directly rather than passing things in. And so things end up more coupled, and high coupling leads to more test pain. AJI: Yeah, I think you've really hit on something here: the approach of using class methods locks that class into a single context or use case. Usually, it is this global thing that works one way, and that's even kind of backed up by the fact that class methods are load-time logic instead of run-time logic. And that not only couples but makes things more brittle and less amenable to reuse. JOËL: That's a really interesting distinction. I often tend to think of run time versus load time in terms of composition versus inheritance. With composition, you can combine objects together at run time and get behaviors built on the fly as the code is executing, whereas inheritance inherently freezes you into a particular combination of behaviors at the time of loading the code. It's something that the programmers set up, and so it is much less flexible. And that run-time versus load-time dichotomy is one of the reasons why the Gang of Four patterns book recommends composition over inheritance in many situations. And I hadn't made that connection for class methods versus instance methods, but I think there's a parallel there. AJI: Yeah, absolutely. The composition versus inheritance thing, I think, goes hand in hand with the conversation that we're having about putting your behavior on a class versus an instance because...and I don't know if this is, again, yielding my thoughts to 'the internet said' that composition is preferable to inheritance. But without unpacking that right there, it is certainly something that I strive for as well. And while it might have, much like TDD, some kind of superficial, short-term complexity, it has long-term payoff in that flexibility, that reuse, that extensibility, and all of those other buzzwords that we developers like to throw around. JOËL: So, you've shared a lot of thoughts on the use of class methods.
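A minimal sketch of that run-time versus load-time point, with invented names: the inheritance pairing is fixed when the class is defined, while the composed pipeline is assembled, and reassembled, while the program runs.

```ruby
# Load time: this pairing of behaviors is frozen when the class is defined.
class Notifier
  def deliver(message)
    puts message
  end
end

class LoudNotifier < Notifier
  def deliver(message)
    super(message.upcase)
  end
end

# Run time: behaviors are combined on the fly by passing objects around.
class Pipeline
  def initialize(*steps)
    @steps = steps
  end

  def process(message)
    @steps.reduce(message) { |msg, step| step.call(msg) }
  end
end

upcase  = ->(msg) { msg.upcase }
exclaim = ->(msg) { "#{msg}!" }

# The same pieces recombine without defining any new classes:
puts Pipeline.new(upcase, exclaim).process("shipped") # => "SHIPPED!"
puts Pipeline.new(exclaim).process("shipped")         # => "shipped!"
```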
I think this could branch into so many other aspects of object-oriented design that we haven't looked at or could go deeper on, things like TDD. We could look into how it works with the SOLID principles, all sorts of things. But I think the big takeaway for me is that class methods are very useful, but it's easy to treat them as our single hammer and every problem as a nail. It's good to diversify your toolset. Some tools are specialized; they're good in very specific situations that don't come up very often, and others are used every day. And maybe class methods are the former. AJI: Absolutely. That hammer-and-nail metaphor was right where I was headed too. Love it. JOËL: Well, thank you so much, Aji, for joining the conversation today. Where can people find you online? AJI: Yeah, anywhere you want to look for me: Instagram, GitHub, Twitter. I'm @DoodlingDev, so just send all your angry emails that way. JOËL: And with that, let's wrap up. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Today we're talking to Bryan Helmkamp, Founder and CEO of Code Climate. We discuss how to implement data-driven engineering management with Code Climate, the impact that adopting a data-driven approach has on every level of the engineering organization, and how to think about creating a product that is a tool vs. a product that is a solution. All of this right here, right now, on the Modern CTO Podcast! To learn more about Code Climate, check them out at https://codeclimate.com
Revenue Grid, an AI-based Guided Selling platform for sales teams, raised $20 million in a Series A funding round led by W3 Capital to transform the B2B sales process through AI-based revenue insights and its Guided Selling strategies.

Code Climate raised $50M in a Series C funding round led by PSG to further its vision of helping engineers align initiatives with data-driven insights, drive continuous improvement, advance its product vision, and extend its sales reach and customer success.

Salesforce announced three Sales Cloud innovations weeks after its new Slack integration announcement. They aim to empower sales teams to accelerate their growth through AI-powered insights, integrated sales enablement, and subscription-based self-service that helps customers.

Two companies, AxiomSL, a data management platform, and Calypso Technology, an integrated trading, risk, and processing platform, which merged in July this year, have formed a new company that will operate under the new name Adenza.

Drift, a revenue acceleration platform, has formed a strategic partnership with Vista Equity Partners. Drift will benefit from Vista's industry experience, best practices, and large ecosystem as its portfolio business. Drift becomes a unicorn with Vista's growth investment.

Trengo, an omnichannel communications platform, has announced that it has raised $36 million in a Series A funding round co-led by Insight Partners and Peak Capital.

CoachHub, a digital coaching platform, has raised $80M in a Series B2 round, bringing its Series B fundraising total to $110M. CoachHub connects people with over 2,500 qualified business and well-being coaches in 70 countries across six continents using artificial intelligence.

ICONIQ Growth has invested $50M in Vic.ai, an artificial intelligence platform for accounting firms and corporate finance departments. Vic.ai will use the new capital to grow its enterprise offering and provide further AI capabilities to customers in the US and Europe.

Contentsquare announced the acquisition of Hotjar to help businesses of all sizes build better digital experiences. The companies will operate independently for some time, with Hotjar learning from Contentsquare's technology and resources and Contentsquare benefiting from Hotjar's reach and product-led approach.
Watch the live stream: Watch on YouTube

About the show: Sponsored by us! Support our work through our courses at Talk Python Training, the pytest book, and Patreon supporters. Special guest: Guy Royse.

Brian #1: How to make an awesome Python package in 2021
- By Anton Zhiyanov, @ohmypy. Also thanks to John Mitchell, @JohnTellsAll, for posting about it.
- Great writing, taking you through everything in a sane order.
- Stubbing a project with just a .gitignore and a directory with a stub __init__.py.
- Test packaging and publishing: use flit init to create an initial pyproject.toml, set up your ~/.pypirc file, publish to the test repo.
- Make the real thing: write an implementation, publish.
- Extras: adding README.md and CHANGELOG.md, and updating pyproject.toml to include README.md and a Python version selector; adding linting and testing with pytest, tox, coverage, and others; building in the cloud with GitHub Actions, Codecov, and Code Climate; adding badges; task automation with a Makefile; publishing to PyPI from a GitHub Action.
- Missing (but possibly obvious): a GitHub project; checking your project name on PyPI early.
- Super grateful for: doing all of this early in the project; using flit publish --repository pypitest and spelling out how to set up a ~/.pypirc file; the start-to-finish workflow; an example project with all the project files filled out.

Michael #2: Kubestriker
- Kubestriker performs numerous in-depth checks on Kubernetes infrastructure to identify security misconfigurations.
- Focuses on running in production and at scale.
- Kubestriker is platform agnostic and works equally well across more than one platform, such as self-hosted Kubernetes, Amazon EKS, Azure AKS, Google GKE, etc.
- Current capabilities: scans self-managed and cloud-provider-managed Kubernetes infrastructure; reconnaissance phase checks for various services or open ports; performs automated scans in case insecure, read-write, or read-only services are enabled; performs both authenticated and unauthenticated scans; scans for a wide range of IAM misconfigurations in the cluster, misconfigured containers, misconfigured Pod Security Policies, and misconfigured network policies; scans the privileges of a subject in the cluster; runs commands on the containers and streams back the output; provides the endpoints of the misconfigured services; provides possible privilege escalation details; elaborate report with detailed explanation.

Guy #3: wasmtime
- WebAssembly runtime with support for Python, Rust, C, Go, .NET.
- Documentation here: https://docs.wasmtime.dev/
- Supports WASI (WebAssembly System Interface): WASI supports IO operations—it does for WebAssembly what Node.js did for JavaScript.

Brian #4: Depend-a-lot-bot
- By Anthony Shaw, @anthonypjshaw.
- A bot for GitHub that automatically approves and merges PRs from Dependabot and PyUp.io when they meet certain criteria: all the checks are passing, and the package is on a safe-list (see configuration).
- The example picture shows an auto approval and merge of a tox version update: "This PR looks good to merge automatically because tox is on the safe-list for this repository".
- Configuration is in a .yml file. I learned recently that most programming jobs that can be automated eventually devolve into configuring a yml file.

Michael #5: Supreme Court sides with Google in API copyright battle with Oracle
- The Supreme Court has sided with Google in its decade-long legal battle with Oracle over the copyright status of application programming interfaces.
- The ruling means that Google will not owe Oracle billions of dollars in damages.
- It also has big implications for the broader software industry: the ruling heads off an expected wave of lawsuits over API copyrights.
- The case dates back to the creation of the Android platform in the mid-2000s. Google independently implemented the Java API methods, but to ensure compatibility, it copied Java's method names, argument types, and the class and package hierarchy.
- Over a decade of litigation, Google won twice at the trial court level, but each time, the ruling was overturned by the Federal Circuit appeals court. The case finally reached the Supreme Court last year.
- Writing for a six-justice majority, Justice Stephen Breyer held that Google's copying of the Java API calls was permissible under copyright's fair use doctrine.

Guy #6: RedisAI
- A module for Redis that adds AI capabilities.
- Turns Redis into a model server: supports TF, PyTorch, and ONNX models; adds the TENSOR data type.
- ONNX + Redis has positive architectural implications.

Extras
- Michael: git for Windows; JupyterLab reaches v3 (via Allan Hansen); "Why not support Python" letter by Brian Skinn; Django 3.2 is out and is LTS; PyCharm 2021.1 just dropped with Code With Me.
- Brian: The PSF is hiring a Developer-in-Residence to support CPython!

Joke: Vim Escape Rooms. Happiness.
Short Byte: Mai Irie - What to do when your engineering leader leaves

Mai Irie is a director of engineering, but twice in the last two years she's had to deal with managing the engineering org after her CTO or general equivalent left. In this interview we'll look at some best practices for taking over an engineering org in times of transition.

Special thanks to our global partner – Amazon Web Services (AWS). AWS offers a broad set of global cloud-based products to equip technology leaders to build better and more powerful solutions; reach out to aws-cto-program@amazon.com if you're interested in learning more about their offerings.

And thanks to our sustaining partners:

CodeClimate - Engineering leaders who are motivated to exceed 2021 goals know that gut feel isn't enough to make performance-improving decisions. CTOs, VPEs, and Directors at organizations like Slack, Gusto, and Pizza Hut trust objective data from Code Climate Velocity to measure and improve productivity, efficiency, and performance. Reach out to Code Climate and get a month free when you mention CTO Connection.

Karat - Karat helps companies hire software engineers by designing and conducting technical interview programs that improve hiring signal, mitigate bias, and reduce the interviewing time burden on engineering teams. Check out this short video or visit Karat.com to learn how we can help you grow your team.

LaunchDarkly - the Feature Management Platform Powering the Best Software Products. Our vision is to eliminate risk for developers and operations teams from the software development cycle. As companies transition to a world built on software, there is an increasing requirement to move quickly—but that often comes with the desire to maintain control. LaunchDarkly is the feature management platform that enables dev and ops teams to control the whole feature lifecycle, from concept to launch to value. Learn more at https://launchdarkly.com/

CTO Connection is a community where senior engineering leaders can connect with and learn from their peers! If you'd like to receive new episodes as they're published, please subscribe to CTO Connection in Apple Podcasts, Google Podcasts, Spotify or wherever you get your podcasts. If you enjoyed this episode, please consider leaving a review in Apple Podcasts. It really helps others find the show.
Short Byte: Lee Edwards - How to pick the right tech startup

Whether you're thinking of founding, joining or investing in an early stage tech company, Lee Edwards has some advice! From effective due diligence on an early stage company to some interesting focus areas for deep tech startups. Lee is a partner at Root Ventures - a hard tech seed VC - and was previously CTO/VPE at Teespring and an Engineering Manager at Groupon.
Short Byte: Johnny Ray Austin - Measuring outcomes vs. inputs

How can you best measure the effectiveness of your engineering team? Lines of code written? Number of PRs? There are absolutely some input metrics that can have some value, but we don't hire engineers to write code, we hire them to solve business problems. In this discussion, Johnny Ray Austin, CTO at Till, shares his experiences of managing his org by focusing on the outcomes produced rather than the inputs delivered.
Building vs buying software

Building vs buying is always a fraught decision. Nominally, buying a solution provides more capability and a quicker time to market, but then you have the potential of vendor lock-in and being constrained by the limitations of the tool that you chose. And in the engineering world, it's even harder. While Not Invented Here syndrome is becoming less prevalent, you still need to pick between a ground-up implementation or leveraging open source, professional open source, and commercial offerings. And then you have to think about how to incorporate low-code APIs and no-code solutions into your tech mix in a way that reduces build and maintenance costs without limiting the quality of your final solution. Join us to hear how James thinks about this balance - both as a CEO of a company that buys other people's software and as the founder of a company that sells its own SaaS solution.
Building a startup within a startup

There are a unique set of constraints when you're running an engineering team at a "startup within a startup" - from managing the planning cycle before you've hit product/market fit, to customizing elements of your product and infrastructure designed for a very different audience. Join us to hear how Heidi Williams, Director of Engineering at Grammarly, is working on these challenges by building out an enterprise offering for a traditionally consumer-focused business.
JP learned engineering management the hard way - and he's passionate about helping new engineering leaders have a better pathway to building their skills. In this Short Byte, we look at the differences between being an engineering leader at a startup vs. a larger company, how JP learned to become a manager, resources that are now available for engineering leaders, and some hints and tips on managing your boss and hiring effectively at a startup.

And thanks to our sustaining partner - CodeClimate! Engineering leaders who are motivated to exceed 2021 goals know that gut feel isn't enough to make performance-improving decisions. CTOs, VPEs, and Directors at organizations like Slack, Gusto, and Pizza Hut trust objective data from Code Climate Velocity to measure and improve productivity, efficiency, and performance. Reach out to Code Climate by the end of the year and get a month free when you mention CTO Connection.

CTO Connection is where you can learn from the experiences of successful engineering leaders at fast-growth tech startups. Whether you want to learn more about hiring, motivating or managing an engineering team, if you're technical and manage engineers, the CTO Connection podcast is a great resource for learning from your peers!
Culture work is part of a developer's job

At Flatiron Health, the team thinks that getting ICs to invest in the team culture is so important that they've built it into their career ladder. So I'll be asking Cat why "culture work" (from onboarding and mentoring to planning hackathons and running socials) is everyone's job and how it has helped her to build a cohesive and inclusive culture despite the explosive growth of her engineering org.
VP of Engineering is an expansive position. But what is it really like on the granular level? Alexandra Paredes is the VP of Engineering at Code Climate and joined Peter to discuss how she approaches the hiring process, keeps her team on track, and navigates metrics such as cycle time and code review involvement.

Tune in to hear Alexandra's thoughts on communication and transparency, as well as mentoring new engineers to ensure they ask the right questions from the start.

[01:28] - Alexandra's rise to VP of Engineering at Code Climate
[07:21] - Internal metrics
[12:02] - Milestones without estimates
[18:40] - Cycle time
[20:29] - Reliability requirements
[21:59] - Code review involvement
[23:29] - Incremental pull requests
[26:34] - Tracking key abilities
[29:58] - Communicating metrics to non-technical management

Podcast episode production by Dante32.
Robby speaks with Bryan Helmkamp, Founder and CEO at Code Climate. Bryan discusses the use of the term "technical debt" now vs. 15 years ago, what he's learned from having thousands of engineering teams use their tools, and the long-term benefits of choosing to build their main application in Ruby on Rails. You'll also get an overview of Code Climate's main products.

Helpful Links:
- Code Climate
- The Code Climate blog
- Bryan on Twitter
- Code Climate on Twitter

Subscribe to Maintainable on: Apple Podcasts, Overcast, Spotify, or search "Maintainable" wherever you stream your podcasts.
When repaying debt, it helps to know how big it is. The same holds for technical debt. The problem is: how do you measure it? Today we talk with Daniel Okwufulueze, a technology leader, programming polyglot, writer, and senior engineer at dunnhumby. Daniel helps us define technical debt and tells us how to quantify it without falling into the usual pitfalls. When you finish listening to the episode, make sure to connect with Daniel on LinkedIn and check out his writings on Medium.

Mentioned in this episode:
- Daniel on LinkedIn at https://www.linkedin.com/in/dokwufulueze/
- Daniel on Medium at https://medium.com/@DOkwufulueze
- dunnhumby at https://www.dunnhumby.com
- M.M. Lehman, L.A. Belady, Program Evolution: Processes of Software Change at http://informatique.umons.ac.be/genlog/BeladyLehman1985-ProgramEvolution.pdf
- Code Climate at https://codeclimate.com
Corey Martin is a Customer Solutions Architect at Heroku. On this episode of Code[ish], he's interviewing Joe Leo, the founder and CEO of Def Method, a service-oriented software consultancy based out of New York City. The conversation begins with Leo providing his personal definition of legacy software as any software that is not currently in the process of being written. He emphasizes that this does not mean something is immediately obsolete once it's been written, but that at that point an individual or company must shift its energy and focus to maintenance and improvement. From there, Martin and Leo move on to discussing how Def Method helps customers find solutions to their software issues. Whether a company is dealing with brownfield, greenfield, or minefield software, Def Method attempts to help them determine how to deal with the problems they are faced with and how they can keep their system healthy as it continues to evolve. As Leo points out, no piece of software is future-proof or bug-free, so honesty and openness are required if a company wants to succeed. It's Def Method's goal to help its customers get to this point. The pair round out the conversation by talking about Leo's recent book, The Well-Grounded Rubyist. The text, which was originally penned by Leo's close friend David A. Black, is considered one of the most influential pieces of writing on Ruby, and Leo was brought in to help co-author the third edition, which is published by Manning Publications. Leo views The Well-Grounded Rubyist as a textbook with a philosophical bent and says that his primary focus was capturing how the language has developed throughout its history and the very real tectonic shifts it has undergone during that time.

Links from this episode:
- Def Method is Joe Leo's consultancy that focuses on rescuing legacy software
- Def Method has also been featured as a Heroku Customer Success case study
- Code Climate is a code analyzer that runs as part of CI to ensure the quality of your code
- The Well-Grounded Rubyist is Joe's book, available from Manning
In a modern fast-moving business environment, we are obsessed with quantitative measurements. But without qualitative data, we might get the wrong impression and incentivize bad behavior. Today we talk with Dalia Havens, Vice President of Engineering at Netlify, about selecting appropriate metrics to measure the outputs of your team, increase its productivity, and, most importantly, keep it happy. Building on her experience from Netlify, GitLab, SailPoint, and IBM, she shares with us how to promote team health through positive metric-driven management. When you finish listening to the podcast, connect with Dalia on LinkedIn.

Mentioned in this episode:
- Dalia Havens on LinkedIn at https://www.linkedin.com/in/daliahavens/
- Netlify at https://www.netlify.com
- GitLab at https://about.gitlab.com
- IBM at https://www.ibm.com
- SailPoint at https://www.sailpoint.com
- SonarQube at https://www.sonarqube.org
- Code Climate at https://codeclimate.com
- John Doerr, Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs at https://www.amazon.com/dp/0525536221/ref=cm_sw_r_cp_api_i_15ClEbKXXGPGQ
Ushashi Chakraborty is a Director of Engineering for Backend Engineering at Mode Analytics in San Francisco. She moved to California in October 2018 from Chicago, where she worked at Groupon for 5 years as an Engineering Manager for Consumer Applications and, before that, as a Software Development Engineer in Test. She has also worked at Microsoft and Thomson Reuters. She holds a Master's degree in Computer Science from North Dakota State University. Ushashi has been involved with theater since high school and, especially, with improv since 2015. She is a Second City Training Center, Chicago alumna and has performed improv, sketch, and stand-up across several small theaters in Chicago. Ushashi has spoken at major conferences including the Grace Hopper Conference (SOL, 2014), expo:QA in Madrid, and multiple chapters of CTO Summit at Nasdaq NYC, Austin, Chicago, and SF, as well as at Leadership Summits by Code Climate, VerveCon, and a General Assembly Breakfast Talk in 2018-19. She has recorded podcasts for the startupcto and Frontier by Gun.io. She most recently spoke at the Grace Hopper Conference, Orlando, 2019.
Technical due diligence is the process of studying a technology product and evaluating its value and feasibility. You'd think that such an important process would be part of the business plan from the outset, but most companies only undergo the process when they begin VC fundraising. Jason Mongue, founder of The Clover Group, is here to talk about the process and why it should start early.

[00:11] - Velocity by Code Climate
[01:03] - Technical due diligence
[02:31] - Starting the process
[05:08] - Dress rehearsal
[09:18] - Ask questions
[11:18] - Security awareness
[15:02] - Disaster recovery plan
[17:32] - How to think about scaling
[20:31] - Health check for technology organizations
[22:07] - Velocity by Code Climate
As products and companies develop more name-brand recognition, expansion is inevitable. Prashant Pandey has experience working at companies large and small, and in his current role as Head of Engineering at Asana, he knows what it's like to navigate the growing pains of managing expanded teams.

On today's episode, Prashant talks to Peter about what to take into consideration when judging a team member's contribution, why failure is the single best learning tool, and how to manage expanding teams across time zones. Tune in for Prashant's thoughts.

[00:11] - Velocity by Code Climate
[00:49] - Internship at Loudcloud
[03:25] - Research at IBM
[05:11] - Managing teams at Vdopia
[06:53] - Learning through failure
[09:27] - Getting up to speed with Amazon's best practices
[10:54] - Viewing metrics
[13:47] - Measuring impact
[16:21] - Joining Asana
[19:51] - Bifurcation challenges
[24:28] - Remote vs distributed teams
[26:28] - Collaboration tools
[28:13] - Managing a successful rewrite
[36:32] - Velocity by Code Climate

Velocity by Code Climate is an engineering analytics tool that takes commit and Git insights and turns them into actionable metrics and dashboards for engineering leaders. Use concrete data to help you answer questions like:
- How fast are we moving?
- Did a recent change in our process have a positive or negative impact on efficiency?
- Who are my top-performing teams and why?
Sign up at https://codeclimate.com/ctoconnection/ and mention CTO Connection to get a free month of the product.
On CTO Connection, we mostly focus on the engineering side of tech, but today we're taking a break to look at product. Our guest Adam Nash, Vice President of Product and Growth at Dropbox, discusses the importance of defining the role of product in a tech company and why he believes delighting customers is just as crucial as making decisions based on metrics.

Tune in to learn why Adam believes so strongly in the concept of delight and how he balances metrics with listening to customer requests when crafting product strategy.

[01:06] - Transitioning to product management
[04:20] - Strategy, prioritization, & execution
[06:50] - Articulating strategy
[09:53] - Driving product teams
[13:51] - Making prioritization easier
[17:18] - The three bucket system
[17:44] - Metrics movers
[18:37] - Customer requests
[22:25] - Delight features
[27:37] - Making delight part of team practice

This episode of the CTO Connection Podcast was sponsored by Velocity by Code Climate. In a few weeks, they're releasing Velocity 3.0 with Jira, which will combine Git and PR data with Issue data for the first time to give engineering executives a complete understanding of how their team works. You can now see the status of every initiative, without manual reporting, and know exactly which engineering projects get off track and why. You can request early access by going to https://codeclimate.com/ctoconnection/.
According to Hector Aguilar, President of Technology at Okta, if you get too far away from technology, you can't be an effective leader of technology. But when your day-to-day job is focused more on managing individuals, it can be hard to find the time to code.

On today's episode, Hector and Peter discuss how making coding mandatory for all managers can help instill company values and culture. Hector also shares how he prioritizes personal projects and learning to ensure he stays on top of the technology at work.

[00:53] - Hector's origin story
[02:42] - Stay close to the technology
[04:02] - The challenge of keeping up
[06:52] - Prioritizing what to learn and how
[09:09] - The importance of credibility
[09:49] - Interview code testing
[16:38] - Measuring managers
[19:23] - Clear expectations
[21:41] - What does Okta do?
[23:27] - Scaling an organization
[25:52] - Keeping the quality bar high
[27:49] - Culture of commitment
[29:46] - Automate yourself and become a power mobile user
When Adam Miller joined the Roblox team as the Vice President of Engineering, he assumed everything would just get saved to the cloud. Instead, he found that the Roblox platform’s unique infrastructure needs would necessitate more in-house development and management in order to effectively scale. In today’s episode, Peter and Adam discuss the challenges of managing hundreds of employees drawing from the same code, scaling infrastructure, and the limitations of cloud storage for the platform model. [00:53] - Adam Miller, Vice President of Engineering, Technology at Roblox [01:03] - Making an impact in engineering through leadership [02:31] - The path to product/market fit [04:45] - Enabling a new type of human co-experience with Roblox [06:21] - Architecture of the metaverse [10:13] - Building a scaling infrastructure [14:08] - Exposing persistent data [17:20] - Assumptions about the cloud [19:08] - Switching to microservices [23:46] - Flexibility with smaller teams
In this episode, we interview Noah Portes Chaikin. Noah is a senior software engineer at Code Climate. Code Climate creates analytics software for development teams so they can evaluate code risks and tie performance metrics to code changes. In our conversation, we learned about how Noah got into software engineering without a college degree, including how he taught himself the technical skills necessary to land his first developer job. We also discussed why Noah prefers working at startups vs. larger organizations and how that's contributed to his professional growth. Noah also shared his take on hiring, and what he looks for when hiring engineers.
In today's podcast we talked about version control systems, Git, GitHub, some basic concepts, their workflows, some tips and best practices, GUIs and editor integrations... In short, we hope you enjoy it! EntreDevYOps blog: http://www.entredevyops.es EntreDevYOps Twitter: https://twitter.com/EntreDevYOps Links mentioned: The Hitchhiker's Guide to the Galaxy: https://es.wikipedia.org/wiki/Gu%C3%ADa_del_autoestopista_gal%C3%A1ctico_(libro) BeSuricata podcast: https://besuricata.com/ Python PEP8: https://www.python.org/dev/peps/pep-0008/ Git: https://git-scm.com/ Comparison of Git and Mercurial: https://importantshock.wordpress.com/2008/08/07/git-vs-mercurial/ GitHub: https://github.com/ GitHub Marketplace: https://github.com/marketplace/ Gitlab: https://gitlab.com/ Bitbucket: https://bitbucket.org/ Sourced: https://sourced.tech/ Merging vs. Rebasing tutorial: https://www.atlassian.com/git/tutorials/merging-vs-rebasing Gitflow: http://nvie.com/posts/a-successful-git-branching-model/ GitHub Flow: https://guides.github.com/introduction/flow/ Gitflow extension for the git CLI: https://github.com/petervanderdoes/gitflow-avh Official GitHub CLI: https://github.com/github/hub gitsome, a CLI for working with GitHub: https://github.com/donnemartin/gitsome git-spindle, GitHub extension for the git CLI: https://github.com/seveas/git-spindle Magit: https://magit.vc/ CodeClimate: https://codeclimate.com/ Travis-ci: https://travis-ci.org/
00:16 – Welcome to “I Rolled a Natural 20 For My Agility Check” …we mean, “Greater Than Code!” 01:31 – Background and Superpower; Empathy 09:08 – Cross-Cultural Communication Dynamics Women in Agile (https://www.agilealliance.org/events/women-in-agile-2018/) 15:48 – Biases, Understanding Dynamics, and Facilitating as an Ally @jessitron (https://twitter.com/jessitron/status/902935755533807616) "To have biases is to be human. It's not a bad thing." @dwhelan @greaterthancode 21:02 – Being Authentic 25:31 – Is Agile something that you are or something that you do? 35:37 – Adopting Practices Across Teams “A foolish consistency is the hobgoblin of little minds.” – Ralph Waldo Emerson @jessitron (https://twitter.com/jessitron/status/902949956889313280) People love consistency. Ask: what are the benefits? Then strive for common outcomes, not common practices. @dwhelan @greaterthancode 41:57 – Technical Debt and Technical Health: How do we amplify what we want? Code Climate (https://codeclimate.com/) Reflections: Jessica: Strive for common outcomes; not common practices. Coraline: Some situations should be taken as a promise for a conversation. Janelle: Ask for permission. Sam: Identifying and clarifying outcomes that we want. Declan: Figure out how you can have effective conversations with your teams. This episode was brought to you by @therubyrep (https://twitter.com/therubyrep) of DevReps, LLC (http://www.devreps.com/). To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode (https://www.patreon.com/greaterthancode). To make a one-time donation so that we can continue to bring you more content and transcripts like this, please do so at paypal.me/devreps (https://www.paypal.me/devreps). You will also get an invitation to our Slack community this way as well. Amazon links may be affiliate links, which means you’re supporting the show when you purchase our recommendations. Thanks! Special Guest: Declan Whelan.
Matias Korhonen has been writing Rails apps professionally at Kisko Labs, a Rails-focused software consultancy in Finland, for almost a decade. In his spare time he works on too many side projects, including Piranhas.co (a book price comparison site) and TLS.care (an SSL certificate monitoring service). He also somehow manages to find time to homebrew beer. The Rogues talk to Matias about securing your Rails applications. Rails comes with a lot of security features built in, but you can still leave yourself open to exploitation if you're not careful. Most of these problems occur in the portion of the app you write, as opposed to the parts that Rails handles for you. We go over several tools and techniques for making sure your application, access, and data are all secure. In particular, we dive pretty deep on: Tools that you can use to scan for vulnerabilities or add more security checks to your applications Authentication and authorization mistakes Securely managing data and much, much more... Links: secureheaders brakeman Code Climate CloudFlare zxcvbn Troy Hunt article on pwned passwords Devise Security Extension pundit Drifting Ruby episode on Complex Strong Parameters gemnasium bundler-audit OWASP Zed Attack Proxy Project rack-attack Picks: Brian: Regex 101 Give and Take by Adam Grant Eric: Indie Hackers Dave: Sumo Logic Chuck: Ready Player One Comic-Con trailer breakdown Mattermost Ruby Rogues Parley Ruby Dev Summit (FREE) Matias: Webpacker 3.0 ActiveStorage Heroku
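The episode's list of hardening tools includes rack-attack. As a minimal sketch of what wiring it up can look like — the throttle names, limits, and the /login route below are illustrative assumptions, not anything from the show:

```ruby
# Gemfile: gem "rack-attack"
require "rack/attack"

class Rack::Attack
  # Throttle any single IP to 300 requests per 5 minutes.
  throttle("requests/ip", limit: 300, period: 300) do |req|
    req.ip
  end

  # Throttle login attempts per email to slow down credential stuffing.
  # The /login path and "email" param are hypothetical for this sketch.
  throttle("logins/email", limit: 5, period: 60) do |req|
    req.params["email"].to_s.downcase if req.post? && req.path == "/login"
  end
end
```

In a Rails app, newer rack-attack versions insert their middleware automatically via a railtie; older setups add `config.middleware.use Rack::Attack` themselves.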
Livable Code With Sarah Mei Follow us on Twitter! @techdoneright (https://twitter.com/tech_done_right), leave us a review on iTunes, and please sign up for our newsletter (http://www.techdoneright.io/newsletter)! Guest Sarah Mei (https://twitter.com/sarahmei): Founder of RailsBridge (http://railsbridge.org/), Director of Ruby Central (http://rubycentral.org/), Chief Consultant at DevMynd Software (https://www.devmynd.com/). Summary Is your code the kind of cluttered house you might find on a reality TV show? Or the kind of sleek, minimalist house you might find in an architectural magazine? Neither one sounds like a place you could comfortably live. Sarah Mei joins the podcast to talk about Livable Code, what makes a codebase livable, how to negotiate tension between junior and senior developers, and how Rails deals with developer happiness. Notes 01:33 - What is meant by “Livable Code”? 04:25 - Where does codebase abstraction go wrong? 05:41 - What makes a codebase livable? - Code Climate (https://codeclimate.com/) 09:16 - Calibrating the Right Level for Your Team: Retrospective Meetings 12:22 - Principles of a Codebase 18:21 - Alleviating Tension Between Junior and Senior Developers 22:57 - The Goal of Career Development 26:42 - Guiding Architecture Choices on a Team 30:37 - Does testing help? 34:23 - Programmer Happiness 37:42 - The Attitude Toward JavaScript 39:01 - The Right Design For Your Codebase is Subjective Special Guest: Sarah Mei.
Robert Sösemann is an Agile and lean-code enthusiast, Lead Product Developer at Up2Go International, and inventor of ApexMetrics, a Code Climate engine. Lorenzo Frattini is a Salesforce-certified Technical Architect and creator of Clayton.io, a code-review robot. In this episode, we discuss code quality, how to measure it, when code is “done,” its business value, and more!
00:45 - What deployments have we used? 3:22 - Heroku 5:10 - Dev/prod parity 10:30 - Deployment stories 11:50 - Continuous deployment CircleCI SnapCI 15:55 - Working with clients that are anti-testing and writing tests 28:50 - Server setup Docker Chef 34:05 - Nginx and Passenger 39:35 - Handling caching issues and increasing server space 44:25 - Methods for deploying 46:30 - Team size and deployment Capistrano 49:40 - Monitoring tools Code Climate Honeybadger Zabbix NewRelic TrackJS JSJ 138 with Todd Gardner Picks: Dinosaur Odyssey by Scott Sampson (Jason) Shadows of Forgotten Ancestors by Carl Sagan (Jason) Rails Solutions: Ruby on Rails Made Easy by Justin Williams (Jerome) Take My Money: Accepting Payments on the Web by Noel Rappin (Brian) Deploying with JRuby by Joe Kutner (Brian) RR Episode 281 with Noel Rappin RR 150 with Joe Kutner Echo Dot (Charles) The Life-Changing Magic of Tidying Up by Marie Kondo (Brian) Getting Things Done by David Allen (Charles)
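Since Capistrano comes up in the team-size discussion above, here is a minimal sketch of a Capistrano 3 config/deploy.rb; the application name, repo URL, and linked files are placeholders, not taken from the episode:

```ruby
# config/deploy.rb — minimal Capistrano 3 setup (all values are placeholders)
lock "~> 3.0"

set :application, "example_app"
set :repo_url,    "git@example.com:example/example_app.git"
set :deploy_to,   "/var/www/example_app"

# Files and directories shared across releases instead of recreated each deploy.
append :linked_files, "config/database.yml"
append :linked_dirs,  "log", "tmp/pids", "tmp/cache", "public/system"
```

With server roles defined in config/deploy/production.rb, a deploy is then just `cap production deploy`.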
DevOps Remote Conference NoSQL Remote Conference 1:50 - Introducing Simone Civetta Twitter Blog Frenchkit Twitter 2:16 - Automated Code Metrics in Swift 4:06 - Strategies to Determine Code Complexity Lizard 6:17 - Adding a Language 7:28 - Why Is Cyclomatic Complexity Important? How Can We Use This Information To Improve Our Code? 11:02 - Difference Between Cyclomatic Complexity and NPath Complexity 13:40 - Using and Understanding Different Values of Cyclomatic Complexity 15:10 - Automating The Process 16:38 - Integrating Metrics Into A Complete Dashboard SonarQube 18:12 - Technical Debt Metric 21:16 - Stressing About Metric Values 25:50 - Impact Of The Community on Swift’s Tools SwiftLane Carthage Tailor 27:55 - First Steps To Evaluating Code Ace 30:15 - Using Code Climate Slather 31:20 - Using Hound 33:30 - FrenchKit Conference Picks: Slide Deck Link (Jaim) The Hero of Ages: Book Three of Mistborn by Brandon Sanderson (Layne) Wood Badge Scout Training (Charles) Boy Scouts of America (Charles) Postal (Simone)
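Since cyclomatic complexity is the thread running through that episode, here is a crude Ruby sketch of the idea behind the metric: one baseline path through a function, plus one for every decision point. Real tools like Lizard parse the source properly; this keyword scan is only an approximation for illustration.

```ruby
# Approximate McCabe's cyclomatic complexity as 1 + the number of
# branching keywords/operators found in the source. A real analyzer
# builds a control-flow graph; this is just to show what gets counted.
BRANCH_TOKENS = /\b(if|elsif|unless|while|until|for|when|rescue)\b|&&|\|\|/

def approximate_complexity(source)
  1 + source.scan(BRANCH_TOKENS).size
end

snippet = <<~RUBY
  def discount(user)
    return 0 unless user.active?
    if user.orders > 10 && user.vip?
      0.2
    elsif user.orders > 5
      0.1
    else
      0.05
    end
  end
RUBY

puts approximate_complexity(snippet) # => 5 (one base path + four decision points)
```

A score of 5 is comfortable; the episode's discussion of "stressing about metric values" is about what to do when this number climbs past a team's threshold.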
In this episode, Adam talks to Joe Ferris of thoughtbot about the test-driven development workflow he uses to build Rails applications. Sponsors: Laracasts, use coupon code FULLSTACK2016 for 50% off your first month Rollbar, sign up at https://rollbar.com/fullstackradio to try their Bootstrap Plan free for 90 days Links: Test Driven Laravel, Adam's latest project Giant Robots podcast How We Test Rails Applications on the thoughtbot blog Capybara Capybara WebKit RSpec factory_girl The Rails Testing Pyramid on the Code Climate blog
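As a rough sketch of the outside-in flow discussed here — drive a feature from the browser's point of view first, then drop down to unit specs — this is what a minimal Capybara feature spec looks like; the newsletter form, copy, and route are hypothetical, not from the episode.

```ruby
# spec/features/guest_subscribes_spec.rb
require "rails_helper"

RSpec.feature "Guest subscribes to the newsletter" do
  scenario "with a valid email address" do
    visit root_path

    # Interact with the page the way a user would; Capybara finds the
    # field and button by their labels.
    fill_in "Email", with: "user@example.com"
    click_on "Subscribe"

    expect(page).to have_text("Thanks for subscribing!")
  end
end
```

The failing steps of a spec like this then tell you which route, controller, and model code to test-drive next, which is the workflow the episode walks through.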
03:42 - Derek Prior Introduction Twitter GitHub Blog thoughtbot @thoughtbot thoughtbot Code Review Guides The Bike Shed Podcast @_bikeshed 04:01 - Code Reviews Derek Prior: Implementing a Strong Code-Review Culture @ RailsConf 2015 Slides 05:14 - What happens when you don’t do code reviews? 06:30 - Not Emphasizing Code Quality, Setting Code Review Up for Failure Edge Cases Diverse Feedback, Team Conflict 10:43 - Code Reviewing Yourself: Answering Your Own Questions 12:03 - The Evolution of Code Review (Code Review as an Asynchronous Process) 14:51 - Small Changes, “Pull Request Bombs” Handling Architectural Disagreements and Discussions Refactoring: Improving the Design of Existing Code by Martin Fowler (with Kent Beck, John Brant, William Opdyke, and Don Roberts) 23:49 - Making Code Review a Supportive Process Stop Issuing Commands; Ask Probing Questions DON’T Use “Why didn’t you ________?” DO Use “Have you considered _________?” or “That’s interesting…I might have used _______.” 30:32 - What qualities should reviewees have? 34:27 - Getting Code Reviews Introduced Into Company Culture 38:30 - Making Sure Code Reviews Get Done 40:47 - Tagging Specific Team Members LGTM = Looks Good To Me Gerrit 44:39 - Other Handy Code Review Tools Style Guides rubocop JSHint sass-lint Hound repo Code Climate 47:49 - Code Review Feedback Resources for Solo Programmers exercism.io pairprogramwith.me CodeNewbie Ruby Monday JavaScript Tuesday Python Thursday Picks Code Newbie Podcast: Sandi Metz Part I (Saron) Code Newbie Podcast: Sandi Metz Part II (Saron) If Google Were A Guy (Saron) LEGO Ideas - Lovelace & Babbage (Coraline) CoverMyMeds is offering Ruby on Rails training for experienced developers (David) CoverMyMeds Billboard 1 (David) CoverMyMeds Billboard 2 (David) The Bike Shed Podcast (Derek) The Ember RFC Process (Derek) tota11y (Derek) Eileen Uchitelle: How to Performance @ GoRuCo 2015 (Derek) Olympus SP-100EE (Avdi)
Michael Bernstein of Code Climate explains how to monitor your code's quality with static analysis. He tells us how you can maintain or improve quality over time, and what you can do to fix poor code.
Beth Tucker Long, PHP Developer and Advocate at Code Climate, and I got in front of my cameras at the 2014 PHP World conference and talked about her history and priorities in development, PHP, and open source; her welcome into the PHP community, the culture of sharing and teaching in PHP, her work and the mission of Code Climate ... including how she got hired as a Perl programmer right out of college and was handed a PHP codebase to fix. "So I had to learn PHP really, really fast." When she had to ask some questions the weekend she learned PHP, the welcome the PHP community gave her convinced her to stay, and it inspires her to this day to give others just as warm a welcome to the project. Read the full post and see the conversation video at the Acquia Developer Center: https://dev.acquia.com/podcast/181-php-entire-world-your-development-team-beth-tucker-long
To explore the response to the recently disclosed Git security vulnerability (which we wrote about at: http://thenewstack.io/major-git-security-vulnerability-discovered-causing-github-to-encourage-update-to-git-clients/) and to provide some context for it in a world of imperfect code, The New Stack Founder Alex Williams called upon Tal Klein of Adallom and Bryan Helmkamp, CEO and Founder of Code Climate, for this episode of The New Stack Analysts. Bryan refreshes us on the nature of the Git vulnerability: “It allows an attacker who has control of a Git repository to execute arbitrary code on the client machine of anybody connecting to that Git repository with a vulnerable version of the Git client.” Tal is not at all surprised by this news: “Vulnerabilities are going to happen; there's no such thing as perfect code,” he says. “Git was another popular attack vector for the Shellshock vulnerability,” says Tal, describing Git as the perfect candidate through which to attempt privilege escalation. “It's actually the second scenario in which Git itself becomes an attack vector,” he says. Learn more at: https://thenewstack.io/the-new-stack-analysts-show-27-the-git-vulnerability-and-its-aftermath/
With everyone returning from Midwest.JS and Steel City Ruby, we reminisce about the conferences, complain about the post office, and debate what DevOps is. Midwest.JS 0:15 Steel City Ruby 2:09 Dev>Input 3:50 Stamps.com 4:13 CodeClimate 4:43 Seth Vargo's talks 8:08 Jessica Kerr: Property-based testing: what is it? 8:37 Rantly 9:38 Generatron 10:03 DevOps 10:18 Chef 15:00 Why You Shouldn’t Use Vagrant: Real talk from a Vagrant burn-out 16:30 Test-Kitchen 19:10 Travis CI 20:26 Heroku 22:55 OpenShift 22:55 Continuously deploying your (free) OpenShift site with Travis CI 26:03 What Exactly is DevOps? 29:09
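For a taste of the configuration-as-code style that the Chef and Test-Kitchen segments touch on, here is a minimal Chef recipe; the nginx example is an assumption for illustration, not something from the episode. Chef recipes are plain Ruby, which is part of why they keep turning up on Ruby podcasts.

```ruby
# recipes/default.rb — declare the desired state; Chef converges the node to it.
package "nginx"

# Render the config from a template and reload nginx whenever it changes.
template "/etc/nginx/nginx.conf" do
  source "nginx.conf.erb"
  notifies :reload, "service[nginx]"
end

service "nginx" do
  action [:enable, :start]
end
```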
A conversation with Michael Bernstein (@mrb_bk) from Code Climate.
The panelists talk to Bryan Helmkamp, of Code Climate, and Anthony Eden, of DNSimple, about building and running a SaaS company.
With guests Kenta Murata and Ryo Nakamura, we talked about RailsConf, Ruby 2.0, Rails 4, Chanko 2.0, RubyKaigi, YAPC, and more. Show Notes RailsConf Blind Reviews at RailsConf 2013 RubyKaigi 2013 DHH keynote at RailsConf 2013 Not sure if I should be offended by the Kansas barbs DHH RailsConf 2012 Keynote gist.github.com launched with Rails 4 What's new in Rails 4.0 The Ruby used in Cookpad's production environment is now 2.0.0-p0 Ruby 1.8.7 and REE End of Life Ruby 1.8.7 EOL expected this June What a hard work to make the recipe sharing service available on Ruby 1.9.3! MIME encoding bug of NKF.nkf Rails 3.2.13 default_scope breaks chained scopes rails/strong_parameters Turbolinks Compatibility Android Is The New IE 6 Chanko: Rapidly & Safely prototyping your rails application Updated Chanko, the Rails plugin for prototype development, to 2.0.0 Use Erubis when available for faster startup Travis CI Coveralls Code Climate rubygems travis Coveralls + Perl YAPC::Asia Cookpad careers The 4th Development Contest 24
Ben is joined by Bryan Helmkamp, the founder of CodeClimate. In Bryan's second appearance on the podcast, Ben and Bryan discuss the architecture behind CodeClimate, scaling the service, and growing the business. They also discuss speaking at conferences, proposal selection, two-factor authentication and adding it to CodeClimate, marketing and content marketing, how to decide what to build and proving that it was worthwhile, strategies for testing at the beginning when you have few users, and Bryan reveals CodeClimate's next big upcoming feature. Sidekiq JRuby Rubinius Just-in-time (JIT) compilation Librato metrics Rails Security Monitor by Code Climate Boston.rb: Rails Application Security in Practice railssecurity.com Follow @thoughtbot, @r00k, and @brynary on Twitter.
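As a sketch of how TOTP-based two-factor authentication like the feature discussed here can work in Ruby, this is the rotp gem (version 6 API) in a few lines; the issuer name and account email are placeholders, and real code would persist the secret per user.

```ruby
require "rotp" # gem install rotp

secret = ROTP::Base32.random            # generate once, store server-side per user
totp   = ROTP::TOTP.new(secret, issuer: "ExampleApp")

# Encode this URI as a QR code for the user's authenticator app.
puts totp.provisioning_uri("user@example.com")

puts totp.now                           # the current six-digit code
totp.verify("123456", drift_behind: 15) # truthy only if the code matches
```

The drift_behind window tolerates a code generated just before the 30-second boundary rolled over, which keeps the flow forgiving without weakening it much.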
The Rogues talk to Code Climate's Bryan Helmkamp about decomposing fat models.
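One decomposition technique associated with this topic is extracting a value object out of a fat ActiveRecord model, in the spirit of Bryan's "7 Patterns to Refactor Fat ActiveRecord Models" post. The sketch below is reconstructed for illustration, not quoted; the Rating rules and the model it hangs off are invented.

```ruby
# A value object owns a small domain concept so the AR model doesn't have to.
class Rating
  include Comparable

  attr_reader :letter

  def self.from_cost(cost)
    if    cost <= 2 then new("A")
    elsif cost <= 4 then new("B")
    else                 new("C")
    end
  end

  def initialize(letter)
    @letter = letter
  end

  # Invert letter order so Rating.from_cost(1) > Rating.from_cost(5).
  def <=>(other)
    other.letter <=> letter
  end
end

# The model delegates instead of accumulating rating logic itself.
class ConstantSnapshot < ApplicationRecord
  def rating
    Rating.from_cost(remediation_cost)
  end
end
```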
Ben Orenstein is joined by Bryan Helmkamp, founder of Code Climate, a hosted software-metrics service for Ruby apps. In this episode, recorded at RubyConf 2012, they discuss what Code Climate is, how Bryan considers it a small business rather than a startup, and what it's like being a solo founder. They also discuss how code metrics can help you write and maintain better software, how they help, and how they change behavior. Finally, they explore what the biggest surprise for him has been so far, some of his plans, and what success looks like for him. Code Climate Steve Berry, Thought Merchants Follow @thoughtbot, @r00k, @brynary and @codeclimate on Twitter.
The Rogues talk code metrics with Bryan Helmkamp of Code Climate.