Joël discusses the challenges he encountered while optimizing slow SQL queries in a non-Rails application. Stephanie shares her experience with canary deploys in a Rails upgrade. Together, Stephanie and Joël address a listener's question about replacing the wkhtml2pdf tool, which is no longer maintained. The episode's main topic revolves around the concept of multidimensional numbers and their applications in software development. Joël introduces the idea of treating objects containing multiple numbers as single entities, using the example of 2D points in space to illustrate how custom classes can define mathematical operations like addition and subtraction for complex data types. They explore how this approach can simplify operations on data structures, such as inventories of T-shirt sizes, by treating them as mathematical objects. EXPLAIN ANALYZE visualizer (https://explain.dalibo.com/) Canary in a coal mine (https://en.wikipedia.org/wiki/Sentinel_species#Canaries) Episode 413: Developer Tales of Package Management (https://bikeshed.thoughtbot.com/413) Docs for media-specific CSS (https://developer.mozilla.org/en-US/docs/Web/CSS/@media) Episode 386: Value Objects Revisited: The Tally Edition (https://bikeshed.thoughtbot.com/386) Money gem (https://github.com/RubyMoney/money) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've recently been trying to do some performance enhancements to some very slow queries. This isn't a Rails app, so we're sort of combining together a bunch of different scopes. And the way they're composing together is turning out to be really slow. And I've reached for a tool that is just really fun. It's a visualizer for SQL query plans. You can put the SQL keywords in front of a query: 'EXPLAIN ANALYZE,' and it will then output a query plan, sort of how it's going to attempt to do the work. And that might be like, oh, we're going to use this index on this table to join on this other thing, and then we're going to...maybe this is a table that we think we're going to do a sequential scan through and, you know, it builds out a whole thing. It's a big block of text, and it's kind of intimidating to look at. So, there are a few websites out there that will do this. You just paste a query plan in, and they will build you a nice, little visualization, almost like a tree of, like, tasks to be done. Oftentimes, they'll also annotate it with metadata that they pulled from the query plan. So, oh, this particular node is the really expensive one because we're doing a sequential scan of this table that has 15 million rows in it. And so, it's really useful to then sort of pinpoint what are the areas that you could optimize. STEPHANIE: Nice. I have known that you could do that EXPLAIN ANALYZE on a SQL query, but I've never had to do it before. Is this your first time, or is it just your first time using the visualizer? JOËL: I've played around with EXPLAIN ANALYZE a little bit before. Pro tip: In Rails, if you've got a scope, you can just chain dot explain on the end, and instead of running the query, it will run the EXPLAIN version of it and return the query plan. So, you don't need to, like, turn into SQL then manually run it in your database system to get the EXPLAIN. 
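A minimal sketch of what that looks like in practice, assuming a Rails app with a hypothetical Order model and a Postgres database (the model and scope here are illustrative, not from the episode):

    # Calling .explain on a scope returns the database's query plan
    # instead of running the query itself.
    plan = Order.where(status: "pending").joins(:line_items).explain
    puts plan

    # On Rails 7.1 and newer, explain also accepts options, so you can ask
    # for EXPLAIN ANALYZE output directly. Note that ANALYZE actually
    # executes the query; the resulting text is what you would paste into a
    # visualizer such as explain.dalibo.com.
    puts Order.where(status: "pending").joins(:line_items).explain(:analyze)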
You can just tack a dot explain on there to get the query plan. It's still kind of intimidating, especially if you've got a really complex query that's...this thing might be 50 lines long of EXPLAIN with all this indentation and other stuff. So, putting it into a sort of online visualizer was really helpful for the work that I was doing. So, it was my first time using an online visualizer. There are a few out there. I'll link to the one that I used in the show notes. But I would do that again, would recommend. STEPHANIE: Nice. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I actually just stepped away from being in the middle of doing a Rails upgrade [chuckles] and releasing it to production just a few minutes before getting on to record with you on this podcast. And the reason I was able to do that, you know, without feeling like I had to just monitor to see how it was going is because I'm on a project where the client is using canary deploys. And I was so pleasantly surprised by how easy it made this experience where we had decided to send the canary release earlier this morning. And the way that they have it set up is that the canary goes to 10% of traffic. 10% of the users were on Rails 7 for their sessions. And we saw a couple of errors in our error monitoring service. And we are like, "Okay, like, let's take a look at this, see what's going on." And it turns out it was not too big of a deal because it had to do with, like, a specific page. And, for the most part, if a user did encounter this error, they probably wouldn't again after refreshing because they had, like, a 90% chance [chuckles] of being directed to the previous version where everything is working. And we were kind of making that trade-off of like, oh, we could hotfix this right now on the canary release. But then, as we were starting to debug a little bit, it was a bit hairier than we expected originally. And so, you know, I said, "I have to hop on to go record The Bike Shed. So, why don't we just take this canary down just for the time being to take that time pressure off? And it's Friday, so we're heading into the weekend. And maybe we can revisit the issue with some fresh eyes." So, I'm feeling really good, actually. And I'm glad that we were able to do something that seems scary, but there were guardrails in place to make it a lot more chill. JOËL: Yay for the ability to roll back. You used the term canary release. That's not one that I'm familiar with. Can you explain what a canary release is? STEPHANIE: Oh yeah. Have you heard of the phrase 'Canary in the coal mine'? JOËL: I have. STEPHANIE: Okay. So, I believe it's the same idea where you are, in this case, releasing a potentially risky change, but you don't want to immediately make it available to, like, all of your users. And so, you send this change to, like, a small reach, I suppose, and give it a little bit of a test and see [chuckles] what comes back. And that can help inform you of any issues or risks that might happen before kind of committing to deploying a potentially risky change with a bigger impact. JOËL: Is this handled with something like a feature flag framework? Or is this, like, at an infrastructure level where you're just like, "Hey, we've got the canary image in, like, one container on one server, and then we'll redirect 10% of traffic to that to be served by that one and the other 90% to be served by the old container or something like that"? STEPHANIE: Yeah, in this case, it was at the infrastructure level. 
And I have also seen something similar at a feature flag level, too, where you're able to have some more granularity around what percent of users are seeing a feature. But I think with something like a Rails upgrade, it was nice to be able to have that at that infrastructure level. It's not necessarily, like, a particular page or feature to show or not show. JOËL: Yeah, I think you would probably want that at a higher level when you're changing over the entire app. Is this something that you had to custom-build yourself or something that just sort of came out of the box with some of the infrastructure tools you're using? STEPHANIE: It came out of the box, actually. I just joined this client project this week and was very delighted to see just some really great deployment infrastructure and getting to meet the DevOps engineers, too, who built it. And they're really proud of it. They kind of walked us through our first release earlier this week. And he was telling me, the DevOps engineer, that this was actually his favorite part of the job, is walking people through their first release and being their buddy while they do it. Because I think he gets to also see users interact with the tool that he built, and he had a lot of pride in that, so it was a very delightful experience. JOËL: That's so wonderful. I've been on so many projects where the sort of infrastructure side of things is not the team's strong point, and releasing can be really scary. And it's great to hear the opposite of that. We recently received a question for Stephanie based on an earlier episode. So, the question asks, "In episode 413, Stephanie discussed a recent issue she encountered with wkhtml2pdf. The episode turned into a deeper discussion about package management, but I don't think it ever cycled back to the conclusion. I'm curious: how did Stephanie solve this dilemma? We're facing the same issue on a project that my team maintains. It's an old codebase, and there are bits of old code that use wkhtml2pdf to generate print views of our data in our application. The situation is fairly dire. wkhtml2pdf is no longer maintained. In fact, it won't even be available to install from our operating system's package repositories in June. We're on FreeBSD, but I assume the same will be eventually true for other operating systems. And so, unless you want to maintain some build step to check out and compile the source code for an application that will no longer receive security updates, just living with it isn't really an option. There are three options we're considering. One, eliminate the dependency entirely. Based on user feedback, it sounds like our old developers were using this library to generate PDFs when what users really wanted was an easy way to print. So, instead of downloading a PDF, just ensure the screen has a good print style sheet and register an onload handler to call window dot print. We're thinking we could implement this as an A/B test to the feature to test this theory. Or two, replace wkhtml2pdf with a call to Headless Chrome and use that to generate the PDF. Or, three, replace wkhtml2pdf with a language-level package. For us, that might be the dompdf library available via Composer because we're a PHP shop." Yeah, a lot to unpack here. Any high-level thoughts, Stephanie? STEPHANIE: My first thought while I was listening to you read that question is that wkhtml2pdf is such a mouthful [laughs]. And I was impressed how you managed to say it at least, like, five times. 
JOËL: So, I try to say that five times fast. STEPHANIE: And then, my second high-level thought was, I'm so sorry to Brian, our listener who wrote in, because I did not really solve this dilemma [chuckles] for my project and team. I kind of kicked the can down the road, and that's because this was during a support and maintenance rotation that I've talked a little bit about before on the show. I was only working on this project for about a week. And what we thought was a small bug to figure out why PDFs were a little bit broken turned out, as you mentioned, to be this kind of big, dire dilemma where I did not feel like I had enough information to make a good call about what to do. So, I kind of just shared my findings that, like, hey, there is kind of a risk and hoping that someone else [laughs] would be able to make a better determination. But I really was struck by the options that you were considering because it was actually a bit of a similar situation to the bug I was sharing where the PDF that was being generated that was slightly broken. I don't think it was, like, super valuable to our users that it be in the form of a PDF. It really was just a way for them to print something to have on handy as a reference from, you know, some data that was generated from the app. So, yeah, based on what you're sharing, I feel really excited about the first one. Joël, I'm sure you have some opinions about this as well. JOËL: I love sort of the bigger picture thinking that Brian is doing here, sort of stepping back and being like, wait, why do we even need PDF here, and how are our customers using it? I think those are the really good questions to ask before sinking a ton of time into coming up with something that might be, like, a bit of a technical wonder. Like, hey, we managed to, like, do this PDF generation thing that we had to, like, cobble together so many other things. And it's so cool technically, but does it actually solve the underlying problem? So, shout out to Brian for thinking about it in those terms. I love that. Second cool thing that I wanted to shout out, because I think this is a feature of browsers that not many people are aware of; you can have multiple style sheets for your page, and you can tag them to be for different media. So, you can have a style sheet that only gets applied when you print versus when you display on screen. And there are a couple of others. I don't remember exactly what they are. I'll link to the docs in the show notes. But taking advantage of this, like, this is old technology but making that available and saying, "Yeah, we'll make it so that it's nice when you print, and we'll maybe even, you know, a link or a button with JavaScript so that you could just Command-P or Control-P to print. But we'll have a button in there as well that will allow you to print to PDF," and that solves your problem right there. STEPHANIE: Yeah, that's really cool. I didn't know that about being able to tag style sheets for different media types. That's really fascinating. And I like that, yeah, we're just eliminating this dependency on something, like, potentially really complex with a, hopefully, kind of elegant and modern solution, maybe. JOËL: And your browser is already able to do so many of these things. Why do we sort of try to recreate it? Printing is a thing browsers have been able to do for a long time. Printing to PDF is a thing that you can do for a long time. 
I will sometimes use that on sites where I need to, let's say I'm purchasing something, and I need some sort of receipt to expense, but they won't give me a download, a PDF download that I can send to the accounting team, so I will print to PDF the, like, HTML view. And that works just fine. It's kind of a workaround hack. Sometimes, it doesn't work well because the HTML page is just not well set up to, like, show up on a PDF page. You get some, like, weird, like, pagination issues or things like that. But, you know, just a little bit of thought for a print style sheet, especially for something you know that people are likely going to want to print or to save to PDF, that's a nice touch. STEPHANIE: Yeah. So, good luck, Brian, and let us know how this goes and any outcomes you find successful. So, for today's longer topic, I was excited because I saw, Joël, you dropped something in our topic backlog: Multidimensional Numbers. I'm curious what prompted this idea and what you wanted to say about it. JOËL: We did an episode a while back where we talked about value objects, wrapping numbers, wrapping collections. This is Episode 386, and we were talking about tallying, specifically working with collections of T-shirt sizes and doing math on these sort of objects that might contain multiple numbers. And a sort of sidebar from that that we didn't really get into is the idea that objects that contain sort of multiple numbers can be treated as a number themselves. And I think a great example of this is something like a point in two-dimensional space. It's got an x coordinate, a y coordinate. It's two numbers, but you can treat sort of the combination of the two of them together as a single number. There's a whole set of coordinate math that you can do to do things like add coordinates together, subtract them, find the distance between them. There's a whole field of vector math that we can do on those. And I think learning to recognize that numbers are not just instances of the integer or the float class but that there could be these more complex things that are also numbers is maybe an important realization and something that, as developers, if we think of these sort of more complex values as numbers, or at least mathematical objects, then that will help us write better code. STEPHANIE: Cool. Yeah. When you were first talking about 2D points, I was thinking about if I have experience working with that before or, like, having to build something really heavily based off of, like, a canvas or, you know, a coordinate system. And I couldn't think of any really good examples until I thought about, like, geographic locations. JOËL: Oh yeah, like a latitude, longitude. STEPHANIE: Yeah, exactly. Like, that is a lot more common, I think, for various types of just, like, production applications than 2D points if you're not working on, like, a video game or something like that, I think. JOËL: Right, right. I think you're much more likely to be working with 2D points on some more sort of front-end-heavy application. I was talking with someone this week about managing a seat map for concerts and events like that and sort of creating a seat map and have it be really interactive, and you can, like, click on seats and things like that. And depending on the level of libraries you're using to build that, you may have to do a lot of 2D math to make it all come together. STEPHANIE: Yeah. So, I would love to get into, you know, maybe we've realized, okay, we have some kind of compound number. 
What are some good reasons for using them differently than you would a primitive? JOËL: So, you mentioned primitives, and I think this is where maybe I'm developing a reputation about, like, always wanting value objects for everything. But it would be really easy, let's say, for an xy point to be just an array of two numbers or maybe even a hash with an x key and a y key. What's tricky about that is that then you don't have the ability to do math on them. Arrays do define the plus operator, but they don't do what you want them to do with points. It's the set union. So, adding two points would not at all do what you want, or subtracting two points. So, instead, if you have a custom 2D point class and you can define plus and minus on there to do the right thing, now they're not pairs of numbers, two values; they're a single value, and you can treat them as if they are just a single number. STEPHANIE: You mentioned that arrays don't do the right thing when you try to add them up. What is the right thing that you're thinking of then? JOËL: It probably depends a little bit on the type of object you're working with. So, with 2D points, you're probably trying to do vector addition where you're effectively saying almost, like, "Shift this point in 2D space by the amount of this other point." Or if you're doing a subtraction, you might even be asking, like, "What is the distance between these two points?" Euclidean distance, I think, is the technical term for this. There's also a couple of different ways you can multiply values. You can multiply a 2D point by just a sort of, not by another point, but by just an integer. That's called scaling. So, you're just like, oh, take this point in 2D space, but make it bigger, make it five times bigger or five times further from the origin. Or you can do some stuff with other points. But what you don't want to do is turn this into, if you're starting with arrays, you don't want to turn this into an array of four points. When you add two points in 2D space, you're not trying to create a point in 4D space. STEPHANIE: Whoa, I mean [laughs], maybe you're not. JOËL: You could but -- [laughter] STEPHANIE: Yeah. While you were saying that, I guess that is what is really cool about wrapping, encapsulating them in objects is that you get to decide what that means for you and your application, and -- JOËL: Yeah. Well, plus can mean different things, right? STEPHANIE: Yeah. JOËL: On arrays, plus means combining two arrays together. On integers, it means you do integer math. And on points, it might be vector addition. STEPHANIE: Are there any other arithmetic operators you can think of that would be useful to implement if you were trying to create some functionality on a point? JOËL: That's a good question because I think realizing the inverse of that is also a really powerful thing. Just because you create a sort of new mathematical object, a point in 2D space, doesn't mean that necessarily every arithmetic operator makes sense on it. Does it make sense to divide a point by another point? Maybe not. And so, instead of going with the mindset of, oh, a point is a mathematical object, I now need to implement all of arithmetic on this, instead, think in terms of your domain. What are the operations that make sense? What are the operations you need for this point? And, you know, maybe the answer is look up what are the common sort of vector math operations and implement those on your 2D point. 
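To make that concrete, here is a rough sketch of the kind of 2D point object being described, in plain Ruby, with names chosen for illustration rather than taken from the episode:

    Point = Struct.new(:x, :y) do
      # Vector addition: shift this point by another point.
      def +(other)
        Point.new(x + other.x, y + other.y)
      end

      def -(other)
        Point.new(x - other.x, y - other.y)
      end

      # Scaling: multiply by a plain number, not by another point.
      def *(factor)
        Point.new(x * factor, y * factor)
      end

      # Euclidean distance between two points.
      def distance_to(other)
        Math.sqrt((x - other.x)**2 + (y - other.y)**2)
      end
    end

    a = Point.new(1, 2)
    b = Point.new(3, 4)
    a + b            # => #<struct Point x=4, y=6>
    a * 5            # => #<struct Point x=5, y=10>
    a.distance_to(b) # => 2.8284271247461903

Adding two points gives back another point in 2D space, not a four-element array, which is exactly the behavior the raw array representation can't give you.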
Some of them will map to arithmetic operators like plus and minus, and then some of them might just be some sort of custom method where maybe you say, "Oh, I want the Euclidean distance between these two points." That's just a thing. Maybe it's just a named instance method on there. But yeah, don't feel like you need to implement all of the math operators because that's a mistake that I have made and then have ended up, like, implementing nonsensical things. STEPHANIE: [laughs] Creating your own math. JOËL: Yes, creating my own math. I've done this even on where I've done value objects to wrap single values. I was doing a class to represent currency, and I was like, well, clearly, you need, like, methods to, like, add or subtract your currency, and that's another thing. If you have, let's say, a plus method, now you can plug it into, let's say, reduce plus. And you can just sum a list of these currency objects and get back a new currency. It's not even going to give you back an integer. You just get a sort of new currency object that is the sum of all the other ones, and that's really nice. STEPHANIE: Yeah, that's really cool. It reminds me of all the magic of enumerable that you had talked about in a previous conference talk, where, you know, you just get so much out of implementing those basic operators that, like, kind of scales in handiness. JOËL: Yes. Turns out Ruby is actually a pretty nice system. If you have objects that respond to some common methods and you plug them into enumerable, and it just all kind of works. STEPHANIE: So, one thing you had said earlier that I've felt kind of excited about and wanted to highlight was you mentioned all the different ways that you could represent a 2D point with more primitive data stores, so, you know, an array of two integers, a hash with xy keys. It got me thinking about how, yeah, like, maybe if your system has to talk to another system and you're importing data or exporting data, it might eventually need to take those forms. But what is cool about having an encapsulated object in your application is you can kind of control those boundaries a little bit and have more confidence in terms of the data types that you're using within your system by having various ways to construct that, like, domain object, even if the data coming in is in a different shape. JOËL: And I think that you're hitting on one of the real beauties of object-oriented programming, where the sort of users of your object don't need to know about the internal representation. Maybe you store an array internally. Maybe it's two separate instance variables. Maybe it's something else entirely. But all that the users of your, let's say, 2D point object really need to care about is, hey, the constructor wants values in this shape, and then I can call these domain methods on it, and then the rest just sort of happens. It's an implementation detail. It doesn't matter. And you alluded, I think, to the idea that you can sort of create multiple constructors. You called them constructors. I tend to call them that as well. But they're really just class methods that will kind of, like, add some sugar on top of the constructor. So, you might have, like, a from array pair or from hash or something like that that allows you to maybe do a little bit of massaging of the data before you pass it into your constructor that might want some underlying form. And I think that's a pattern that's really nice. STEPHANIE: Yeah, I agree. 
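As a rough illustration of that currency idea and the class-method constructor pattern (a toy sketch, not the Money gem's actual API):

    Currency = Struct.new(:cents, :code) do
      def +(other)
        raise ArgumentError, "currency mismatch" unless code == other.code
        Currency.new(cents + other.cents, code)
      end

      # A convenience constructor that massages a hash into the shape
      # the underlying constructor expects.
      def self.from_h(hash)
        new(hash.fetch(:cents), hash.fetch(:code))
      end
    end

    prices = [
      Currency.new(500, "USD"),
      Currency.from_h({ cents: 250, code: "USD" }),
      Currency.new(125, "USD"),
    ]
    prices.reduce(:+) # => #<struct Currency cents=875, code="USD">

Because plus returns another Currency, reduce hands you back a single currency object rather than a bare integer.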
JOËL: Something that can be interesting there, too, is that mathematically, there are multiple ways you can think of a 2D point. An xy coordinate pair is a common one, but another sort of system for representing a point in 2D space is called the polar coordinate system. So, you have some sort of, like, origin point. You're 0,0. And then, instead of saying so many to the left and so many up from that origin point, you give an angle and a distance, and that's where your point is. So, an angle and distance point, I think, you know, theta and magnitude are the fancy terms for this. You could, instead of creating a separate, like, oh, I have a polar coordinate point and a Cartesian coordinate point, and those are separate things, you can say, no, I just have a point in 2D space. They can be constructed from either an xy coordinate pair or a magnitude angle pair. Internally, maybe you convert one to the other for internal representation because it makes the math easier or whatever. Your users never need to know that. They just pass in the values that they want, use the constructor that is most convenient for them, and it might be both. Maybe some parts of the app require polar coordinates; some require Cartesian coordinates. You could even construct one of each, and now you can do math with each other because they're just instances of the same class. STEPHANIE: Whoa. Yeah, I was trying to think about transforming between the two types as well. It's all possible [laughs]. JOËL: Yes. Because you could have reader-type methods on your object that say, oh, for this point, give me its x coordinate; give me its y coordinate. Give me its distance from the origin. Give me its angle from the origin. And those are all questions you can ask that object, and it can calculate them. And you don't need to care what its internal representation is to be able to get all four of those. So, we've been talking about a lot of these sort of composite numbers, not composite numbers, that's a separate mathematical thing, but numbers that are composed of sort of multiple sub-numbers. And what about situations where you have two things, and one of them is not a number? I'm thinking of all sorts of units of measure. So, I don't just have three. I have three, maybe...and we were talking about currency earlier, so maybe three U.S. dollars. Or I don't just have five; I have five, you know, let's say, meters of distance. Would you consider something like that to be one of these compound number things? STEPHANIE: Right. I think I was–when we were originally talking about this, conflating the two. But I realized that, you know, just because we're adding context to a number and potentially packaging it as a value object, it's still different from what we're talking about today where, you know, there's multiple components to the number that are integral or required for it to mean what we intended to mean, if that makes sense. JOËL: Yeah. STEPHANIE: So yeah, I guess we did want to kind of make a distinction between value objects that while the additional context is important and you can implement a lot of different functionality based on what it represents, at the end of the day, it only kind of has one magnitude or, like, one integer to kind of encapsulate it represented as a number. Does that sound right? JOËL: Yeah. You did throw out the words encapsulation and value object. So, in a situation maybe where I have three US dollars, would you create some kind of custom object to wrap that? 
Or is that a situation where you'd be more comfortable using some kind of primitive? Like, I don't know, maybe an array pair of three and the symbol USD or something like that. STEPHANIE: Oh, I would definitely not do that [laughter]. Yeah. Like I, you know, for the most part, I think I've seen that as a currency object, and that expands the world of what we can do with it, converting into a lot of different other currencies. And yeah, just making sure those things don't get divorced from each other because that context is what gives it meaning. But when it comes to our compound numbers, it's like, without all of the components, it doesn't make sense, or it doesn't even represent the same, like, numerical value that we were trying to convey. JOËL: Right. You need both, or, you know, it could be more than two. It could be three, four, or five numbers together to mean something. You mentioned conversions, which I think is something that's also interesting because a lot of units of measure have sort of multiple ways of measuring, and you often want to convert between them. And maybe that's another case where encapsulation is really nice where, you know, maybe you have a distance object. And you have five meters, and you put that into your distance object, but then somebody wants it in feet somewhere else or in centimeters, or something like that. And it can just do all the conversion math safely inside that object, and the user doesn't have to worry about it. STEPHANIE: Right. This is maybe a bit of a tangent, but as a Canadian living in the U.S., I don't know [laughs] if you have any opinions about converting meters and feet. JOËL: The one I actually do the most often is converting Celsius to Fahrenheit and vice versa. You know, I've been here, what, 11 years now? I don't have a great intuition for Fahrenheit temperatures. So, I'm converting in my head just [laughs] on a daily basis. STEPHANIE: Yeah, that makes sense. Conversions: they're important. They help out our friends who [laughs] are on different systems of measurement. JOËL: There's a classic story that I love about unit conversions. I think it's one of the NASA Mars missions. STEPHANIE: Oh yeah. JOËL: You've heard of this one. It was trying to land on Mars, and it burned up in the atmosphere because two different teams had been building different components and used different unit systems, both according to spec for their own module. But then, when the modules try to talk to each other, they're sending over numbers in meters instead of feet or something like that. And it just caused [laughs] this, like, multi-year, multi-billion dollar project to just burn up. STEPHANIE: That's right. So, lesson of the day is don't do that. I can think of another example where there might be a little bit of misconceptions in terms of how to represent it. And I'm thinking about time and when that has been represented in multiple parts, such as in hours and, minutes and seconds. Do you have any initial impressions about a piece of data like that? JOËL: So, that's really interesting, right? Because, at first glance, it looks like, oh, it's, like, a triplet of hour, minute, seconds. It's sort of another one of these sort of compound numbers, and I guess you could implement it that way. But in reality, you're tracking a single quantity, the amount of time elapsed, and that can be represented with a single number. So, if you're representing, let's say, time of day, what would show up on your clock? 
That could be, depending on the resolution, number of, let's say, seconds since midnight, and that's a single counter. And then, you can do some math on it to get hours, minutes, seconds for a particular moment. But really, it's a single quantity, and we can do that with time. We can't do that with a 2D point. Like, it has to have two components. STEPHANIE: So, do you have a recommendation for what unit of time time would best be stored? I'm just thinking of all the times that I've had to do that millisecond, you know, that conversion of, you know, however many thousands of milliseconds in my head into something that actually means [laughs] something to me as a human being who measures time in hours and minutes. JOËL: My recommendation is absolutely go for a single number that you store in your, let's say, time of day object. It makes the math so much easier. You don't have to worry about, like, overflowing from one number into another when you're doing math or anything like that. And then the number that you count should be at the whatever the smallest resolution you care at. So, is there ever any time where you want to distinguish between two different milliseconds in time? Or maybe you're like, you know what? These are, like, we're tracking time of day for appointments. We don't care about the difference between two milliseconds. We don't need to track them independently. We don't even care about seconds. The most granular we ever care about things is by the minute. And so, maybe then your internal number that you track is a counter of minutes since midnight. But if you need more precision, you can go down to seconds or milliseconds or nanoseconds. But yeah, find what is the sort of the least resolution you want to get away with and then make that the unit of measure for a single counter in your object. And then encapsulate that so that nobody else needs to care that, internally, your time of day object is doing milliseconds because nobody wants to do that math. Just give me a nice, like, hours and minutes method on your object, and I will use that. I don't need to know internally what it's using. Please don't just pass around integers; wrap it in an object, especially because integers, there's enough times where you're doing seconds versus milliseconds. And when I just have an integer, I never know if the person storing this integer means seconds or milliseconds. So, I'm just like, oh, I'm going to pass to this, like, user object, a, like, time integer. And unless there's a comment or a constant, you know, that's named something duration in milliseconds or something like that, you know, or sometimes even, like, one year in milliseconds, or there's no way of knowing. STEPHANIE: Yeah. That makes a lot of sense. When you kind of choose a standard of a standard unit, it's, like, possible to make it easier [laughs]. JOËL: So, circling back to sort of the initial thing that sparked this conversation, the previous episode about T-shirt inventories, there we were dealing with what started off as, like, a hash of different T-shirt sizes and quantities of T-shirts that we had in that size, so small (five), medium (three), large (four). And then, we eventually turned that into a value object that represented...I think we called it a tally, but maybe we called it inventory. And this may be wrong, so tell me if I'm wrong here, I think we can kind of treat that as a number, as, like, one of these compound numbers. 
It's a sort of multidimensional number where you say, well, we have sort of three dimensions where we can have numbers that sort of increase and decrease independently. We can do math on these because we can take inventories or tallies and add and subtract them. And that's what we ended up having to do. We created a value object. We implemented plus and minus on it. There are rules for how the math works. I think this is a multidimensional number with the definition we're working on this show. Am I wrong here? STEPHANIE: I wouldn't say that you're wrong. I think I would have to think a little [laughs] more to say definitively that you're right. But I know that this example came from, you know, an application I was actually working on. And one of the main things that we had to do with these representations [laughs], I'm hesitant to call them a number, especially, but we had to compare these representations frequently because an inventory, for example, in a warehouse, wanting to make sure that it is equal to or there's enough of the inventory if someone was placing an order, which would also contain, like, a representation of T-shirt size inventory. And that was kind of where some of that math happened because, you know, maybe we don't want to let someone place an order if the inventory at the warehouse is smaller than their order, right? So, there is something really compelling about the comparison operations that we were doing that kind of is leaning me in the direction of, like, yeah, like, it makes sense to me to use this in a way that I would compare, like, quantities or numbers of something. JOËL: I think one thing that was really compelling to me, and that kind of blew my mind, was that we were trying to, like, figure out some things like, oh, we've got so many people with these size preferences, and we've got so many T-shirts across different warehouses. And we're summing them up and we're trying to say like, "How many do we need to purchase if there is a deficit?" And we can come up with effectively a formula for this. We're like, sum these numbers, when we're talking about just before we introduce sizes when it's just like, oh, people have T-shirts. They all get the count of people and a count of T-shirts in our warehouse, and we find, you know, the difference between that. And there's a few extra math operations we do. Then you introduce size, and you break it down by, oh, we've got so many of each. And now the whole thing gets really kind of messy and complicated. And you're doing these reduces and everything. When we start treating the tally of T-shirts as an object, and now it's a number that responds to plus and minus, all of a sudden, you can just plug those back into the original formula, and it all just works. The original formula doesn't care whether the numbers you're doing this formula on are simple integers or these sort of multidimensional numbers. And that blew my mind, and it was so cool. STEPHANIE: Yeah, that is really neat. And you get a lot of added benefits, too. I think the other important piece in the T-shirt size example was kind of tracking the state change, and that's so much easier when you have an object. There's just a lot more you can do with it. And even if, you know, you're not persisting every single version of the representation, you know, because sometimes you don't want to, sometimes you're really just kind of only holding it in memory to figure out if you need to, you know, do something else. But other times, you do want to persist it. 
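As a loose sketch of that tally idea (our own illustration, not the code from that project), the value object might look something like this:

    class Tally
      attr_reader :counts

      def initialize(counts = {})
        @counts = Hash.new(0).merge(counts)
      end

      def +(other)
        Tally.new(counts.merge(other.counts) { |_size, a, b| a + b })
      end

      def -(other)
        Tally.new(counts.merge(other.counts) { |_size, a, b| a - b })
      end

      # True when we have at least as many of every size as the other tally,
      # e.g. "does this warehouse inventory cover that order?"
      def >=(other)
        other.counts.all? { |size, count| counts[size] >= count }
      end
    end

    warehouse = Tally.new(small: 5, medium: 3, large: 4)
    order     = Tally.new(small: 2, large: 1)
    warehouse >= order         # => true
    (warehouse - order).counts # => {:small=>3, :medium=>3, :large=>3}

Because plus and minus return new tallies, a formula written for plain integers can often be reused unchanged on these multidimensional values.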
And it just plugs in really well with, like, the rest of object-oriented programming [laughs] in terms of interacting with the rest of your business needs, I think, in your app. JOËL: Yeah, turns out objects, they're kind of nice. And you can do math with them. Who knew? Math is not just about integers. STEPHANIE: And on that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.
In this episode of Views on Vue, Charles Max Wood interviews speakers at JAMstack Conf SF. His first interview is with Ire Aderinokun. Ire works for Buycoins, a cryptocurrency exchange for Africa. She gave a lightning talk, “Headless Chrome & Cloudinary for progressively enhanced dynamic content on the web”. After giving a brief overview of her talk to Charles, Ire defines progressive enhancement for the listeners. Walking through how progressive enhancement works, she explains how Headless Chrome and Cloudinary helped her with the project she shared in the talk. Ire and Charles consider the blind spot that developers experience because they work on high-end devices and how using progressive enhancement helps those who use lower-end devices. Ire shares her experience with JAMstack and explains how progressive enhancement works with JAMstack. Charles shares his experience using JAMstack. The episode ends with Ire giving advice and resources to help get started with progressive enhancement. Next, Charles interviews Shawn Erquhart, who runs the Netlify CMS project. Charles shares his experience using Netlify, and Shawn addresses some of the issues Charles has come across. Charles does say that using Netlify is simple, clean, and nice. Shawn shares the origin story of Netlify. They discuss what it means to be a git-based content management system and how to contribute to the Netlify CMS open source project. Charles mentions his book, and they discuss how contributions to open source projects like these are a great way to get a job. Shawn explains how to get started implementing Netlify CMS and how they target different static site generators. Finally, Charles interviews Tammy Everts. Tammy gives listeners a sneak peek into her talk about website performance, more specifically JavaScript performance. Charles discusses the performance of Devchat.tv and Google Lighthouse scores. Tammy explains that while Google Lighthouse is good, it isn't completely reliable and can miss chunks of time when your JavaScript is failing and you have unhappy users. Tammy shares ways to drill down and see how your JavaScript is behaving in the wild. She talks about blocking JavaScript, which every developer is familiar with, and non-blocking JavaScript that has high blocking CPU time, which makes for janky sites. Tammy and Charles discuss what CPU time is and what it measures. Tammy names resources and tools to help avoid this problem and explains some rules of thumb for avoiding these issues: first, reduce, and make sure all the JavaScript needs to be there; next, monitor and track your metrics. She also suggests working with vendors and maintaining a performance budget for the metrics that matter. The interview ends with a little about Speedcurve, where Tammy is the CXO. Panelists Charles Max Wood Guests Ire Aderinokun Shawn Erquhart Tammy Everts Sponsors Sentry – use the code “devchat” for two months free on Sentry’s small plan CacheFly Links https://jamstackconf.com/sf/ https://speedcurve.com/ https://twitter.com/tameverts? https://buycoins.africa/ https://www.netlify.com/ https://www.netlifycms.org/ https://twitter.com/erquhart Headless Chrome & Cloudinary for progressively enhanced dynamic content https://github.com/ireade/caniuse-embed https://ireaderinokun.com/ https://twitter.com/ireaderinokun https://github.com/ireade https://www.facebook.com/ViewsonVue https://twitter.com/viewsonvue
In this episode of React Round Up Charles Max Wood interviews Ire Aderinokun at JAMstack conf 2019. Ire works for Buycoins, a cryptocurrency exchange for Africa. She gave a lightning talk, “Headless Chrome & Cloudinary for progressively enhanced dynamic content on the web”. After giving a brief overview of her talk to Charles, Ire defines progressive enhancement for the listeners. Walking through how progressive enhancement works, she explains how Headless Chrome and Cloudinary helped her with the project she shared in the talk. Ire and Charles consider the blindspot that developers experience because they work on high-end devices and how using progressive enhancement helps those who use lower-end devices. Ire shares her experience with JAMstack and explains how progressive enhancement works with JAMstack. Charles shares his experience using JAMstack. The episode ends with Ire giving advice and resources to help get started with progressive enhancement. Panelists Charles Wood Guest: Ire Aderinokun Sponsors Nrwl | Nx.Dev/React Sentry– use the code “devchat” for two months free on Sentry’s small plan CacheFly Links https://buycoins.africa/ Headless Chrome & Cloudinary for progressively enhanced dynamic content https://github.com/ireade/caniuse-embed https://ireaderinokun.com/ https://twitter.com/ireaderinokun https://github.com/ireade https://www.facebook.com/React-Round-Up https://twitter.com/reactroundup
In this episode of Adventures in Angular Charles Max Wood interviews Ire Aderinokun at JAMstack conf 2019. Ire works for Buycoins, a cryptocurrency exchange for Africa. She gave a lightning talk, “Headless Chrome & Cloudinary for progressively enhanced dynamic content on the web”. After giving a brief overview of her talk to Charles, Ire defines progressive enhancement for the listeners. Walking through how progressive enhancement works, she explains how Headless Chrome and Cloudinary helped her with the project she shared in the talk. Ire and Charles consider the blindspot that developers experience because they work on high-end devices and how using progressive enhancement helps those who use lower-end devices. Ire shares her experience with JAMstack and explains how progressive enhancement works with JAMstack. Charles shares his experience using JAMstack. The episode ends with Ire giving advice and resources to help get started with progressive enhancement. Panelists Charles Wood Guest: Ire Aderinokun Adventures in Angular is produced by DevChat.TV in partnership with Hero Devs Sponsors CacheFly Links https://buycoins.africa/ Headless Chrome & Cloudinary for progressively enhanced dynamic content https://github.com/ireade/caniuse-embed https://ireaderinokun.com/ https://twitter.com/ireaderinokun https://github.com/ireade https://www.facebook.com/adventuresinangular https://twitter.com/angularpodcast
Good afternoon, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby Ruby 2.7.0-preview1 Released, Tests that sometimes fail and Ruby Pills: Enums Options From capybara-webkit to Headless Chrome and ChromeDriver, Phonelib - a gem allowing you to validate phone number and Graphsrb allows to create simple directed and undirected graphs Web Version 8 of Angular — Smaller bundles, CLI APIs, and alignment with the ecosystem, Dependabot is joining GitHub and 8 Useful JavaScript Tricks Fix 85% of your Web Accessibility issues in 5 easy steps, Color contrast accessibility tools and Why we prefer CSS Custom Properties to SASS variables Xstyled - consistent theme based CSS for styled-components, Scene.js is JavaScript & CSS timeline-based animation library, Zdog - round, flat, designer-friendly pseudo-3D engine for canvas & SVG and Zoom-level - a comprehensive cross-browser package that allow you to determine page's and element's zoom level
In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as the start and launch application, component setup and teardown, interacting with the application and component, convergent assertions, and network. Then they talk about testing issues: the fact that cross browser and device-simulated browsers are not good enough, maintainability and when and when not to DRY (RYE), slowness and why (acceptance) testing is slow, portability and why tests are coupled to the framework, and reliability. Finally, they talk about BigTest solutions: @bigtest/cli to start / launch (Karma recommended for now) @bigtest/react, @bigtest/vue, etc for setup & teardown @bigtest/interactor for interactions @bigtest/convergence for assertions @bigtest/network in the future (Mirage recommended for now) Resources: Justin Searls – Please don't mock me This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC. Transcript: CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr Wil Wilsman. WIL: Hello. CHARLES: Hello, Wil. WIL: How's it going? CHARLES: It's going good. I'm actually pretty excited to get to jump into this topic because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing in almost for the last year. WIL: Yeah. It's been about a year now. CHARLES: It's been about a year and we've talked about it in various podcast but we're going to be talking about it again because there's just been so much progress that we've made, I think in a lot of clarity in kind of what we're going for here when we talk about BigTest and testing big and how we want to roll out the BigTest framework. We just have a lot more experience using it on a number of different projects, so we get to talk about that today. Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about philosophically, what does BigTest mean to you? WIL: It's the size of your test, not a physical size like size and storage but how much your task actually does. The test itself can be very small as our test are but it tests the whole application from the user interacting with it down to the network requests. That's the definition of the philosophy of a BigTest to me. It's to tests your application from the biggest point of view. CHARLES: Actually, achieving that can be surprisingly difficult, especially in a frontend JavaScript application and there are a lot of solutions out there for testing and we've talked about them. One of the questions that arises is when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three but we want to take this episode to talk about where we're going with the product. What we've identified is the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? 
And then what are the component pieces? Because one of the things that I think is very important to us is that you be able to arrive at wherever you are in your project, whatever framework you are using, whatever current testing solution, and be able to begin using BigTest. That means you might be using some of it or you might be using a lot of it, but we want to meet you exactly where you are, so that you can then get onboarded and start testing big. WIL: Yeah. Definitely an important distinction that we get confusion about is what is BigTest, and people just assume this whole test suite is BigTest, but we use parts of it ourselves, like we use Mocha, which is not part of BigTest. We use Chai, which is not part of BigTest. We use Mirage, which is kind of part of BigTest but definitely didn't originate in BigTest, and Karma and things like that. BigTest isn't your testing suite. It's not one thing to go to, to grab, to start writing tests. It is small pieces that you can use in conjunction with other small pieces, just to make it really easy and flexible to test your application. CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with or that you might need to test big, I guess I should say. What's a good place to start? Let's start with talking about some of the issues that come up when you're testing big. Then we can talk about what pieces of the testing story fit in to solve those issues. One of them is you need to test that your application works, like actually works. That means you need to be able to test on a multiplicity of browsers, for example. We're limiting ourselves to the domain of web applications. There are actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome, Mobile Safari, which are subtly different. There's Edge and I'm sure Mobile Edge is slightly different too, so you want to be able to test cross browser, right? WIL: Yeah, absolutely, and things like Nightmare and JSDOM and things that simulate browsers, we don't necessarily think those are the best tools for writing BigTests because we want to ensure that those browser quirks are caught and tested as well. CHARLES: This is not theoretical. Sometimes the parser is slightly different and you have something that throws a syntax error in Safari or in Internet Explorer and your whole app is completely busted. If you had just taken the time to even try loading the app in that browser, you would have caught that. I've been bitten by that many times. WIL: Yeah, and what I just saw came up yesterday, which comes up frequently, is not closing your CSS selector. Chrome doesn't really care, most browsers don't care too much, but that will fail in Edge, and depending on what you're missing, how it fails is part of that too, but mostly Firefox and Chrome don't care about that kind of thing. CHARLES: Right. It seems like the majority of testing solutions are kind of focused around Headless Chrome or some variation of Electron, and that entire class of really dumb errors just goes uncaught. Like I said, to actually catch it, it takes less than a millisecond of CPU time just to load the app in that browser and see that the thing doesn't work. Unfortunately, they can be catastrophic errors, but the problem is how do you actually do it. We want to test cross browser.
This is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross browser testing, some capability of being able to say, "I want to test it," like, "We want it to work on these eight browsers, and so we're going to test it on these eight browsers," but how do you actually go about doing that? WIL: Right now, we are working on the BigTest CLI, which will help us launch browsers, but that's not complete yet. It still has some bugs. For the meantime we've been using Karma, which is great. Basically, you just have this service that's able to find the browser binaries on the system and launch them pointing at localhost with your app loaded up, and your normal development server takes care of loading the tests up and running them. Karma and the BigTest CLI are just there to capture output and launch those separate browsers. CHARLES: Yeah. I remember when I was first working with Karma, and I think Testem is another tool in this space. There's Testem and Karma, and BigTest actually is, well, we're developing a launcher, because launching is something that you're going to need, but it's such a weird problem. I feel like with the browser launchers, there's three levels of inversion of control, because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the tests. There's a lot of sleight of hand that has to happen and – WIL: Including injecting the adapter that you use, like the Mocha adapter, the Jasmine adapter, that ends up reporting back to the CLI. That's something that Karma and Testem and BigTest will handle for you. CHARLES: Right, so you're fanning out the test suite to a suite of browsers and then collecting the results, but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite, to collect the results. I remember when I first came into contact with Karma and Testem, I was like, "This is so unnecessarily complex," but then, having used it for a while, I think there is some complexity that can be removed, but if you want to do cross browser testing, there's a certain amount of that ping-ponging that's just necessary. It's something that's actually quite complex that you need to have in your stack, in your toolbox, if you want to truly test big. WIL: Yeah, and all these solutions have mechanisms for detecting when the browser has launched, restarting the browser based on its health check, etcetera, things that you wouldn't think of when just loading up a browser but that you need to think of when you're doing automated testing. CHARLES: What is it that sets apart, for example, the launcher solutions? We kind of call this class of solutions launchers, so Testem, Karma, the BigTest CLI. What is it that sets BigTest CLI apart from, say, Karma and Testem? WIL: We're trying to be as minimal config as possible and just really easy to get started and going. Karma has a lot of plugins that you need to make sure you have installed and loaded, and options set for those plugins. Testem has some stuff bundled, but it still requires this big chunk of config at the beginning that you need to pass in before it does what you want. We're trying to avoid that with BigTest CLI, and one of the ways we're able to avoid that is by just letting your Bundler handle bundling the tests. In Karma, you need karma-webpack or something.
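For context, this is roughly the kind of configuration Karma needs before it can launch anything. A minimal sketch; the exact plugins, browsers, and file paths are illustrative and depend on your project:

// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],                            // test framework adapter
    files: ['test/index.js'],                         // bundled test entry point
    preprocessors: { 'test/index.js': ['webpack'] },  // hand bundling to webpack
    browsers: ['ChromeHeadless', 'Firefox'],          // launched and captured by Karma
    plugins: [
      'karma-mocha',
      'karma-webpack',
      'karma-chrome-launcher',
      'karma-firefox-launcher'
    ],
    singleRun: true                                   // exit after one pass (CI mode)
  });
};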
Testem has some stuff that it needs, and really, we just want an in-testing mode. When you're in the testing environment, just change your index to point at your tests instead of your application, and your Bundler will do all the work; we just serve that file and collect the results. CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI. WIL: Yeah, Rollup even. CHARLES: Or even just low-level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions, and that was always something that was just a huge pain in the butt with Karma, just getting to the point where my tests are loaded. And with Testem, most of my experience with Testem comes through how it's used in Ember CLI, like the histrionics undertaken just to bundle all your test assets and your application assets and your vendor assets and kind of bootstrap that thing. It's a lot of work. WIL: Another thing that BigTest CLI doesn't include, which Karma and Testem do, is the concept of a watcher, because all these Bundlers have HMR -- hot module reloading. Rollup and things like that come with plenty of plugins, and Parcel has it set up out of the box, so if you're using your existing Bundler to bundle your tests, you get that watch feature for free, so it's another complexity that the BigTest CLI kind of eliminates. CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles. WIL: Yeah. CHARLES: You shouldn't have your launcher doing that for you, but we still do need to have some way to do that set up and tear down. When we have that testing endpoint, we have some way to say, "We're starting a test, not the application. We're ending the test, tear it down," so how do you abstract that away? WIL: That's kind of something that we can't really avoid. It is just some sort of dependency on the framework itself, your application framework. You need to mount a React app. You need to mount an Ember app, etcetera, and there's different ways to mount those things. This is one of the things that can't really be decoupled as much as everything else can, but BigTest has BigTest React and BigTest Vue, and we want to eventually get to, like, BigTest Ember. But really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. You know, no matter what you're doing, you just have a hook that mounts your application and then cleans it up on the next mount. CHARLES: It's worth pointing out here that this is kind of a core concern of testing, and testing big is being able to mount your application and tear it down with regularity and having hooks into that process. Whether you're using BigTest or not, can you still use BigTest React and BigTest Vue, even if you weren't using anything else? WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other internal BigTest dependencies. They just have pure dependencies on their frameworks.
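As a rough sketch of that pattern, assuming @bigtest/react exports a mount helper (the exact export name and signature are assumptions, so treat this as pseudocode for the idea rather than the package's documented API):

// A sketch of the setup hook described above: the helper cleans up whatever
// the previous test mounted, then mounts a fresh copy of the application.
import { mount } from '@bigtest/react';   // assumed export name
import React from 'react';
import App from '../src/app';

describe('my application', () => {
  beforeEach(async () => {
    await mount(() => <App />);           // tear down the last app, mount a new one
  });

  it('shows the dashboard', async () => {
    // interact with and assert against the real DOM here
  });
});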
CHARLES: Right, and so, you could use it even if you wanted to roll everything else by hand, or you wanted to get started somehow and you needed to do set up and tear down. Again, this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently. WIL: Then another feature of BigTest React and BigTest Vue is that the tear down happens before the set up, rather than happening after your test runs as a separate tear down. This means that whether your test passes or fails, you can look at it and play with it and inspect it and debug it much more easily than if it had been torn down. Otherwise you'd have to disable the tear down or throw a pause in there to keep it around or something. CHARLES: Yeah, I love that. When something goes wrong, you can just let the test case run, and the last test that it runs, it just leaves set up. It does the tear down right before the set up. WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with. CHARLES: If you focus in on a single test, we most commonly use Mocha, so you say a '.only' to run that single focused test, then you have the state of the application at that test case set up and ready to go. You can just play with it, you can inspect it, you can actually just use it as a starting off point and interact with the app normally as you would. WIL: I want to say Cypress does this too. They do their tear down before their set up as well. That's how you're able to play with a Cypress test. CHARLES: Yeah, I like that trick. Now, we talked about launching, set up and tear down, but we haven't actually talked about much of what actually happens in the test cases themselves. We talked about how to start and launch your test suite, how to do that across a bunch of different browsers, how inside of that you have a separate concern, which is application set up and tear down, and how you want to lean on how your actual app is actually bundled, because that fits in with the philosophy of testing big. You don't want to use an external Bundler for your test suite. You want to use your real Bundler, how the assets are actually going to look. But when it comes down to actually writing the tests, you need to be able to interact with the app at the highest level that you possibly can. When I say highest level, we want to verify that the users, when they take certain actions, will see certain outcomes, and so we want those outcomes, and we already talked about this, to be reflected in a real DOM, in a real browser. But at the same time, the real interactions, we want those to be as high fidelity as possible, so you want to be sending events to the browser. You want real mouse events, real key events, real interactions. WIL: Yeah, interacting with the application. That's another core philosophy that we kind of talked about earlier that defines a BigTest. It's the user interacting with your application. We're not calling methods and expecting certain callbacks or arguments to be passed; we're clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things we're asserting on and acting on. CHARLES: Yeah, and then it can be really tricky because these things don't happen synchronously. They're happening inside of your browser's event loop.
I click that button and then it goes off and there's some loading state, and then I might get an error message that pops up, this thing that animates out and then goes away. The state of the browser is in constant flux. It's constantly changing, and so it can be very difficult to put your finger on it and say, "I want to be in this state," if you are limiting yourself to only reading from the DOM. Some frameworks, Ember for example, give you kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization, but it can be very, very hard to coordinate these interactions. WIL: Yeah. To get to the solution, that's the BigTest interactor, which is basically modern page components or page objects. If you've ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept. It's been around for a while, but BigTest interactor has kind of a new twist on it where they're immutable, composable interactions that are also convergent, which we'll get into later, which basically means if your button's not there, it won't click the button until it is there. They're really powerful, and they make it really easy and fun to write these tests. CHARLES: Yeah, they're super powerful. I remember we talked about convergences last time when we talked about BigTest, but interactors, I think, are definitely a new development. I think we should spend a little bit of time there talking about not just the power but also the ergonomics of interactors, because they are like page components or page objects, except they're scoped to the component. Not only do they have all this wonderful stuff where it'll make sure that the component exists before it starts to interact with it and things like that, but they're composable. If I have a button, then there are certain operations that are valid for that button. I can click it. I can hover over it. I can do all these things. They're the operations that are unique to the button. Now, those might actually map to real events. WIL: Similarly, there are assertions about that button as well, like is it primary or secondary. If this button is repeated throughout your application, you might want to make sure that your form has a primary and secondary button. CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with it, both in terms of taking action and reading state from that button. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page. WIL: That's kind of what it is. It's just like you're defining an API around how your user would interact with your application and what your user would expect in the application. That's the point of page objects and interactors: you're defining this user API, essentially. CHARLES: Yeah, and so really the step that interactors take is that they take the classic page object and make it composable, so I can have, you kind of touched on this before, a modal dialog interactor, which is composed out of two button interactors. One for the primary action, one for the secondary action, and maybe it's aware of its own title text, so you can assert on the title text, but I didn't actually have to write the individual button interactors for that modal dialog interactor.
Then I might have a second modal dialog interactor, or a form that's on a modal dialog, just composed of the modal dialog interactor and the individual form components which appear on that particular modal dialog. WIL: It's essentially how we've been building applications lately with components, but this is for page objects in your tests if you want to mirror that. You don't have to have one-to-one mappings of an interactor to a component, but if you do, it's really powerful. CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best. WIL: Yeah, and on top of this, if you have a component library and your component library exports the interactor that it uses for the component test, then like we said, this BigTest technology can be sprinkled in anywhere. We don't just have to use interactors in big acceptance tests. We can use them for smaller component tests too, so if we ship these component interactors with the component library, your application that's consuming this component library can now test those components for free, without having to write their own interactors. It can just compose the interactors exported by the library. CHARLES: Man, I almost want you to repeat that word for word again, just so it can sink in. It's so awesome. Because when you actually go to write your tests, you're not starting from ground zero like, "How do I do this?" You're like, "I'm writing some tests for this thing and I'm using these components, and so I've already got the prepackaged interactions for those components." It's like you start writing your tests, and if your tests are a 10-story building, it's like you're starting on Floor 7 and you only have to walk up to Floor 10, instead of slogging up all 10 stories. WIL: One really helpful interactor that we've built in the open source stuff we've been working on is a date-picker interactor, because date-pickers can be really complex. Just having that common interactor when you have a date-picker on multiple forms, where we can just use that one interactor, means we don't have to tell every single test how to interact with that date-picker. We just say pick date and pass the date. CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it, or two if you're doing a date range or something like that. You're not like, "Oh, my God. I don't want to write the selectors to test this." You just import your date-picker interactor, you set the date, it actually worries about all the low-level events, and there you go. It feels like you're operating at a much higher level. WIL: Yeah. With the interactor API, essentially you're telling the test what the user would be doing and what the user would be seeing. CHARLES: Yeah. It's worth pointing out again. We've identified starting and launching. We've identified set up and tear down. But interaction is a core concern of BigTesting, no matter what tool you're using. One of the things that we found is interactors are something that you can sprinkle on literally any test suite if you're testing an interface, and it makes it better. We've used them inside big acceptance tests. We've used them inside Jest, doing just little component tests. There are people in the BigTest community who have used them to basically write component tests against JSDOM, and while theoretically, philosophically, you want to make those tests as big as you possibly can, you can use that piece in your test suite.
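A rough sketch of the kind of composition being described here. The helper names imported below (interactor, scoped, clickable, text) are assumptions about the @bigtest/interactor API, so treat the exact calls as illustrative of the idea rather than definitive:

// Composable interactors: a button interactor reused inside a modal interactor.
import { interactor, scoped, clickable, text } from '@bigtest/interactor';

const ButtonInteractor = interactor(class {
  press = clickable();      // fires real mouse events against the element
  label = text();           // reads visible text, like a user would
});

const ModalDialogInteractor = interactor(class {
  title = text('[data-test-modal-title]');
  primary = scoped('[data-test-primary-button]', ButtonInteractor);
  secondary = scoped('[data-test-secondary-button]', ButtonInteractor);
});

// In a test, the interactor waits for the element to exist before acting on it:
//   await new ModalDialogInteractor('[data-test-modal]').primary.press();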
If you are using a simulated DOM, if you're running in Node rather than in a browser, these interactors will still work, and you're going to get high fidelity test cases that are resilient to this asynchrony and are composable, and if you do have a full-fledged test suite, you can reuse these interactors. They are a really awesome power up that you can bring into your test suite. WIL: And they are not tied to the framework at all. We use them in React for our stuff, but we've also written some in Ember. Robert's written some in Vue and ported some tests, and one of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework agnostic, you can jump frameworks and your test suite still works and can test your application with high fidelity. CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite, because Ember comes with like a big kitchen sink of testing set up, but interactors just slotted right in and there was absolutely no issue. WIL: Yeah, and there is actually even a speed boost, because most of the Ember test helpers hook into the Ember run loop and interactors don't. There is actually a good speed boost just from using interactors. CHARLES: Yeah. This is a good point. It's a good segue because typically, we think of acceptance tests as being really slow, and one of the reasons even the people [inaudible] acceptance tests or testing big is they think it's going to take a long time. We found that actually we've been able to maintain a happy medium of testing big but also having those tests be really, really fast. When you say you get a speed boost from using interactors with Ember, where does that speed boost actually come from? WIL: I mentioned the Ember test helpers hook into the Ember run loop and interactors don't, and the reason for this is that interactors are convergent: they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, it just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care that another button at the bottom of the page has rendered yet, unless of course you have an assertion about that, but if they're convergent, you don't need to hook into the run loop and wait for the entire page to load just to interact with one piece of it. CHARLES: Right. You're just waiting and you say, "I'm expecting something to happen, and the moment I detect it, no matter what else is going on, the page could be taking 30 seconds to load, but if that button appears and I can interact with it, I can take my action then or I can make my assertion then." It's about kind of removing gates -- artificial gates. WIL: Yeah. Another common thing this helps with is animations, as most tests that are hooked into the run loop kind of have to wait for some of these animations to finish before you can even interact with the element, and that means if a modal has a half-second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now because you have to wait for that modal to come in, whereas -- CHARLES: -- Straight up flaky. WIL: Yeah, straight up flaky.
Whereas in the actual DOM, that modal is inserted pretty much immediately and can be interacted with pretty much immediately. With interactors, they don't need to wait for the animation to finish. They can just immediately interact with that modal, but of course, if you need to wait for the animation to finish, there are options for that as well. CHARLES: Yeah. If there's some fade in that needs to happen, you can kind of assert on any state, and as long as it's achieved at some point, the interactor will recognize it, and recognize it at the soonest possible time that it possibly could. I remember getting bitten on one project where the modal animations in particular were so brutal. Not only were they flaky, they just were slow because there were all these manual timeouts. It wasn't even a paper cut. It was kind of like a knife cut, like there's someone sitting there and kind of slashing you with a pocket knife. It just was a constant source of pain in your side. WIL: Yeah, and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for the animation to happen or something, you just see a sleep for four seconds with a comment because we have to wait for the components to load in. That's kind of a code smell. CHARLES: Yeah, that's just asking for trouble, both in terms of slowness and in terms of it's going to get flaky again. That has been kind of one of the most freeing things about working with interactors and working with the convergent assertions on which they're based: you just don't ever have to worry about asynchrony. Really, really truly, most of the time, you're writing your tests like it's all synchronous, and that kind of makes sense because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. They're just making observations in serial, and at some point, they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would. WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there: if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. That's because interactors wait for the thing to be there to interact with, so as soon as the button's there, it clicks it, but it doesn't wait until after that event has fired and your application has reacted to that event. We need something there like our convergent assertions that can converge on that state and wait for that state to be true before it considers itself passing, or it times out. CHARLES: Maybe we should dig a little bit into convergent assertions. I think the last time we had a public conversation on the podcast about this, this is kind of where we were: we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this, but I think it's worth rehashing a little bit because it's a unique way of approaching the system, but it's also kind of horrifying when you see how it works under the covers, when we tell people about the fact that it's basically polling underneath the covers. The timeout is configurable, but it's basically polling every 10 milliseconds to observe a state.
I remember the first time being confronted with this idea and I was horrified, and the programmer hackles on the back of my neck raised up, and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive." WIL: Yeah. That was my exact thought too: this is going to be slow. If acceptance tests are slow and we're doing an acceptance test assertion every 10 milliseconds, it's going to be really slow, and that's actually completely not the case. It's actually the opposite. They're extremely fast. CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests. WIL: Yeah, we're talking like 100 tests in just tens of seconds. CHARLES: Right. You're basically gated by how fast your framework can render. Your tests are not part of the slowness. Your test -- WIL: And also, memory leaks can be costly too. We experienced that recently where we had memory leaks that were slowing down our tests, but we fixed those up and the tests sped back up. CHARLES: Yeah, because basically, running the assertion or running the convergence is very fast. It's just a very light ping. I kind of think of it as being as light as the brush of a photon, something that's bouncing off of a surface so that you can observe it. It's extremely light, and most of the time it's just waiting, so the test and the convergence really just get out of the way. Just because they can run a thousand times or a hundred times in a second, it doesn't gum it up. But the thing is, it means that your tests run as fast as your application will run. You get back to the point... Was it in React where the kind of key insight is that JavaScript is not the bottleneck? Well, your tests are not the bottleneck. WIL: Yeah. CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences. WIL: No. We pretty much summed it up there, and that's what interactors are based on. That's how they're able to wait for things in the DOM. It basically polls the DOM until the element exists and then it moves on and actually does the interaction. CHARLES: Once again, this is actually a very low level thing on which BigTest is based, but it is, once again, something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing or assertions. It's a free-standing library that you can use in your test suite or elsewhere should you choose. WIL: There doesn't need to be a DOM for BigTest Convergence. I use BigTest Convergence in BigTest CLI to converge on the browser being launched. Instead of waiting for the browser to report that, I can just kind of poll and see how that process is doing, and the convergence waits for that process to start before moving on. CHARLES: Right. I guess the best way I've thought about it is it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism, and 99% of the synchronization mechanisms that we're used to involve some sort of callback, a promise, an event listener, things like that, or even a generator where control is handed back explicitly to a piece of code when something happens. Whereas this is a fundamentally different synchronization primitive, where you are writing synchronous code that's based on observations, so when I observe this, do this. When I observe that, do that. It's extremely robust. WIL: Yeah, very. CHARLES: It is a core piece.
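A simplified illustration of the mechanism being described, assuming nothing about the actual @bigtest/convergence API: an assertion is retried on a short interval until it stops throwing, or the whole thing gives up when a timeout elapses.

// A hand-rolled convergent assertion (not the real library): poll every ~10ms.
function converge(assertion, timeout = 2000, interval = 10) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    (function poll() {
      try {
        assertion();                        // throws while the expected state isn't there yet
        resolve();                          // passes at the earliest possible moment
      } catch (error) {
        if (Date.now() - start >= timeout) {
          reject(error);                    // give up and surface the last failure
        } else {
          setTimeout(poll, interval);       // try again shortly
        }
      }
    })();
  });
}

// hypothetical usage:
//   await converge(() => assert.equal(submitButton.textContent, 'Saved'));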
It's a fundamental thing on which interactors are based, on which the CLI is based. I don't know if it's core to writing tests but -- WIL: It definitely helps. CHARLES: It doesn't just help. We couldn't have BigTest interactor without that. WIL: No, definitely not. CHARLES: Because that's what makes it fast, that's what makes it not flaky at all, and having those things, I think, makes it easy to maintain, because you can work at the interactor level or the level of user interaction and you don't have to worry about synchronization, so the flow of your tests is very natural. WIL: Yeah. We don't have to explicitly wait for requests to be done before making an assertion about your app. That just comes with convergences, just waiting for that state in the application to be true. CHARLES: Let's talk about one more piece of the testing issue, because when you're testing big, when you're testing in the browser, there's always the issue of what are you going to do about your API. You've got to have your API running. It's just always an issue, and this is kind of interesting because it sits at the crossroads of testing big and also getting the most utility out of your test, because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality. WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API. CHARLES: Yeah, exactly, using real browsers. It just occurred to me the irony of us talking about reality being things that are still running inside of a computer processor. I think we've inherited this term from that talk that Justin Searls gave at AssertJS in 2017. It's a really, really excellent talk. I think he also gave it at RubyConf. It's the 'Don't mock me' talk. WIL: Yeah, it's one of my favorite talks. CHARLES: Yeah, it's a great talk. In it, he talks about how the value of a test is a balance of how many holes you poke in reality, and sometimes you encounter a test where all it is is holes in reality. Whether you're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API, the more holes you poke, the less useful it's going to be. The network is one of those where it can be very difficult to not poke holes in that reality, because it's a huge part of your application: how your frontend application is going to interact with the server. But at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own set up and tear down -- WIL: Their own concerns. CHARLES: Yeah, exactly. They might be in a different language. They've got runtimes; they might need external C libraries and crazy stuff like that. They're their own beast. To get a true big end-to-end test, you're going to have to stand up your server, but the problem that presents is you want your tests to also be isolatable. If I'm a developer, I can go to a repo, I can do an install of my dependencies, and I can run the tests without having to do any external dependencies other than the repository and the language in which I'm working. This is one where we kind of have tried to walk the line of not wanting to poke holes in reality but also have the test be containable to the actual application.
In order to do that, you need something that presents a high fidelity version of the network. You can kind of try and have your cake and eat it too. You want to have something that acts like a server, really acts like a server, but is actually not a server. WIL: And still poke as few holes as possible in the application and how that's all set up; we don't want to be intercepting methods and responding with fake data. That's not a good way to mock that network. CHARLES: Right. We want to be calling actual fetches, making actual XMLHttpRequests. Ideally, if you've got service workers, making actual service worker requests. WIL: Basically, as far as the application is concerned, it's talking to a real server. CHARLES: Yeah, and that's kind of the litmus test for: is it a hole in reality or is it just a really great illusion? WIL: Yeah, and that's a good name for Mirage, right? It's a really great illusion. CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage. WIL: Yeah. The main difference just being that we've taken away the Ember dependencies and the run loop stuff. It's just plain JavaScript Mirage. It works exactly the same as you use it in Ember, minus the auto imports and the file... Oh, man. I can't think of that word. Aside from automatically importing your files for your server config, you have to do that manually because Ember is what provides that, but other than that, it's the same Mirage. You define models and serializers and factories and all the good stuff. CHARLES: Right, and then you can use those factories and you can use those models to really give a high fidelity server. If you are building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios, but having that in place means that you're going to be able to have those high fidelity tests where your application is actually making XMLHttpRequests, but it's all isolatable, so that it can be run in the repo. This isn't really related to testing, but it has a fantastic capability where you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented. WIL: Yeah. That's extremely powerful. That's what we were talking about earlier and getting at with the scenarios, which are setting up specific, essentially, fixtures, but you're generating these fixtures. Factories are essentially high-level fixtures, network fixtures. CHARLES: Yeah, a higher order of fixtures. WIL: Yeah, so the scenarios are just setting up these fixtures for a scenario of your application, like the backend is down, or the list only responds with two items as opposed to 5,000 items, something like that. You want to be able to not only test these things but be able to develop against them, and Mirage makes that really easy because you can just start your app with Mirage enabled, pointed at that scenario, and you're there. You have that exact scenario to develop in. CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be. We've used it now on at least four projects where we developed the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API is going to be.
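A sketch of what that kind of Mirage-style setup might look like. The @bigtest/mirage import path and the exact option names are assumptions about that package; the shape mirrors how Mirage is configured in the Ember world.

// An in-browser fake server: models, generated factory data, and route handlers.
import { Server, Model, Factory } from '@bigtest/mirage';   // assumed import path

const server = new Server({
  models: { user: Model },
  factories: {
    user: Factory.extend({
      name: i => `User ${i}`          // generated data instead of static fixtures
    })
  },
  baseConfig() {
    // the app's real fetch/XHR calls are answered by these handlers
    this.get('/users');
    this.post('/users');
  }
});

// Prepopulate a scenario, e.g. "only two users exist"
server.createList('user', 2);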
I know we've talked about this on the podcast before, but it's really an incredible technology, and it is available to you no matter what framework you're using. I think it's one of the best kept secrets in JavaScript development. WIL: Yeah. That's definitely great. That said, it does have some shortcomings. It's great, but it can be a little slow sometimes, so we are eventually working on BigTest Network, another piece of the BigTest pie that you'll be able to sprinkle into your application, but in the meantime, praise Mirage. CHARLES: Yeah. We are going to be offering an alternative, or maybe collaborating on another version of Mirage, but hopefully we can make Mirage faster, we will be able to make this thing faster, so that it can use service workers and be used in a bunch of different scenarios. Just to recap, we've talked about a lot of different components, but over the past year, couple of years, these are the things that we've identified as being really key components, a big part of your acceptance testing and really your testing stack. How are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and taking action on behalf of the user, and still have it be maintainable, have it be resistant to flakiness, have it be performant? BigTest is the answer to that for those particular areas of the testing story, and so in some areas we're using existing components: we use Karma, we use Mirage to date. Those we did not develop, but where we see key pieces of that puzzle missing is where we started writing the BigTest solutions, so things like the interactor. Eventually, we are going to make BigTest into a product that you're going to be able to use kind of out of the box, just like you might install Cypress, where it's a very quick set up and we make all of the decisions about the components for you. But in the meantime, we're really trying to take our time, identify those pieces of the puzzle, and build the software component that fits each piece of the puzzle the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like convergence, things like interactor, things like BigTest React, BigTest Vue and very soon BigTest Ember. These are things that you can use today to make your tests just that much bigger and that much better, especially interactor. It's been an incredible journey this past year as we've developed these individual pieces, and there's just going to be more goodness to come. WIL: Absolutely. Right now, I'm working on some validation-type API for interactor that I'm hoping to land soon. That'll open up the possibilities of maybe hiding away those convergent assertions a bit more in your tests and just handling this automatically. It'll be pretty good. CHARLES: It's really exciting. Writing tests has gotten more and more easy and more and more fun over the last year for us. I think we're already starting in a pretty good place. If you have any questions about BigTest, how would folks get in touch with us? WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them. CHARLES: And as always, you can ask us directly.
You can send email to Contact@Frontside.io or reach out to us on Twitter at @TheFrontside or you can actually reach out to the BigTestJS Twitter account directly and just call us on Twitter at @BigTestJS. Thank you very much, Wil. WIL: Thank you, Charles.
Mark and Melanie are your hosts again this week as we talk with Steren Giannini and Stewart Reichling discussing what’s new with App Engine. Particularly its new second generation runtime, allowing headless Chrome, and better language support! And automatic scalability to make your life easier, too. App Engine also has an interesting way of inspiring new Google products. Tune in to learn more! Steren Giannini Steren Giannini is a Product Manager on Google Cloud Platform (GCP). He graduated from École Centrale Lyon, France and then was CTO of a startup that created mobile and multi-device solutions. After joining Google, Steren launched Stackdriver Error Reporting and now focuses on GCP’s serverless offering. Recently, Steren has been working on upgrading App Engine’s auto scaling system and bringing Node.js to App Engine standard environment. Stewart Reichling Stewart Reichling is a Product Manager on Google Cloud Platform (GCP). He is a graduate of Georgia Institute of Technology and has worked across Strategy, Marketing and Product Management at Google. He currently works on bringing new runtimes (Python, Node.js, +more to come!) to App Engine and Cloud Functions. Cool things of the week Robot dance party: How we created an entire animated short at Next ‘18 blog What’s happening in BigQuery: integrated machine learning, maps, and more blog Protecting against the new “L1TF” speculative vulnerabilities blog Interview App Engine site Deploying Node.js on App Engine standard environment video Introducing headless Chrome support in Cloud Functions and App Engine blog Node 8 site Python 3.7.0 site App Engine PHP 7.2 Runtime Environment Beta site Headless Chrome site GCPPodcast Episode 23: Humble Bundle with Andy Oxfeld podcast Google Cloud Datastore site App Engine Task Queue site Ubuntu site gVisor site Open-sourcing gVisor, a sandboxed container runtime blog App Engine Documentation site gcloud app deploy site To send feedback, email stewartr@google.com or steren@google.com App Engine Google Group forum Operating Serverless Apps with Google Stackdriver video App Engine’s new auto scaling system - scheduler blog Question of the week What does it mean when the recommendation is to update your image? Getting Image Vulnerabilities site Updating Managed Instance Groups site Node Images site Where can you find us next? Melanie will be at Deep Learning Indaba and Strangeloop. Mark will be at Pax Dev and Pax West starting August 28th. In September, he’ll be at Tokyo NEXT and Strangeloop.
Jake and Michael return to share some of the Aussie slang Jake has been learning and cover such groundbreaking topics as this year's Hacktoberfest, deep delegation, gathering requirements, and more!
Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Gemfile's new clothes, Introduction to Concurrency Models with Ruby. Part II and Rails on Docker: Using Docker Compose with Your Ruby on Rails Apps Headless Chrome vs PhantomJS Benchmark and Superfast CSV imports using PostgreSQL's COPY command What service objects are not, About Rails concerns and Interactor - a common interface for performing complex user interactions JavaScript: Announcing Yarn 1.0, Why we moved from Angular 2 to Vue.js (and why we didn't choose React) and Cycle.js: A Unified Theory of Everything for JavaScript Size Limit: Make the Web lighter, LookForward.js - a small library that helps you to create smooth transitions between pages with the easiest way, React PowerPlug - set of components to you add different types of state in your dumb components and Rythm.js - a javascript library that makes your page dance Our guest: Ivan Fokeev Github Work
This week, we’re wrapping our heads around Headless Chrome with Tim Holman!
The time has come. Andreas will soon have an episode for every passion. This time it's all about shaving: whether forehead, belly fold, or toenail, here and today everything gets shaved. The pro gives an overview for men and women. Dear passenger, if you like what you hear, or if it puts worry lines on your noble forehead, then we have something for you: iTunes reviews. Follow-up Headless Chrome: Chrome 59: Headless Chrome, Native Notifications on macOS and the Image Capture API - YouTube Monument Valley 2 Day One Encryption is here Final Cut Pro Radio praises the new Macs FCPX and the Mac are currently the only platform that can handle 16 4K streams at the same time Thanks to the extended color profile, the Mac can also be used for most extensive color-correction work. The new iMac is good enough to handle 99% of all video jobs. The new Macs bring a level of performance with which video, for the first time, comes close to the comfortable world of audio workflows. Binky: Your New Favorite App Access to WhatsApp: interior ministers want to wiretap messenger services Mac Rumors: Report Reveals In-App Purchase Scams in the App Store AdBlock Shave me and I'll make you smooth Exes and leftovers: Remington MB4030 beard trimmer Philips Bodygroom 7000 TT2040/32 Platinum ear and nose hair trimmer The smallest nose/ear hair clipper Shaving foam (maybe warm?) Reference Post-Shave Alum More products Which direction is best? Face map divided into "quadrants". Charles Roberts' Method Shaving Method Shaving at mantic59 Recommended YouTube channels mantic59 Adjustable Safety Razors How many shaves in a DE blade Ritual Shave Busta PaulH geofatboy nick shaves Brands Merkur Feather Mühle Edwin Jagger Kiehls Taylor of Old Bond Street Parker Gillette (Fat Boy) Jack Black Proraso Nivea What is there to buy? Razors Safety razor MÜHLE - classic safety razor - open comb Double-edge safety razors Single-edge safety razors There are safety razors with a "loading mechanism" Straight razors Shavette Electric razors Blades and their differences Tip: order a sample pack and try them out Brushes Horse, boar, some other animal Synthetic Shaving foam The problem with "classic" shaving foam Shaving foam On the road: Nivea Speick shaving stick Shaving soap Oils Aftershave Alcohol vs. sensitive Other accessories Blade bank from Mühle Razorpit Razor blade comparison DE Blade Challenge - GDCarrington1 What Is The Best Razor Blade Blackland Sabre: Sabre — Blackland Our picks Andreas: Mountain Duck ($39) Patrick: Web Snapper In a giving mood? We have Flattr and PayPal ready and would be delighted.
Good day, dear listeners. We present a new episode of the RWpod podcast. In this episode: Ruby: Sinatra 2.0.0, Active Admin 1.0, The Lesser-known Features in Rails 5.1 and Building a Rack::Attack Dashboard Improving capistrano deployment performance, Crafting Better Code Reviews and Announcing the RubyLetter Podcast Crystal from a Rubyist's Perspective, Capistrano AWS, Pwrake: Parallel Workflow extension for Rake, runs on multicores, clusters, clouds and RailsConf 2017: Why Software Engineers Disagree About Everything (video) JavaScript: Node.js 8.0.0 has been delayed and will ship on or around May 30th, Prepack - a partial evaluator for JavaScript, Autoprefixer 7.0 and Browserslist 2.0 and PostCSS 6.0 ECMAScript modules in browsers, JavaScript: The compilation epoch, UX drives all of this and 45% Faster React Functional Components, Now Getting Started with Headless Chrome, SmartPhoto.js - the most easy to use responsive image viewer especially for mobile devices, Pkg - package your Node.js project into an executable, Typefont - recognises the font of a text in a image and SpectorJS - explore and troubleshoot your WebGL scenes
In this episode we discuss custom lightning development, PhantomJS and Headless Chrome, Apple’s Q2 earnings, Hulu TV, Oracle restructuring its sales team, Benioff’s $400 billion job creation goal, and where to meet up for happy hour at Texas Dreamin 2017. How Hulu Reinvented Itself for Live Tv Salesforce CEO Marc Benioff dishes on his $400 billion job creation goal Apple boss Tim Cook says 'reports about future products' likely delayed quarterly iPhone purchases Massive Oracle sales re-org to accelerate cloud cash drive Ecobee4 PhantomJS Getting Started with Headless Chrome Trailhead.com Roger Mitchell Blog Brett Nelson Blog Eureka - Austin, TX