Podcasts about QUnit

  • 14 PODCASTS
  • 21 EPISODES
  • 58m AVG DURATION
  • ? INFREQUENT EPISODES
  • Jul 4, 2022 LATEST

POPULARITY

[Popularity chart: 2017-2024]


Best podcasts about QUnit

Latest podcast episodes about QUnit

UI5 NewsCast
UI5 NewsCast 028 wdi5 - UI5's Open-Source End-to-End Testing Framework

Jul 4, 2022 · 42:57


SAP integrated and developed several testing frameworks to test UI5 applications, libraries, controls and modules: QUnit for functional testing, OPA5 for integration testing and UIveri5 for end-to-end testing. In 2020, wdi5 (pronounced /vdiai5/) was started by Volker Buzek at j&s-soft with a focus on testing UI5 web apps in hybrid scenarios - the scenarios which aren't supported by UIveri5. At the beginning of 2022, UIveri5 was deprecated, mainly for technical reasons, and a successor was needed. SAP started sponsoring j&s-soft's wdi5 project to establish it as UI5's open-source end-to-end testing framework. In this podcast, Christoph talks with Volker, Hristo and Peter about the testing options for UI5, past and present, how wdi5 is to be handed over into the responsibility of the UI5 community, migration steps from UIveri5 to wdi5, technical insights into wdi5, and the open gaps and next steps. Tune in and learn about the history, getting started, migration and the future direction of UI5 testing. Today's guests: - Volker Buzek, IT Architect at j&s-soft GmbH and inventor of wdi5 - Hristo Manchev, Product Owner for UI5 test automation tools at SAP - Peter Muessig, Chief Architect of SAPUI5 at SAP

Programming By Stealth
PBS 135 of X – Introducing Jest (and re-Introducing Test Driven Development & Unit Testing)

Feb 20, 2022 · 70:23


In this week's installment of Programming By Stealth, Bart takes us down memory lane to 102 episodes ago, when he first introduced us to the concept of test-driven development. He explains why back then he taught us how to use QUnit for our TDD work, and why it has since fallen out of favor with him. It's not just advancements in technology like ES6; it's also that QUnit makes it terribly hard to write tests and to interpret what you've written when you've been away from it for a while. He walks us through his criteria for picking a new TDD tool, and why he chose Jest for the job. He then walks us through a worked example of how to write some simple tests on a module and, of course, explains how Jest does its job running our tests. I liked it, even though my head hurt during a bit of it! You can find Bart's fabulous tutorial show notes at pbs.bartificer.net.

Chit Chat Across the Pond
CCATP #715 – Bart Busschots on PBS 135 of X – Introducing Jest (and re-Introducing Test Driven Development & Unit Testing)

Feb 20, 2022 · 70:23


In this week's installment of Programming By Stealth, Bart takes us down memory lane to 102 episodes ago, when he first introduced us to the concept of test-driven development. He explains why back then he taught us how to use QUnit for our TDD work, and why it has since fallen out of favor with him. It's not just advancements in technology like ES6; it's also that QUnit makes it terribly hard to write tests and to interpret what you've written when you've been away from it for a while. He walks us through his criteria for picking a new TDD tool, and why he chose Jest for the job. He then walks us through a worked example of how to write some simple tests on a module and, of course, explains how Jest does its job running our tests. I liked it, even though my head hurt during a bit of it! You can find Bart's fabulous tutorial show notes at pbs.bartificer.net.

The Frontside Podcast
090: Big Testing in JavaScript

Nov 30, 2017 · 35:56


How do we ensure a high level of quality and maximize the refactorability of our code? Frontsiders, Wil and Charles, talk about their battle-tested techniques for testing web applications, not only in React JS, but in any JavaScript framework. Links Acceptance Testing Integration Testing jest Cypress.io Assertion Ember CLI Mirage mirage-server react-trigger-change Transcript CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 90. My name is Charles Lowell, a developer here at The Frontside and your podcast host-in-training. And with me today is Mr. Wil Wilsman. Wil, who just got back from Nodevember, just walked straight into the office and is ready to podcast with us on a very, very, very interesting subject, I think, today. We're going to be talking about acceptance testing in JavaScript applications, especially some of the techniques that we've developed here around testing React applications based on the lessons that we learned from the Ember community. But really, more than just React applications. Really, testing any JavaScript application from the inside out, making acceptance tests for that. So, I think we're going to talk about some of the challenges that you encounter and some of the really novel solutions that are out there that we had nothing to do with. And I guess we really didn't have, just more of cobbling together of various techniques for a powerful witch's brew for acceptance testing. Anyway, so Wil, just to round out the problem space or explore the problem space, what are some of the challenges that you encounter with an acceptance test? Actually, let me back it up even further. What is an acceptance test in a JavaScript application compared to what people normally encounter? WIL: Acceptance testing or end-to-end testing is just a problem that every JavaScript app should face. Not everyone does, but they definitely should. And basically, it's how the user interacts with your app through the browser. And every part of that we want to test, from the browser triggering browser events, interacting with the app, not calling functions or clicking buttons, and we're pretending we're a user. CHARLES: Yeah. You know, I know that when we showed up in the React space, that was not really the way that most people tested their applications. WIL: No, not at all. They're all about unit testing. Make sure every small piece of your code works, and to some degree integration testing, making sure your components work with other components. But nothing is out there really for those big acceptance tests where you want the user to click a button and expect them to be brought to a page or these fields to be filled out, et cetera. CHARLES: Mmhmm. Yeah. And there certainly was a very high level of maturity around unit testing, like you said. There are tools like Enzyme and… WIL: Jest. CHARLES: Yeah, Jest. But I was actually shocked to find out that Jest didn't even run in the browser. WIL: Yeah, it's all virtual. CHARLES: It's all virtual. It's completely and totally simulated and stubbed. And that presents some problems. WIL: Yeah. The main problem is cross-browser testing. Some people might consider that to be separate from their acceptance testing but you should be able to just run your acceptance tests in multiple browsers and be able to also test cross-browser support. CHARLES: Mmhmm. Yeah. And so, if you're using something like Jest, you're never actually running the code inside Safari. You're never actually running it inside Internet Explorer. 
You're actually running it in NodeJS. WIL: And you know, your user is not going to run it in Node. [Laughter] WIL: They're going to use a browser. CHARLES: I don't know about your users. WIL: [Chuckles] CHARLES: [Laughs] You know, we like to stick to the pretty advanced. It's like, go to getNodeJSBrowser.com. WIL: [Laughs] CHARLES: Enough of this Firefox BS. But no, seriously, it was certainly a problem. We were looking around, because we never like to build anything ourselves if we can avoid it. But it really just seemed like there was not an off-the-shelf solution for writing these big style acceptance tests in JavaScript. There are some services out there. There's a couple now. What was the… WIL: I think the main one here is Cypress. CHARLES: Cypress, yeah. So, there's Cypress now. I've watched the instructional videos but never actually tried to integrate it into my application. WIL: Yeah. I think at its core it takes the same approach that we've been doing with how we're interacting with our tests. CHARLES: Mmhmm. Okay. The main difference is, is it that it's a service? Like you have to edit your tests through… WIL: Yeah. CHARLES: Their web browser, their web interface, and use their assertion library? WIL: Yeah. I'm not sure about the editing part. But yeah, it's their assertion library. I'm pretty sure it's their test runner and it's their testing environment. Really, the only control through that is through their UI, or through settings, basically. And you're stuck with those. You can't use other… I don't think you can use Mocha with Cypress. CHARLES: Right. WIL: Although it's very much like Mocha… CHARLES: Right. WIL: It's not. CHARLES: Right, right. And I also noticed, we'll touch on this later, the assertions, most of the side effects that were happening were happening right there inline inside your assertions. And that might be an opaque statement, but we will actually get into that later. WIL: Yeah. And I think one of the things about their side effects so to speak is everything leading up to a side effect is a promise with Cypress. CHARLES: Mmhmm. WIL: So, when you select a button and click it, Cypress is going to wait for that button to actually exist before it clicks it. CHARLES: Right, right, which is actually pretty cool. So, that's actually a perfect intro into one of the primary challenges with doing acceptance testing in general in a JavaScript application. This is a problem when you're doing it in Ember. It's a problem in React. It's really a problem anywhere. And that is, how do you know when the effects of a user's interaction have been realized? Right? WIL: Yeah. And in Ember you take advantage of the run loop. Once that action happens, you wait for the run loop to complete and then your tests run. CHARLES: Right. So, the idea is that I've clicked some button or I've typed some key or I've moved the mouse. And then I listen for the run loop and when it's "settled" then I can now run my assertions because I know that the side effects that I was looking for have now been realized. WIL: Yeah, hopefully, if you're... CHARLES: Hopefully. WIL: Writing your app right. [Chuckles] CHARLES: Right, right. But that actually presents some problems in itself because it requires visibility into the internals of the framework. WIL: Yeah, so Ember is built with testing in mind. CHARLES: Mmhmm. WIL: And other libraries like React just being a view library might not be built with testing in mind. 
So, we don't have those hooks to wait for this loop to complete, wait for all of these things to be rendered before you continue. CHARLES: Exactly. And so, this is, I think it's actually kind of both a blessing and a curse. Because there are such strong conventions in Ember, they were able to build this wonderful acceptance testing regimen from the get go. WIL: Yeah. CHARLES: But like you said, that doesn't exist at all in the React ecosystem. And so, what do you do? There's no run loop. You're cobbling together a bunch of different components. And maybe you're using Redux, maybe you're using MobX, maybe you're using… you're certainly using React. And all of these things have their own asynchronies built-in. And there's not one unifying abstraction that's keeping track of all the asynchrony in the system. And so that presents a challenge. So, the question is then, if you're trying to not actually check and observe the state of a system until the right moment, how do you know when that right moment is? WIL: Yeah. And in early testing of a side React project I had, I would basically wait for a state to be complete before I continued my ‘before each'. And in the testing we're doing now, it's essentially what we're doing except the state is what the browser sees, or what the user would see in the browser. CHARLES: So you were actually querying the... WIL: Yeah. So, I was using Redux. So, in my app I was saying, when the Redux app is done loading... CHARLES: Mmhmm. WIL: The instance is set to true or false, then continue the test. CHARLES: Mmhmm. And so, what that means, what we're doing, is doing the same thing except observing at the DOM... WIL: Yeah, exactly. CHARLES: Level. And what it means is we actually… I would love to set this up and have a big reveal but I guess we'll just have a big reveal, is that essentially what we do is polling. WIL: Yeah. CHARLES: Right? So, when we run an assertion, let's say you click a button and you want the button to become disabled, there's an inherent asynchrony there. But what we will do is we'll actually run the assertion to see if it's disabled not one time. We'll run it a thousand times. WIL: Yeah, as many times as needed until it passes. CHARLES: Right, exactly. As many times as needed until it passes. And I think that is, at least to most programmer instincts, an odious idea. WIL: Yeah. CHARLES: [Laughs] WIL: It's like, “Oh wait, you're just looping over every single assertion how many times?” CHARLES: Yeah, exactly. And it feels, yeah, it feels weird as an idea. But when you actually see the code that it produces, it just sweeps away so much complexity. WIL: Yeah. And… CHARLES: Because you don't worry about asynchrony at all. WIL: Yeah. And it's pretty genius. If I'm a user and I click a button, it's loading when I see that it's loading. So, our tests are going to wait until that button says it's loading. And then the test passes. CHARLES: Right. And so, what we do is we essentially, we use Mocha but you could do it with QUnit or anything else, is that when you run your assertion, you declare, you have an ‘it' block or I guess, what would it be in QUnit? WIL: A test? CHARLES: A test? WIL: I think it's just a test. CHARLES: You have your test block. And so that function that actually runs the assertion and checks the state will actually run, yeah it could run three times. It could run a thousand times. It's just sitting there waiting. And it will time out. 
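(The polling idea described above can be sketched in a few lines of plain JavaScript. This is an illustrative reconstruction, not Frontside's actual helper; the name convergeOn, the retry interval, and the default timeout are all assumptions. As Charles and Wil explain next, the trick is to surface the last real error only once time runs out.)

```javascript
// Hypothetical sketch of a convergent assertion: re-run `assertion`
// until it stops throwing, or reject with the last real error once
// the timeout elapses (names and defaults are assumptions).
function convergeOn(assertion, timeout = 2000, interval = 10) {
  const start = Date.now();
  return new Promise((resolve, reject) => {
    (function attempt() {
      try {
        resolve(assertion()); // the assertion passed; we've converged
      } catch (error) {
        if (Date.now() - start >= timeout) {
          reject(error); // out of time; throw the real failure
        } else {
          setTimeout(attempt, interval); // try again shortly
        }
      }
    })();
  });
}

// In a Mocha test, the `it` block simply returns the promise:
it('disables the button after it is clicked', function() {
  return convergeOn(() => {
    const button = document.querySelector('#save');
    if (!button || !button.disabled) {
      throw new Error('button is not disabled yet');
    }
  });
});
```

(Because the assertion only reads state, retrying it even hundreds of times is cheap, which is where the counterintuitive speed win discussed below comes from.)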
And it will only fail if that assertion has failed a thousand times or it has failed through, I think two seconds is our default. WIL: Yeah, yeah. I think we default to the runner's default timeout. CHARLES: To the runner's default timeout, yeah. WIL: Yeah. Or you can set that yourself with how we have it set up. And the other thing that comes from that is if your tests are only failing when they time out, how do you know what's actually failing? And our solution to that was we catch the error every time it fails and right before the timeout actually happens we throw the real error. CHARLES: Yeah. Exactly. But the net effect is that you're able to write your assertions completely and totally oblivious of asynchrony. You don't, we don't have to worry about asynchrony pretty much at all. I mean, we do, and we'll get into that. So, I made a global statement and then immediately contradicted it. WIL: [Chuckles] CHARLES: But hey, you got to be controversial. But for the most part, asynchrony just disappears because asynchrony is baked into the fabric. So, rather than thinking about it as a one-off concern or a onesie-twosie, it's just every single assertion is just assumed to be asynchronous. And so, that actually means you don't have to deal with promises. You don't have to deal with run loops. You don't have to deal with anything. You just write your assertion and when it passes, it passes. And there are some really unique benefits for this. And there are some challenges. So, I think one of the first benefits is that it's actually way faster. WIL: Right. CHARLES: Which is counterintuitive. WIL: It's incredibly fast. CHARLES: It's very fast. WIL: Yeah, for all the loops that are happening you might think every loop is going to slow it down slightly. But it really doesn't. Our tests, each test, even though it asserts five or six times, it takes milliseconds. CHARLES: Mmhmm. WIL: The test itself might only loop twice. CHARLES: Right. Exactly. Whereas if you're waiting for a run loop to settle, you might have some… you click a button, it disables, it also fires off an Ajax request and does all this stuff. But if all my assertion wants to know is "Is this button disabled?" then I only need to assert until that has happened. I don't need to wait until all the side effects have settled... WIL: Mmhmm. CHARLES: And then do the assertion. I just know, "Hey, my assertion, the thing that I was waiting for - that happened. Let's move on." Yeah. And so, it's so, so fast. And that was actually, I didn't predict that. But I was definitely pleasantly surprised. WIL: Yeah, that was a very nice surprise. And all of our tests ran so much quicker than they would have in a run loop environment with Ember or something. CHARLES: Right. Yeah, yeah. That was, we actually had just come off a project where we were having that thing, that exact problem, which was that yeah, our animations were slow. Or, the animations were fine. They were perfect. [Laughs] But they were slowing down the tests. WIL: Yes. I think in that project, it was like, 30-minute tests... CHARLES: Mmhmm. WIL: For the whole suite to run. CHARLES: Yeah. To which I'll add a public service announcement. I think this is a conjecture but I do believe that animations are best applied not to individual components but by the thing that uses a component. So, I shouldn't have an animation that's like, implicit to a dialog. It should be the thing that's showing the dialog that gets to decide the animation to use. Anyway, just throwing that out there. 
WIL: [Laughs] CHARLES: Because animations are about context. And so, the context should provide the animation, not the individual atom. Anyway, moving on. WIL: Some other podcast. CHARLES: Yeah. [Laughter] CHARLES: That's another podcast right there. But there's also, this does present some challenges or requires code to be structured in a way that facilitates this. So, there are some challenges with this approach, some things you need to be aware of if you're using this kind of system. We've kind of settled on a name for what we call these types of assertions and these types of systems. WIL: Yeah. We call them convergent assertions, because you're converging on something to happen. It's going over and over until it happens. CHARLES: Right. WIL: And yeah, a lot of these challenges that we've come across are things that you might not think of, like there are a few instances of false positives... CHARLES: Mmhmm. WIL: That happen with these convergent assertions. CHARLES: Right. So, what would be an example there? WIL: So, the most common example that I'm seeing so far is when you're asserting that something didn't happen. CHARLES: Mm, right. WIL: That would immediately pass. But if it takes your app... CHARLES: [Laughs] WIL: A few seconds for it to actually happen, then you could still have an actual failure but your test passed immediately. CHARLES: Right, right. So, what's the countermeasure then? WIL: We invert our assertions. So, we make sure they fail for a certain amount of time. CHARLES: Right. So, the normal case where you just want to say, “I want to make sure that my state converges to this particular state.” WIL: Alright. I said fail at first. I meant, pass. We have to make sure it passes for a certain period of time. CHARLES: Right, exactly. WIL: So yeah, the normal way is it fails until it passes, and then it passes. When you invert one of these convergent assertions, you're just making sure it passes repeatedly and if it fails at any point, you throw a failure. CHARLES: Right, okay. And so, that's like, if I want to check that the button is not disabled, I need to check again and again and again and again. WIL: Until you're comfortable with saying, “Alright. It's probably not going to be disabled.” CHARLES: Yeah, exactly. And so there, it's kind of weird because it is dependent on a timeout. WIL: Mmhmm. CHARLES: You could go for two seconds and then at the very end it becomes disabled. So, you just kind of have to take that on faith. But... WIL: Yeah. CHARLES: In practice, I don't think that's been much of a problem. WIL: No. CHARLES: It's more indicative of, if your button disables after... WIL: A few seconds. CHARLES: A few seconds, what's up with your... WIL: Yeah, what's up with your app? CHARLES: Yeah, exactly. WIL: If you're waiting for an Ajax request or something, an example, then you should be using something like Mirage Server. CHARLES: Right. Which is, man, we got to get into that, too. There are a couple of other things that I wanted to talk about too, with these convergent assertions. And that is, typically when you look through the READMEs for most testing frameworks, you see the simple case of the entire test, the setup, the teardown, and the actual assertions, are in the actual test. WIL: Yeah. The ‘it' block in Mocha or the test block in QUnit. You click a button, and then make sure it's disabled, and it moves onto the next test. CHARLES: Mmhmm. WIL: Then you click a different button or the same button and you assert something else in the next test. 
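(The inverted case just described, asserting that something does not happen, flips the loop: the check has to keep passing for the whole window. Again a hypothetical sketch in the same style as convergeOn above, not a published API:)

```javascript
// Inverted convergence: `assertion` must pass continuously for the
// whole window; if it ever throws, fail immediately with that error.
function convergeAway(assertion, timeout = 2000, interval = 10) {
  const start = Date.now();
  return new Promise((resolve, reject) => {
    (function attempt() {
      try {
        assertion();
      } catch (error) {
        return reject(error); // it happened after all: a real failure
      }
      if (Date.now() - start >= timeout) {
        resolve(); // held for the entire window: accept it
      } else {
        setTimeout(attempt, interval);
      }
    })();
  });
}

// e.g. "the button never becomes disabled":
it('keeps the save button enabled', function() {
  return convergeAway(() => {
    if (document.querySelector('#save').disabled) {
      throw new Error('button unexpectedly became disabled');
    }
  });
});
```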
CHARLES: Mmhmm. Right. WIL: And yeah, you can't do that with convergent assertions because they're looping. So, if you click a button in a loop it's going to keep clicking that button over and over and over again. CHARLES: Yeah. [Laughs] Right, right. So, it means that you need to be very conscientious about separating the parts of your tests that actually do the things that actually act the part of the user from the part of your test that's about observation. WIL: Yeah. So, our solution to that is we move out all of our things that have side effects like clicking a button or filling in a form, all that stuff happens in ‘before each's. And all of our actual assertions happen in these convergent ‘it' blocks that loop over and over again. So, our ‘before each' runs and clicks the button and then we have 10 or so tests that will loop and wait for various states to... CHARLES: Yeah. WIL: Be true. CHARLES: Right. That means that yeah, all these assertions do is they read state. And you just have to, you do have to be conscientious. You're not allowed to have any side effects inside your tests, your actual assertion blocks. WIL: Mmhmm. CHARLES: But that's actually, it's a good use case for the whole Act-Arrange-Assert, which has been around way, way, way before these techniques. But here, we're doing Act and Arrange in our ‘before each'... WIL: Mmhmm. CHARLES: And then we're doing Assert later. And I think it actually leads for more readable... WIL: Yeah, definitely. CHARLES: Things. WIL: And it also opens the door to something that we can't really take advantage of yet but if you have 10 assertions with one ‘before each' side effect, you could run all of those assertions in parallel. CHARLES: That's right. WIL: And your tests would be 10 times more faster. CHARLES: Mmhmm. Yeah, exactly. Or you could run them in parallel or you could just run them one after the other but you wouldn't have to run that ‘before each' 10 times. WIL: Yeah. But something with that, that I found, is if we move all of our side effects to ‘before' blocks instead of ‘before each' blocks, sometimes a test three tests down that's waiting for something to happen... CHARLES: Yeah. WIL: That thing might have already happened earlier... CHARLES: Yeah. WIL: And it already went away. A loading state is the best example of that. CHARLES: Mmhmm. WIL: You show the loading state, the loading state goes away. So, if you move that button click into a ‘before' and that loading state test is three tests in, that loading state is already going to be gone. CHARLES: Yeah, so I think the long story short is we've kind of come to the conclusion that we would have to write our own runner. WIL: Yeah. CHARLES: Essentially to take advantage of this. But that said, we've done some sketching about what we would gain by writing our own runner. And the speed, we're talking about exponential speedups. WIL: Yeah. CHARLES: Maybe taking an entire acceptance test suite and having it run in five or six seconds. WIL: Yeah. We're talking about these tests that are already extremely fast. CHARLES: Mmhmm. WIL: Each test takes a few milliseconds or tens of milliseconds to complete. But then if you can run all of those at the same time, all of your tests for that entire ‘describe' block just ran tens of milliseconds. CHARLES: Right. Yeah. So, it's really exciting and pretty tantalizing. And we would love to invest the time in that. I've always wanted to write our own test runner. But never had, [chuckles] never really had a reason. WIL: Yeah. 
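(The test structure Wil describes next, side effects in 'before each' blocks and read-only assertions in the 'it' blocks, looks roughly like this in Mocha, reusing the hypothetical convergeOn helper sketched above:)

```javascript
describe('saving a record', function() {
  // Act and Arrange: every side effect happens once, up front.
  beforeEach(function() {
    document.querySelector('#save').click();
  });

  // Assert: these blocks only read state, so each one can be
  // retried in a loop without re-triggering the click.
  it('disables the save button', function() {
    return convergeOn(() => {
      if (!document.querySelector('#save').disabled) {
        throw new Error('save button is still enabled');
      }
    });
  });

  it('shows the loading indicator', function() {
    return convergeOn(() => {
      if (!document.querySelector('.loading')) {
        throw new Error('no loading indicator yet');
      }
    });
  });
});
```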
CHARLES: Certainly not just for the sheer joy of it. Although I'm sure there is joy in writing it. But that, yeah, we'll have to wait on that. But I am actually really excited about the idea of being able to maybe bring this back to the Ember community. WIL: Yeah. CHARLES: Because acceptance tests getting out of control in terms of the speed is I think a problem with Ember applications. And I think this would do a lot to address that. WIL: Yeah. CHARLES: I think, how long, if we were just using a stock Ember acceptance testing setup for this, I think we have about 250 tests in this React app... WIL: Yeah. CHARLES: How long does it take to run? WIL: Right now I think our tests take something like 20 seconds. And that's also somewhat due to they have to print the tests on the screen on Travis so that takes a little time. In an Ember setup, that could maybe take a few minutes. I mean, that's not that big of a deal, a few minutes. CHARLES: Right. WIL: But compared to 20 seconds. CHARLES: Right. You're still talking about an order of magnitude... WIL: Yeah, exactly. CHARLES: Difference. And using this, I think you could get a 30-minute test suite... WIL: Yeah. CHARLES: Down to the order of 3 minutes. WIL: Now when we're talking about those times, we're talking about the tests themselves. Of course, the CI would have to download stuff and set up the [inaudible]. CHARLES: Mmhmm, right, right. WIL: And that of course all adds to the time. CHARLES: Yeah, mmhmm, yeah. Earlier you mentioned, we talked about Ember CLI Mirage. This is actually something that, having now been using it for what, 2 years or something like that, it's just… it's impossible... WIL: To go back, yeah. CHARLES: To go back. It is. It's like [chuckles] you come outside the Ember community and you're like, "How is anybody ever dealing without this?" WIL: Yeah. CHARLES: [Laughs] WIL: A lot of the mocks are usually mocking the function that makes the request and it returns it in that function. That's what's out there currently, minus the Mirage stuff. CHARLES: Mmhmm. WIL: But once you use Mirage, you're mocking the requests themselves. CHARLES: Yeah. And you've got such great support for the whole factories. I love factories. It's something that is very prevalent in the Ruby community, and maybe not so much elsewhere. But the ability to very, very quickly crank out high-fidelity production data... WIL: Yeah. And you don't have to have files upon files of fixtures. CHARLES: Yeah, exactly. And you can change, if something about your schema changes, you can change the factory and now your test data is up and running. So, Ember has this tool called Mirage which is just, like I said, it's so fantastic. Oh yeah, it's also got support for running your application. Not just in your tests, but you can actually run your application with Mirage on and... WIL: Oh right, yeah. CHARLES: And you've got now the most incredible rapid prototyping tool. WIL: Yeah. You don't need to connect to a server to see fake data. CHARLES: Right, right. And we were even talking about this yesterday to a potential client. They're trying to, they've got to present something to investors. And how wonderful is it to just be like, "You know what? We just don't want to invest in all, we don't want to move the inertia, invest the money to generate the force to move the inertia of a backend." Especially in this particular use case, the backend was going to be really, really heavy. WIL: Yeah. 
And there were some questions about the backend that we couldn't address quite yet but we wanted to start working on something that we could show, something demo-able. CHARLES: Right, exactly. And so, Mirage is just so wonderful for that. But again, Mirage is, it's an Ember-specific project. WIL: Right. CHARLES: So, the question was, “How are we going to use that?” WIL: And you actually took this on yourself. CHARLES: Mmhmm. WIL: I just saw this pop-up one day and boom, you converted Ember Mirage to vanilla JavaScript. [Chuckles] CHARLES: So, I did extract it. But the lion's share of the credit goes to the developers of Mirage themselves. Sam Selikoff and the Mirage community, they built Mirage not using much of Ember. There were some utilities that they were using, but mainly things like string helpers to convert between camel-case and dash-case, and using a Broccoli build, or using an Ember CLI build. WIL: Yeah. That was one of the challenges that we came across using Mirage outside of Ember, was how do we autoload this Mirage folder with all this Mirage config and Mirage factories and models, et cetera. CHARLES: Right. The internals were all just straight up JavaScript classes, for the most part. And so, extracting it, it was a lot of work. But 90% of the work was already done. It only took three or four days to do it. WIL: Amazing. CHARLES: Yeah. So, it was actually a really pleasant experience. I was able to swap out all of the Ember string helpers for Lodash. So now, it's good to go. It shares a Git history with Ember CLI Mirage, so it's basically a fork. WIL: Mmhmm. CHARLES: Like, a very heavily patched Ember CLI Mirage. But I keep it up-to-date so that it doesn't... WIL: Good. [Inaudible] CHARLES: Yeah, so I think the last time I merged in from master was about a month ago, something like that. Because it's got all the features that we need but it's not a big deal to rebase or just to merge it on over in. Because yeah, it's a really straightforward set of patches. WIL: Was there any talk with the creators of Ember Mirage about getting this upstream? CHARLES: So, I've talked a little bit with Sam about it. And from what I can tell, his feeling on it is like, “Hey, my goal right now is to focus on this being the best testing and data stubbing platform for Ember. Anything that happens out there, outside of that scope, that's great. And I certainly won't get in the way of it. But I'm pretty maxed out in terms of the open source credits that I have to spend.” WIL: [Chuckles] CHARLES: And there hasn't been much motion there. I'm happy where it is right now. I would like to see it merged into upstream. I think it would be great to have basically this Mirage Server and then have Ember CLI bindings for it. WIL: Yeah, yeah. I was going to say, either another Ember CLI specific package for Mirage or maybe to make it a non-breaking change or something. CHARLES: Yeah. WIL: Just like an Ember-specific entry point. CHARLES: Right, exactly. And I think that's definitely doable, if someone wants to take it on. I will say, we have been using this extracted plain vanilla JavaScript Mirage Server now for what, almost six months? WIL: Yeah. CHARLES: And it really hasn't... WIL: Yeah, I don't think I ran into one issue with that. CHARLES: Yeah. It's solid. It's really, really good. So, kudos to the Mirage team for doing that. And if anybody is interested in using Mirage in their projects, it's definitely there and we'll put it in the show notes. WIL: Yeah. We call it Mirage Server. 
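(For readers who haven't used it, an Ember CLI Mirage setup follows conventions roughly like the sketch below, using Mirage's documented route shorthands and Factory.extend; the specific fields are made up for illustration:)

```javascript
// mirage/config.js -- intercept requests at the network level
export default function() {
  this.get('/users');  // shorthand: respond from the users collection
  this.post('/users'); // shorthand: create a user record
}

// mirage/factories/user.js -- crank out high-fidelity test data
import { Factory } from 'ember-cli-mirage';

export default Factory.extend({
  name(i)  { return `User ${i}`; },
  email(i) { return `user${i}@example.com`; },
});

// In a test, seeding five users is then one line:
//   server.createList('user', 5);
```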
CHARLES: Mirage Server, yeah. So, I don't know. Maybe it's time to reopen that conversation. But it has become a very integral and critical piece of the way that we test our JavaScript applications now. So, what are the foundations of it? We've got, we're using Mirage. We're using these convergent assertions. We're using Mocha, although that's really... WIL: Yeah, we have jQuery and Chai jQuery just to help us out with interacting with the browser as a user would. And I think one of the big challenges with that actually, I just remembered, was triggering changes in React. CHARLES: Yeah. WIL: I think this is pretty specific to React. You might run into problems with Vue. I don't know how to mess with Vue. But in React, at least I think React 15 or 16, one of them, they changed the descriptor of the value property on an element so that they can appropriately interact with it, make changes, watch for changes, et cetera. So, when you set this value property using jQuery or just straight up setting '.value', that change event isn't triggered in React. Your on-change handlers are never called. CHARLES: Wait, they actually update the JavaScript property descriptor of the DOM element. WIL: Yes. CHARLES: Boo. WIL: Yeah. So, there's a nice little helper out there called React Trigger Change. I dug through it and I've stripped some of it down to just be for more modern browsers, more modern React. But there's a lot of good code in there. And basically, it just caches that descriptor, updates the value, triggers the change, and then adds the descriptor back. And that ends up triggering that React element. CHARLES: Okay. [Laughs] WIL: [Chuckles] Yeah, so that calls your handlers and that's how we get around that. CHARLES: Right. There's a few fun little hacks there. But I think it is good to tie that into a larger point, is that the amount of touch that you have with the framework is actually very low. WIL: Yeah. CHARLES: So, the amount of affordances that we've had to make just for React, there's that, that you just mentioned. And is there… there's not much else. We had to write a test harness to mount the app. But that's like... WIL: Yeah, yeah. Our describe application helper is pretty React-specific. CHARLES: Right. WIL: So, you'd have to render it and set up a Mirage server, et cetera. CHARLES: Right. But that's application-specific setup. WIL: Yeah, that's one file. CHARLES: Right. WIL: So, your goal for acceptance tests is you want to be able to have a refactor and your acceptance tests still pass. CHARLES: Right. WIL: So, what if that refactor involves switching libraries? CHARLES: Right. WIL: If you're writing Ember acceptance tests, you're going to have to rewrite all your acceptance tests. CHARLES: Right. WIL: That's a huge downside. CHARLES: Right. WIL: So, with this method of interacting with the actual library very little, we have that one file that sets up our app and then we have that one trigger change helper, we remove those, we can use whatever framework we want underneath this. And our tests would still work. CHARLES: Yeah, exactly. And I think that we actually could theoretically, and honestly I have enough confidence in this style that we're developing the tests now, we could refactor this application to Ember and not have to rewrite our tests. WIL: Yeah, exactly. CHARLES: In fact, the tests would be an aide to do that. WIL: Yeah, and the tests would be faster than Ember testing with that run loop problem. CHARLES: Yeah, exactly. 
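(The descriptor dance Wil describes, cache the property descriptor, set the value, dispatch the event, restore the descriptor, reduces to something like the following. This is a heavily simplified sketch, not the actual react-trigger-change source; the real library handles many more element types and browser quirks, and which native event React's onChange listens for varies by input type:)

```javascript
// Simplified sketch of the technique described above.
function triggerChange(node, value) {
  // React installs its own `value` descriptor on the node so it can
  // track changes; stash it and fall back to the native property.
  const descriptor = Object.getOwnPropertyDescriptor(node, 'value');
  delete node.value;  // expose the prototype's native value setter
  node.value = value; // set the value natively

  // Dispatch the event so React's handlers actually run.
  node.dispatchEvent(new Event('change', { bubbles: true }));

  // Put React's descriptor back so its change tracking still works.
  if (descriptor) {
    Object.defineProperty(node, 'value', descriptor);
  }
}
```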
that's really something to think about or to think on, is like, “Wow. You're really at this point completely, not completely, but very loosely coupled to the actual internal library code.” Which is one of the goals of a nice, big acceptance test, is to be able to make major changes, break big bones, and be able to set them and have your acceptance test suite be the bulwark that holds it all together. WIL: Yeah. CHARLES: So, I actually don't know what a bulwark is. WIL: [Laughs] CHARLES: I just know that it's a really strong thing. [Laughter] CHARLES: Maybe we could put that in the show notes. [Laughter] WIL: A link to what that is. CHARLES: [Laughs] So, alright. Well, I'm trying to think if there is anything else that we wanted to mention. Any challenges? Any next things? WIL: So, one of our next steps is something we mentioned that Cypress does, is they wait for elements to exist before they interact with them. And we're actually not doing that in our app currently. And we don't have helpers out there for it yet. CHARLES: Right. WIL: But that's very much the next step. When we go to click an element in our ‘before each', we have these describes that are nested. Say, you have nested describes and you get down three levels into a ‘before each' when you're clicking a button. That button might not exist yet. CHARLES: Right. WIL: And especially since we're using jQuery, if you trigger a change on an empty jQuery element, it's not going to throw an error. It's just not going to tell you that it triggered anything. CHARLES: Right. WIL: So, we get those skips where that button's not getting clicked and we should really be waiting for that button to exist. CHARLES: Right. So, what we've done right now is we're converging on our assertions at the backend of a test. But at the frontend of a test we need to also be converging at some state before we can actually interact with the application. WIL: Right, yeah. CHARLES: So yeah, so that part is missing. And that actually brings up, we are very slowly but nevertheless doing, we're collecting these convergent assertions and convergent helpers in a repository on our GitHub account. We're going to be adding these things so that you can either use them out of the box or use them to make your own testing library. WIL: Yeah. And one of the other next steps that goes along with waiting for the element to exist is when you need to chain convergences. Like, wait for this element to exist and then click it and then wait for this thing to happen before actually running a test. And that presents the problem of our convergences are waiting for that timeout and those timeouts will accumulate. So if you have three chained convergences, that's now a six thousand millisecond timeout as opposed to a two thousand millisecond timeout. CHARLES: Right. WIL: So, one of the next steps is getting that tracking under control so if you chain three convergences together, they're smart about it and they still fail under the two thousand millisecond timeout. CHARLES: Right, right. So yeah, so we're going to be collecting all this stuff that we're learning into some publicly available code. We have a repository set up. I don't know if I want to announce it just yet, because it's really early days. WIL: Yeah. CHARLES: But that definitely is the plan. 
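(Chaining convergences against one shared timeout budget, the improvement discussed above, might look like this sketch; convergeChain is a hypothetical helper built on the convergeOn sketch from earlier, not their eventual published API:)

```javascript
// Run convergences in sequence, but bill them all against a single
// timeout so three chained waits still fail within ~2000ms total.
async function convergeChain(steps, timeout = 2000) {
  const start = Date.now();
  for (const step of steps) {
    const remaining = Math.max(timeout - (Date.now() - start), 0);
    await convergeOn(step, remaining); // each step only gets what's left
  }
}

// e.g. wait for the button, click it, then wait for the result:
it('saves when the button appears and is clicked', function() {
  return convergeChain([
    () => { if (!document.querySelector('#save')) throw new Error('no button yet'); },
    () => document.querySelector('#save').click(),
    () => { if (!document.querySelector('.saved')) throw new Error('not saved yet'); },
  ]);
});
```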
And that way, whether you're using Mocha or whether you're using QUnit or whether you're using Chai or jQuery, you've got these underlying primitives that help you converge on a state, whether that state is to interact with some piece of the DOM or to just assert some observation is made about that state. We'll be continuing on that. But by all means, get in touch if this is something that is of interest to you. Let's make something happen, because it's something that we're pretty excited about. And honestly, it's pretty comfortable living inside the four walls of this test suite. WIL: Yeah. CHARLES: It feels pretty good. WIL: It does, yeah. They're very fast. And some places in the test might need a little reworking, but for the most part all of our tests are very well-written, very readable. And you can just open up a test and know exactly what's going on. CHARLES: Yeah. Alright. Well, I think that about does it for Episode 90. Wow, Episode 90. WIL: Man, coming up on that 100. CHARLES: Yeah. We're going to have to have a birthday cake or something. WIL: Do we celebrate Episode 100 or Episode 104? CHARLES: What's 104? WIL: 104 would be 2 years. CHARLES: Oh really? WIL: Well, I mean 2 years' worth of podcasts. CHARLES: Oh, right. 2 years' worth of podcasts. Yeah. WIL: Yeah, like if you go every week. CHARLES: Maybe we should celebrate a hundred hours or something like that. WIL: Oh yeah. CHARLES: We can add up the thing or celebrate… I don't know, be like, "You've literally wasted 2 years of your life." WIL: [Laughs] CHARLES: "2 weeks of your life listening to the podcast." Anyway, so that's it for Episode 90. And thank you so much, Wil. WIL: Thanks for having me. CHARLES: It's always a pleasure to talk about these topics with you. And as always, if you need to get in touch with us, please reach out to us on Twitter. We are @TheFrontside. Or you can send an email to contact@frontside.io.

Programming By Stealth
PBS 35 of x - HTML Text Input | Introducing ‘Life’

May 19, 2017


In this installment, Bart walks us through a little bit of how he wrote his Test Driven Development tests with QUnit for the Bartificer Link Toolkit. Bart even explains how it helped him find a couple of pretty major bugs in his own code, proving how important this is. Then we make a start on text input in HTML forms, and we move on to formatted subsets of text like numbers, email addresses and so on. Finally, we make a start on what will be an ongoing project. The idea is to combine our understanding of HTML, CSS, JavaScript, jQuery, and QUnit to implement a zero-player game with a really cool computer science back-story.

Programming By Stealth
PBS 34 of X – More JS Testing with QUnit

Apr 30, 2017 · 76:05


In this installment of Bart's Programming By Stealth series, we review our test code using QUnit, and then learn how to use QUnit to test our code within a real browser page. We do that using the API we built together, the Bartificer Link Toolkit, which identifies external links on a web page, makes them open in new tabs, adds the rel=noopener attribute, and adds a cute icon to identify them as external links. As always, Bart's terrific written tutorials and downloadable examples are available at pbs.bartificer.net/...

Programming By Stealth
PBS 33 of x – JS Testing with QUnit

Apr 14, 2017 · 91:16


In this installment of Programming By Stealth, Bart FINALLY lets us start learning Test Driven Development, or TDD. He shows us how to use a free and open source tool called QUnit, made by the fine developers of jQuery, to run our test code. It's something I've been itching to learn more about, ever since listener Jill tipped us off to the concept. It's a really fun episode where everything kind of comes together. Hope you enjoy it as much as I did. As always, Bart's excellent written tutorial for the episode can be found at pbs.bartificer.net/...

The Frontside Podcast
054: The Ember Ecosystem & ember-try with Katie Gengler

Jan 20, 2017 · 37:46


Katie Gengler @katiegengler | GitHub | Code All Day Show Notes: 01:23 - Testing 06:20 - ember-try 14:11 - Add-ons; Ember Observer 17:43 - Scoring and Rating Add-ons 25:25 - Contribution and Funding 27:41 - Code Search 30:59 - Data Visualization 32:27 - Change in the Ember Ecosystem Since Last EmberConf? 34:35 - Code All Day 35:39 - What's Next? Resources: ember-qunit liquid-fire capybara Selenium appraisal emberCLI Bower Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast Episode 54. I am your host, Charles Lowell, and with me is Alex Ford. Today, we're going to be interviewing Katie Gengler. I remember very distinctly the first time that I met Katie; it was actually at the same dinner, I think, that I met Godfrey at EmberConf in 2014. That was just a fantastic conversation that was had around the table and I did not realize how important the people that I was meeting were going to be in my life over the next couple of years. But Katie has gone on to do things like identify a hole in the Ember add-on ecosystem, so she created Ember Observer. There was a huge piece missing from being able to test this framework that spans multiple years and multiple versions and being able to make sure that your tests, especially for add-on authors, run against multiple versions, so she created and maintains Ember-try. She's a part of the EmberCLI core team. She's a principal at Code All Day, which is a software consultancy, and just an all-around fantastic woman. Thank you, Katie, for coming on to the show and talking with us. KATIE: Thanks for having me. CHARLES: One of the things I wanted to start out the conversation with is something that's always struck me about you: there are a lot of people who, when it comes to testing, talk the talk, but you have always struck me as someone who walks the walk. Not just in terms of making sure that your apps have tests in them, or your add-ons have tests in them, but talking to people about testing patterns, and making sure that when there are huge pieces of the ecosystem missing, like Ember-try, they get built. I remember this as something that I struggled with. I was running up against this problem and all of the sudden, here comes Ember-try, and you've been such a huge part of that. I want to know more about kind of your walk with testing and how that permeates so much of what you do, because I think it's very important for people to hear that. KATIE: I got really lucky right out of college. My first job was at a place with the kind of team people think of as mythical: XP-focused developers, so the first thing I was told is everything is test-first, everything is test-driven. I was primarily doing Ruby on Rails at the time but also JavaScript. At the beginning, we didn't have a way to test JavaScript and there were a lot of missteps in the way of testing JavaScript until we came right around to QUnit. I was using QUnit long before Ember even came along. It's been kind of ingrained in my whole career. Michelle as well. Michelle is my partner in Code All Day. We're both very test-focused. I think that's what drew us to start a company together and work together. Every project we're on, we try to write encompassing tests: test-drive everything, whether we're on a new project, an upgrade, or any project we're fixing. We try to write tests as a framework for everything that we're doing so we know whether we're doing something right or not. When it comes to Ember-try, that wasn't entirely my own idea. That was something that Robert Jackson and Edward Faulkner were looking for. 
I remembered the appraisal gem from Ruby. I really enjoyed using it [inaudible] gems that I had written in Rails, so I wanted it to exist for Ember, so I just kind of took it upon myself to do it. It was extracted from Liquid Fire. I had some scripts that would sort of test multiple versions but it was rough. It wasn't as easy as it is today. CHARLES: Yeah, it does speak to a certain philosophy because if you're coming to a problem and it's difficult to test, you often come to a crossroads where you say, "You know what? I have a choice to make here. I can either give up and not write a test, or try and test some subset of it, or I can write the thing that will let me write the test." It seems like you fall more into that second category. What would you say to people who are either new to this idea or new in their careers and they butt up against this problem of not knowing when to give up and when to write the thing to write the test? KATIE: I almost never don't write the test, so if your suspicions are true, I will write something to be able to write the test. But there are times that I'm [inaudible] and sometimes I'm just like, "This is not going to be tested. This is not going to happen." Finding that line is pretty hard but it should be extremely rare. That's not usually the case when people come to me; I'll work with a client and they're telling me, "No, it's too hard to write the tests." A lot of times, it's not only how you write the test, setting up the test and learning how to write a test. It's the code you're trying to test that could be the problem. If you have very complicated, very side-effect-driven code, it's very hard to write the easier tests, which might be Ember acceptance tests. You're really kind of on a level of integration, because you do have a little bit of knowledge of what's going on, and you have to be within the framework of what Ember tests want you to do, which is that async is all completed by the time you want to have these assertions in the test. That means looking at different tools, going back to something like Capybara or Selenium, and having some sort of test around what you're doing in order to replace the code that makes it hard to test at a lower level to begin with. I think a lot of people are just missing the framework for knowing whether their code is intractable or not, not necessarily the testing and the guides that have you write tests. I think most people could go through a tutorial and do tests for a little to-do MVC app perfectly fine. But that's easy when you [inaudible] size of the equation, so if you're already struggling with code and you're not quite sure, even in Ember, it can seem very, very hard to write tests for that. I think that's true with Rails as well. I think people that begin in Rails don't understand what they're going to be testing, especially if they have an existing app that they're trying to add tests to, but fortunately, Rails long ago kind of got into everybody's heads that your tests go with what you're doing. It's just an ingrained part of the Rails community. Hopefully, that will become how it is with Ember. But a lot of people are kind of slowly bringing their apps to Ember, so they really have a lot of JavaScript and they don't necessarily know what to do, or the JavaScript they've written has always been written with jQuery and a little bit [inaudible]. They don't understand how to test that. ALEX: How does Ember-try help with that? Actually, I want to roll back and talk about what Ember-try is and how it fits into testing. 
You mentioned the appraisal gem, which I'm not familiar with. I haven't done much Rails in my life, or Ruby. But can we talk about what Ember-try is? KATIE: Sure. Ember-try, at its base, lets you run different scenarios with your tests. At some point, I would've said it lets you run different scenarios of dependencies for your tests, so primarily changing your Ember version, and that's pretty much what add-ons do, but a lot of people are using it for scenarios that are completely outside of dependencies: different environment variables, different browsers. It's just having one place to have all these scenarios, where if you just put them in travis.yml, like your CI configuration, you wouldn't as easily be able to run them locally. But with Ember-try, you can do that locally. I found that it's kind of gone beyond my intentions, expanded beyond dependencies. Primarily, it lets you run your tests in your application with different configurations. I could see running it with different feature flags, which would be something interesting to do, if that's something you use. Primarily, it just lets you try different versions, and the appraisal gem lets you run tests with different gem sets, so you have a different Gemfile for each scenario you might have. That was definitely dependency-focused. CHARLES: That sounds really cool. It almost sounds like you could even get into some sort of generative testing, where you're kind of not specifying the scenarios upfront but having some sort of mechanism to generate those scenarios, so you can try and surface bugs that would only occur outside of what you're explicitly testing for, which is kind of randomly choosing different versions, environment variables, feature flags, dependencies and stuff like that. Have you thought of that? KATIE: Randomization [inaudible] but Ember-try really does have a kind of general way of working now and that's where we're heading with that. If you want to, especially for add-ons, you can specify this versionCompatibility keyword in your package.json and give it a semver string for the Ember version, and it will generate the scenarios for you and test all those versions. These semver strings are pretty powerful, so you can say specifically which versions you want, or you can do a range of versions and it will take the latest patch release in each minor, at least, so you don't want to be too crazy and test each of those for that add-on. But I can definitely see something random like that being really cool. Some testing thing that just tries to do random input into all of your inputs on a page. I've really been meaning to try that out. Sounds like [inaudible]. CHARLES: Yeah, just like to try and break it. I remember a world before Ember-try and I can't speak highly enough about it, given how many bugs it has caught in the add-ons that we maintain, because you're always working on the latest, hottest, greatest version of Ember and you're not thinking about two point releases back. There might be, not a deal breaker, but some subtle bug that surfaces and breaks your tests, and the coverage has just gotten so much better. In fact, I think it's brilliant that it is bundled with EmberCLI when you are building an add-on. It's like, now you get it for free. It's one of those things where it's hard to imagine what it was like, even though we lived it. KATIE: And it was less than a year ago. [Laughter] KATIE: Ember-try existed for more than a year before it was bundled with EmberCLI; it's only been bundled since last EmberConf or so. 
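(For context, an ember-try configuration of that era lived in config/ember-try.js and listed scenarios to run with `ember try:each`. The sketch below follows the then-current add-on blueprint; the scenario names and versions are examples. The versionCompatibility shorthand Katie mentions goes in package.json under the "ember-addon" key, e.g. "versionCompatibility": { "ember": ">=2.0.0 <3.0.0" }, and ember-try generates scenarios from it.)

```javascript
// config/ember-try.js -- each scenario swaps in different dependencies
module.exports = {
  scenarios: [
    {
      name: 'default',
      bower: { dependencies: {} } // whatever bower.json already says
    },
    {
      name: 'ember-release',
      bower: {
        dependencies: { 'ember': 'components/ember#release' },
        resolutions: { 'ember': 'release' }
      }
    },
    {
      name: 'ember-beta',
      bower: {
        dependencies: { 'ember': 'components/ember#beta' },
        resolutions: { 'ember': 'beta' }
      }
    }
  ]
};
```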
CHARLES: Yeah, but it's absolutely a critical piece of the infrastructure now. KATIE: I'm glad it caught bugs for you. I don't think I've actually caught a bug with it. CHARLES: Really? KATIE: Yeah, but I don't do a lot of Ember add-ons. I do a lot of EmberCLI-ish add-ons, and it can't change versions of EmberCLI. Not yet; we're working on that. I get some weird npm errors when I've tried it but I haven't dug into it much yet. CHARLES: I don't want to dig too much into the mechanics but even when I first heard about it, I was like, "How does that even work?" Just replacing all the dependencies and having a separate node modules directory and bower, and I'm like, "Man, there's so many moving parts." It was one of those things that was so ambitious, I didn't even think it was possible. Or I didn't even think about writing it myself or whatever. It's one of those like, "Wow, okay. It can be done." ALEX: This exists now. CHARLES: Yeah. ALEX: Add-on authors are accountable now for making their add-ons work with revisions or versions a few points back, like you said, but it makes it so easy. The accountability is hardly accountability when you're running Ember-try. It's really amazing. KATIE: What I'm laughing about is that what it actually does is not very sophisticated or crazy at all. For instance, for bower, it moves your existing bower_components directory to a placeholder, changes the bower.json, runs install, and then after the scenario, it puts everything back. CHARLES: But I don't know, it sounds so hard. It's intimidating. You got all this state and you got to make sure you put it all back. What do you do if something hits an error and aborts midway? I'm sure you had to think about and deal with all that stuff at some point. ALEX: I kill my tests all the time in Ember-try and I was like, "Oops." I forgot I shouldn't do that, just like for this. KATIE: Yeah, it doesn't recover so well. It's pretty hard to do things on process exit in node correctly, at least, and I don't think I've gotten it quite right. But there is a cleanup command. Unfortunately, there's the way it interacts with EmberCLI's dependency checker: when you run an Ember command, Ember checks to make sure all your dependencies are installed. If you still have the different bower.json and install hasn't run, you have to run install before you can run the cleanup command, which is kind of a drag. CHARLES: I have one final question about Ember-try. Have you given any thought to how this might be extracted and made more generally applicable to the greater JavaScript ecosystem? Because I see this as something where Ember certainly was a trailblazer. Some of these ideas came from Rails and other places, but this is going to be more generally applicable, so had you given thought to extracting that? KATIE: Yeah, some of [inaudible] since we first did it, because we realized very early on that it doesn't really depend on EmberCLI. It's really only using it as a command line arguments parser, which doesn't seem too important. But there are some assumptions we get to make. For it being an Ember app, we know how an EmberCLI app is structured. Some of those assumptions, I wouldn't really know with the greater node community, and I've gathered those assumptions might not be possible at all because they don't have the standards we have with EmberCLI. It's generated by EmberCLI. There are generally certain things that are in place in Ember [inaudible] works for [inaudible] so there's no part of it that couldn't be extracted. 
But I worry about some assumptions -- like node modules always being in the directory they appear to be in, because you can have linked node modules above it. EmberCLI usually doesn't support that, but other places obviously have to. I realized that it could definitely happen, but I'm not so sure that I'd want to personally support it, because it's a bit of a time commitment. CHARLES: Right. Maybe if someone from the outside wanted to step in voluntarily, you might work with them, but you're not going to personally champion that cause. KATIE: Definitely. I think it would be really cool, and I do think it will end up having its own [inaudible] parser eventually, just to be able to do things like different EmberCLI versions. As long as it's not part of EmberCLI, I think that would be less confusing, though. In theory, that could still be done within EmberCLI, but I'm not clear on that. I've had people talk to me about it and I haven't fully processed it yet. CHARLES: Right. Alex, you mentioned something earlier that I had not thought about: technologies like Ember-try keep the add-on community accountable and keep it healthy, by making sure that add-ons are working across a multiplicity of Ember versions and working in conjunction with other add-ons that might have version ranges. Katie, you've been a critical part of that effort. But there's also something else that you've been a critical part of, that you built from the ground up, and that is a different way of keeping add-ons accountable -- but perhaps an even more valuable way, more of a social engineering way -- and that's Ember Observer. Maybe we can talk about Ember Observer a little bit: what it is, and what gave you the insight that this is something that needed to be built, so that you stepped forward and built it? KATIE: I'm definitely going to refer back to the Rails community again. I'm a big fan of Ruby Toolbox. Whenever I needed a gem, I would go there and see what was available in that category. It has various metrics: something like popularity, the number of GitHub stars, the last time it was updated. You can see a lot of the inspiration for Ember Observer in there -- so maybe I should step back and explain Ember Observer. Ember Observer is a listing of all of the add-ons in the Ember community. Anything that has the ember-addon keyword will show up there. We pull it all from npm, and we show you all that kind of information: the last update, the number of GitHub commits, the number of stars, the number of contributors. We put all of that information together with a manual review to put a score on each add-on. We categorize them as well, so if you look at a category -- say you're looking at a category for doing models -- you would see all the different model add-ons, and you'd be able to look at them, compare them, and decide which one to use. Or if you're thinking of building something, you can go in there and be like, "This already exists. Maybe I should just contribute to an existing thing." What gave me the idea for it is that I was looking at Ember Addons, which just shows you the most recently published add-ons for that ember-addon keyword each day, and every time I clicked on these add-ons I'd go, "They did the same thing," and it just seemed like such a waste [inaudible].
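The ember-addon keyword she mentions is literally an npm keyword: EmberCLI puts it in a generated add-on's package.json, and Ember Observer indexes everything on npm that carries it. Roughly (the package name here is a hypothetical placeholder):

    {
      "name": "my-hypothetical-addon",
      "keywords": [
        "ember-addon"
      ]
    }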
People were creating the same things. Then I'd click into one and go, "Why did I bother clicking into this? It doesn't have anything. It's just an empty add-on" -- people pushing add-ons just to try the whole thing out. I thought it'd be nice if something filtered that out, and I happened to have some time, so I got started and dragged my husband, Phil, into it. He's also an Ember and Rails developer, so that's pretty convenient, and my friend Lew too. Now Michelle works on it a little bit as well. That's what drove us to build it, and it's been pretty cool. I like looking at all the add-ons as they come out anyway; I feel like it's not any actual work for me. It's quicker than my email each day to look at the new add-ons. ALEX: How many new add-ons are published every day on average? KATIE: On average, probably four to six, but it varies widely. If there's a holiday, you'll get like 20 add-ons because people have time off. And when people are feeling the grind, you'll notice the add-ons drop off commensurately too. They tend to come in waves in the same kind of week. ALEX: You mentioned that an add-on gets a score. Can you explain that score and how you rate add-ons? KATIE: Sure. The score is mostly driven by details about the add-ons. There are a number of factors that go into it, and it's out of 10 points. Five of them are purely mechanical things: whether or not there have been more than two Git commits in the last three months, whether or not there's been a release in the last three months, whether or not they're in the top 10% of npm downloads for add-ons, whether they're in the top 10% of GitHub stars for add-ons. ALEX: I know that I'm a very competitive person, and it also applies to software, not just to sports or other types of competition. I remember a moment -- I'm an add-on author, and my add-on had a 9 out of 10. I was just about to push some code and cut a release, and even before that happened, it went to a ten. The amount of satisfaction I got was kind of ridiculous. But I like it. I like the scoring system, not just for myself but also for helping me discover add-ons and pick the ones that might be right for me. I'll check out basically any add-on that might fit the description -- as long as there's a readme, I'll go check it out -- but the score still helps, along with the categorization. CHARLES: Correct me if I'm wrong, but I believe you can achieve an 8 out of 10 without it being a popularity contest. There are certain concrete steps you can take, like making sure that you have tests and a readme -- that's a thing of substance. I don't remember what all the criteria are, but you can get a high score without getting into how many stars you have or whether you're in the top 10%. I think that's awesome. But it does mean that if I see an add-on with a five or something like that, they're not taking those concrete steps, or it might not be as well maintained, and that's definitely something to take into account. I'm curious whether there are different parameters you've thought of, tweaks you've considered making to the system. And this gets to the second part of that: what things did you consider and throw out of hand as maybe not good ways to rate add-ons? KATIE: We haven't thought about everything. I don't particularly like the popularity aspect of it, but it did feel necessary to include it in some way.
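To make that rubric concrete, here is a sketch of the mechanical half of the score. It encodes only the criteria mentioned in this conversation -- it is not Ember Observer's actual algorithm, and the field and helper names are hypothetical:

    // A sketch of the mechanical points described above; field names,
    // helpers, and percentile inputs are hypothetical.
    function mechanicalScore(addon, percentiles) {
      const THREE_MONTHS = 90 * 24 * 60 * 60 * 1000; // in milliseconds
      const now = Date.now();
      let points = 0;
      // More than two Git commits in the last three months.
      if (addon.commitsSince(now - THREE_MONTHS) > 2) points += 1;
      // A release in the last three months.
      if (now - addon.lastReleaseAt < THREE_MONTHS) points += 1;
      // Top 10% of npm downloads among add-ons.
      if (addon.downloads >= percentiles.downloads90th) points += 1;
      // Top 10% of GitHub stars among add-ons.
      if (addon.stars >= percentiles.stars90th) points += 1;
      // More than one contributor (the "bus factor" point mentioned below).
      if (addon.contributors > 1) points += 1;
      return points; // manual review (meaningful tests, readme, CI) adds the rest
    }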
The stars in theory are representative of interest -- not exactly popularity, though it probably gets at popularity as well -- and downloads are in theory representative of popularity. But the problem with downloads, and I've found this happening more and more frequently, is that when large companies start publishing their own add-ons, they have a lot of developers, so the downloads go through the roof just from their own people, and I have no way of knowing if anybody else is using the add-on. CHARLES: Probably their continuous integration with containers, right? If it's running on Travis or Circle, it's just sitting there spinning, pumping up the download numbers. KATIE: Yeah, and that frustrates me quite a bit. But I haven't found another thing that really represents popularity. Unfortunately, with npm you can get the download counts but you can't tell where they came from; there's no way to do that. I simply would like to see the things that are genuinely popular rate higher than they currently do. Like you said, eight of the ten points you can get without popularity coming in to affect it -- though you do need collaboration with at least one other person to get to eight. You can get seven by yourself; if you only have one contributor, you don't get that point, because it's trying to be representative of a sort of bus factor. It's not truly that, since you can get the point with just one commit from somebody else. CHARLES: Of all the pieces, I think that's totally fair. KATIE: Yeah, there are definitely a few other things I have in mind to bring into the metrics, but we're not quite there yet. I need to entirely refactor how the score is given so it's not exactly out of 10. The idea is to have some questions and some points that are relevant only to certain categories of add-ons. Whether or not an add-on is testing against different versions of Ember matters for an Ember add-on, but not for one that only extends EmberCLI. And whether or not there's a recent release might not matter if it's kind of a one-off wrapper for a sass plugin, or something more on the build-tool side that doesn't change very often. CHARLES: Yeah, we've had that happen a couple of times, where we've got a component that just wraps a type of input. Until the HTML spec changes or a major API change happens in Ember, there's no need to change it. I can definitely see that. How do you mark something as changing infrequently? Is it just that the add-on author says it's done? Do you give them a bigger window or something like that? KATIE: There are probably certain categories that fall under that. For the input one, I think if it's doing something with Ember, producing components, it probably wants to be upgrading with EmberCLI at least every three months, and in that case it's probably fair to require an update within that period. But for some of the things that are closer to Broccoli than they are to Ember add-ons, it makes sense not to have that requirement. Maybe that's not the [inaudible] exact example of the kind of questions that [inaudible]. ALEX: An Ember add-on was the first time that I gave back to the open source community. It was my first open source project, and Ember Observer really helped me along the way to see what open source best practice looks like. I thought that was really cool. Now it sounds like, with some of the point totals, you're leaning towards Ember best practices to help Ember add-on authors along that way.
I think that's really awesome and very, very useful. I would not like to see what the Ember add-on ecosystem would look like without Katie. It would be a very different place. KATIE: Thanks. I'm glad it's had some effect there and is reaching add-on authors. I actually didn't originally think about that when I was first building it -- I was really hoping to help the consumers of add-ons -- but it really has driven people to find existing add-ons instead of building duplicates, because they contribute to existing ones. And it drives them through the score because, as you said, people get very competitive. I really didn't anticipate what kind of drive the score would create, because to me I'm like, "Somebody else is scoring me? How dare you?" [Laughter] KATIE: I have had people say that to me: "How dare you score me? How dare you score my add-ons?" Well, it's mostly computerized. Even though the review is manual, the only part of it with any real leeway is "are there meaningful tests?" That's really the only thing, when I go through an add-on, that leaves anything to the judgment of the person doing the review. For the readme we have kind of a rubric, so that point goes to you if you have anything in there other than the default EmberCLI readme. Whether or not there's a build is based entirely on whether or not there's a CI badge in the readme -- that's where the reviewer goes to look for them [inaudible]. We hope to automate that so I don't have to keep looking for those. A lot of add-ons turned out to have builds when they didn't have any meaningful tests -- they just had the default tests -- so that's kind of confusing. CHARLES: What do you do in that situation? You actually manually review it, so that add-on would not get the point for tests? KATIE: No, they don't get the point for tests, and if they don't get the point for tests, I put 'N/A' for whether or not they have a build, so it doesn't apply. A build doesn't mean anything to me if they haven't written their own tests. CHARLES: Right, that makes sense. A lot of it is automated, but it still sounds like it consumes some of your time, some of Phil's time, some of Michelle's time. I guess my question is: do you accept donations, or is there a way that people can contribute? Because I see this as part of the critical infrastructure of the community at this point, and there might be some people out there who think, "Maybe I could help in some way." Is there a way that people can help? If so, I'd love to hear about it. KATIE: We don't have any sort of donation or anything like that. I mean, we should. We consider it primarily just part of our open source work, part of our contribution to the community, because we also get a great deal of use out of the community. Fortunately, it's not very expensive to run -- it's only a $20-a-month VPS. Other than time, it's not really consuming very many resources. That may change over time: the number of hits is increasing, and we're doing some more resource-intensive things like Code Search, and we're running Ember-try scenarios for the top 100 add-ons to generate compatibility tables. That hasn't been the most reliable. Think about trying to do an npm install times 100 add-ons, times every day, times the different Ember dependency settings -- it's been very much a game of whack-a-mole. But for now, it's not bad. We probably should think about some sort of donation setup, though -- maybe something that lays out the exact numerical cost of running something like Ember Observer.
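That nightly compatibility run amounts to a loop over add-ons and scenarios. A minimal sketch, assuming ember-try's command-line interface (the exact subcommand has varied across versions) and hypothetical helpers for the add-on list and reporting:

    // A minimal sketch of a nightly compatibility-table run; the add-on list,
    // scenario names, and result handling are hypothetical.
    const { execSync } = require('child_process');

    function runCompatibilityTable(addons, scenarios) {
      const results = [];
      for (const addon of addons) {
        execSync('npm install', { cwd: addon.dir }); // x100 add-ons, every night
        for (const scenario of scenarios) {
          let passed = true;
          try {
            // ember-try installs the scenario's dependencies, runs the
            // command, and restores the original dependencies afterwards.
            execSync('ember try:one ' + scenario + ' --- ember test', { cwd: addon.dir });
          } catch (err) {
            passed = false; // recorded for the failure dashboard
          }
          results.push({ addon: addon.name, scenario: scenario, passed: passed });
        }
      }
      return results; // rendered as the compatibility table on the add-on page
    }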
The API is getting about 130,000 hits a month, but that's the API, so that's some number of requests per person. [inaudible] tells me something like 12,000 visitors each month. CHARLES: Does Ember Observer have an API? Are there any third-party apps that you know about that people have built on top of Ember Observer? KATIE: None that have been made public. I know a couple of private companies seem to be hitting the API, but it's not a public API -- it's really not public yet. I'm literally in the process of switching over to JSON API, and at some point I'll make some portion of that a public API, but it's pretty hard to support that at the same time: we change the Ember Observer server pretty frequently and do whatever migrations we need to do. Ember Addons does get all the scores from us from an API endpoint. CHARLES: I actually wasn't aware. I remember the announcement of Code Search, but how do you see it being used? What's the primary use case for Code Search on Ember Observer? KATIE: I think the primary use case is when you're looking for how to use a feature. If you're creating an add-on and you want to know how to use certain hooks, like [inaudible] or something like that, you can do a Code Search for it and see what other add-ons are doing. It's only searching Ember add-ons that have their repository set, so you'll only find Ember results, which is nicer compared to searching GitHub. The other use case I've found is more for the core team: to see who is using which APIs and whether they can deprecate something or change something, or whether something has become widely used. We're pretty excited about that possibility, since it's never been searchable like that before. ALEX: That is brilliant. CHARLES: Yeah, that's fantastic. The other question that I had was about running Ember-try scenarios on the top 100 add-ons, which you're doing now. Is that reflected in the Ember Observer interface? Is it an experimental feature, or is it reflected all the way through, so if I go to Ember Observer today, I'll see that information based on those computations? KATIE: It's right in the Ember Observer interface. It's only for the top 100 add-ons currently, but hopefully we'll expand that to all add-ons. It's been there for a few months, but maybe it's not easy to notice: on those top 100 add-ons, it's in the right sidebar, with a list of the scenarios we ran and whether or not each one passed. The top 100 are linked right on the main page of Ember Observer, so you can see them up front. CHARLES: How do you get that information back to the author of that top add-on? KATIE: We haven't actually done that; it's just on Ember Observer. It's meant more for consumers, so they can see that an add-on is compatible with all these versions. We're not using their scenarios. We're using our own scenarios, saying Ember from this version to this version -- unless they have specified that versionCompatibility thing, in which case we'll use those auto-generated scenarios. This might get harder for add-ons with complex scenarios, where something else needs to vary along with the Ember version, like Ember Data -- or maybe they're using Liquid Fire, and Liquid Fire has three different versions, each used with a different Ember version. For those, we'll just say we were unable to test them. But hopefully this still provides some useful information for most add-ons.
A lot of add-ons' builds won't run unless someone commits, whereas this runs every night, so when a new Ember version is released, we'll see if it fails. On the other side of it, we have a dashboard where we can see which add-ons failed, and maybe see if a new commit to something like Ember or EmberCLI -- one of the main things -- broke a bunch of add-ons. CHARLES: I know that right after we finish this podcast, I'm going to go check up on all the add-ons that we maintain and make sure everything is copacetic. If you guys see me take off my headphones and dash out the door, you know where I'm going. KATIE: Got you. ALEX: I just have a further comment: I'm excited for the public API of Ember Observer, because I've been thinking a lot about data visualization lately. I think it would be a really cool tool for a deprecated API -- like one of those bubble charts, where the area covered is the usage of this deprecated API -- I'm doing a bad job of explaining this -- just visualizing the most-used deprecated API methods. I think that would be really interesting to see. CHARLES: Right, seeing how they spread across the add-ons. ALEX: Yeah, or just all add-ons in general. KATIE: I am most nervous about a public API for Code Search, though, because it's a little bit resource-intensive, so I'm freaking out a little bit about the potential of a public API for it. But the Ember Observer client is open source; if you want to add anything to the app, that part I consider public. Adding to that, I really do want to figure out some way to have a performance budget for when people add to the client, because sometimes I'll get people who want to add features and I'm like, "That's just going to bog all of it down. It's going to be a problem for all of Ember Observer, it's going to make everything slower, and it's already a little slow." Fortunately, I have kind of a beta version of the JSON API running, and it's going to be much faster, thank God. I probably shouldn't have said that either. CHARLES: Definitely we want to get that donation bin set up before the API goes public. Okay, let's turn to the internet now and answer some of the questions that got tweeted in. We've got a question from Jonathan Jackson, and he wanted to ask you, "Where have you seen the most change in the Ember ecosystem since last EmberConf?" -- which was March of 2016. KATIE: There are definitely fewer add-ons being published, but the add-ons that are being published are, let's say, more grown-up things. We've got -- I don't know if engines was before or after March; I have no idea, time is one of those things -- engines, and then people doing things related to FastBoot, so things are coming from more collaborative efforts, I think. This is just my gut feeling; I have no data on this. It's a gut feeling from looking at add-ons. And there are a lot of add-ons coming out that are specific to a particular company. I hope that's representative of more companies getting into Ember, and hopefully they'll make things more generic and share them back. The other problem with popularity is what I talked about before, where a big company gets itself into the top 100 list, probably with just its own employees -- that only appeared over the summer. I tried a few different ways to mutate the algorithm to get them out of there, but there was no solution. There are many fewer novel things.
Very rarely do I look at Ember add-ons and go, "Oh, that's great," but when I do, it's something very exciting. CHARLES: Right, so there's a level of maturity that we're starting to see. And I actually think there's something in the story, too, of there now being larger companies with big, big code bases that have lots of fan-out in their dependency trees, which just weren't there before. KATIE: Definitely. I don't think some of those large companies were there before, but I think some of the largest companies are probably keeping most of their add-ons private, so it's kind of the mid-range of companies that are big enough to donate things or willing to open source them. A few of these companies have a lot of add-ons now, and a lot of them are very similar to things that already existed, so you're going to be like, "I don't know why I'd use this," but they obviously made changes for some reason. CHARLES: The other thing that I want to talk to you about, before we wrap up, is that you're a partner in Code All Day. What kind of business is that? What is it you do? And what's it like running your company while, at the same time, you're managing these large pieces of the Ember ecosystem? KATIE: Code All Day is very small: it's just me and Michelle. It's a consulting company; we partnered together after we left a startup and decided to do consulting together. We primarily do Ember projects, also some Rails, and we try to work together, and we love test-driven things. It's pretty loose, and it's easy to run since it's just a partnership -- we don't have any employees. Ember Observer does take up a lot of our time, and we had an idea that it might help us get clients. I suppose it helps our credibility, but it hasn't really been great for leads. Fortunately, that hasn't been a big problem for us. We really enjoy how we spend our time, and we enjoy the flexibility that consulting gives us -- and that flexibility is what keeps these things running. CHARLES: All right-y. Well, are there any kind of skunkworks, stealth, secret things you've got brewing in the lab, crazy ideas that you might be ready to give us a sneak preview of, for inquiring minds that may want to know? KATIE: What I'm really [inaudible] is redoing Ember Observer with JSON API. Currently it's using ActiveModel serializers, which is a kind of custom API for Rails, and [inaudible] moving to something called JSONAPI::Resources, so that will make the performance of Ember Observer much better. That's pretty much my primary focus at the moment. I don't really have any big skunkworks, exciting projects -- just far-off ideas that hopefully will materialize into some sort of skunkworks project. CHARLES: All right. Well, fantastic. I want to say thank you, Katie, for coming on the show. You are kind of a hero of mine. I think a lot of people come to our community and ask, "Where's the value in being a member of this community, in terms of the things that I can take out of it? What does it provide for me?" And you demonstrate, on a day-to-day basis, asking what you can do for your community rather than what your community can do for you, to paraphrase JFK. I think you live that every day, so I look up to you very much in that. Thank you for being such a [inaudible] of the community which I'm a part of, and thank you for coming on the show.
KATIE: I'm very happy to have been here -- thank you. I use a lot of your add-ons, and really, the community has given so much to me, which is why I want to participate in it. It's a really great group of people. CHARLES: Yep, all right-y. Well, bye everybody. ALEX: Bye.

The Web Platform Podcast
119 Testing Testing Testing

The Web Platform Podcast

Play Episode Listen Later Dec 22, 2016 48:34


Danny, Erik, & Mark discuss their experiences and current challenges in testing code across their applications and services. Resources Panic - https://github.com/gundb/panic-server Aphyr, Kyle Kingsbury, testing network partitions - https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions Lighthouse Github - https://github.com/GoogleChrome/lighthouse Chrome Plugin - https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk?hl=en Jeff Posnick at PWA Conference - https://developers.google.com/web/shows/pwa-devsummit/amsterdam-2016/to-the-lighthouse-progressive-web-app-summit-2016 Benchmarkjs - https://benchmarkjs.com/ Mocha - https://mochajs.org/ Tape - https://github.com/substack/tape Qunit - https://qunitjs.com/ Jasmine - https://jasmine.github.io/ Sinon - http://sinonjs.org/ Enzyme - https://github.com/airbnb/enzyme Chai - http://chaijs.com/ Selenium - http://www.seleniumhq.org/projects/webdriver/ PTSD (tech talk) - https://youtu.be/BEqH-oZ4UXI JSPerf replacement: http://jsbench.github.io/ Travis (automate tests on push to GitHub) - https://travis-ci.org/

Front End Happy Hour
Episode 006 - Unit testing and whiskey tasting

Front End Happy Hour

Play Episode Listen Later Apr 25, 2016 51:03


We’ve all heard unit testing is good, but how do you get started writing unit tests? In this episode of Front End Happy Hour we share our experiences and advice writing unit tests. We discuss why it’s important and beneficial to have unit tests in your JavaScript. We share how we’ve approached unit tests and what a good unit test looks like. We also talk about the various tools and frameworks available to get your code properly tested. Items mentioned in the episode: Selenium, Black-box testing, White-box testing, Ember guides, Mocha, Jasmine, QUnit, Tape, Jest, Webpack, 5 Questions Every Unit Test Must Answer, Ember CLI, React CLI, Karma, What is the difference between a test runner, testing framework, assertion library, and a testing plugin?, Ember Guides introduction to Unit Testing Panelists: Ryan Burgess - @burgessdryan Augustus Yuan - @augburto Jem Young - @JemYoung Derrick Showers - @derrickshowers Picks: Ryan Burgess - Caffeine for Mac Ryan Burgess - Odesza Augustus Yuan - Google Doodles Augustus Yuan - OSSU Computer Science curriculum Augustus Yuan - Mura Masa - What If I Go? Augustus Yuan - teamLab: Living Digital Space and Future Parks Jem Young - Flume - the mixtape Jem Young - Programming Sucks Jem Young - Hype Machine Derrick Showers - Google Calendar goals Derrick Showers - $13 bluetooth headset

The Web Platform Podcast
69: Testing Front End Code

The Web Platform Podcast

Play Episode Listen Later Nov 3, 2015 75:41


Summary Oren Rubin (@Shexman) goes through why it's important to not only test the back-end code of our applications but also to test our Front End code, the integration points, and the full user experience. Oren also goes through reasons why you would test, what should be tested, best practices, and when you should not test. O'Reilly Media Partner Discounts The Web Platform Podcast is a proud O'Reilly Media Partner. As such, one of the benefits we provide our listeners is special discounts, such as 50% off ebooks and 40% off printed material. This includes but is not limited to books on web technologies. Your discount code is PCBW, so head over to http://www.oreilly.com/ right now to get all your favorite tech books at much lower prices. Your Latest O'Reilly Discounts 20% Discount to FluentConf http://conferences.oreilly.com/fluent-javascript-html-ca/ Call for proposals is done, registration is open, and O'Reilly Fluent Conf is back in just a few months. Fluent, The Web Platform conference, will be held in San Francisco, CA on March 7-10 2016. Get practical training in JavaScript, HTML5, CSS and the latest web development technologies and frameworks. The Web Platform Podcast listeners receive a 20% discount when registering for the conference. Make sure you use the promotional code PCWPP20 to receive your discount. Free eBook: Data-Informed Product Design http://www.oreilly.com/pub/cpc/1220 Designers must understand user needs to create any product. But what type of data should you look at? In her new book, Data-Informed Product Design, Pamela Pavliscak outlines a way to use data of all kinds to understand the relationship between people and technology. Generally speaking, big data is quantitative; it gives you the what, where, and when, while “thick data” provides the qualitative perspective—the how and the why. Up until now, there hasn't been much information on how to combine quantitative big data with qualitative thick data. That's where this report can help. If you're involved in any aspect of product design, this is indispensable reading. It's useful, and we're pleased to offer it to you, for free! Get the free ebook now. Resources Unit Tests QUnit - Basic unit testing https://qunitjs.com/ Sinon - Unit test spies, stubs and mocks library http://sinonjs.org/ Jasmine - all-in-one unit testing http://jasmine.github.io/ Karma - front end test runner http://karma-runner.github.io/0.13/index.html istanbul - https://github.com/gotwarlost/istanbul End to end testing Selenium WebDriver - http://www.seleniumhq.org/ Protractor - https://angular.github.io/protractor/#/ phantomjs - headless WebKit “browser” with JS API http://phantomjs.org/ Testim.io - a new way to write end-to-end tests using dynamic locators https://testim.io/ mocha - https://mochajs.org/ Integrated tests are a scam - http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/ Selenium Grid - https://github.com/SeleniumHQ/selenium/wiki/Grid2 Visual validations Open source: Huxley, Wraith Commercial: applitools, percy.io Sauce Labs - Browser in the cloud service provider https://saucelabs.com/ Panelists Erik Isaksen (@eisaksen) - Front End Development Lead at Deloitte Digital & Google Developer Expert in Web Technologies Justin Ribeiro (@justinribeiro) - Wearables & HTML5 Google Developer Expert & Partner at Stickman Ventures or random person who keeps finding our Hangout link

Ember Weekend
Episode 12: The Curiously Strong Weekend

Ember Weekend

Play Episode Listen Later Jun 8, 2015 19:25


Chase and Jonathan talk about Ember Simple Auth + Mirage, QUnit helpers, an RFC by Tom Dale, Breadcrumbs and blog posts, component actions, and the growing Ember community.

Enterprise Java Newscast
Episode 21 - Aug 2014

Enterprise Java Newscast

Play Episode Listen Later Aug 29, 2014 70:18


Kito, Ian, and Daniel change up the format to involve more discussion. They discuss the new Java EE JSRs (JSF, JMS, Servlet, CDI, Java API for JSON Binding, etc.) plus entirely new proposals such as MVC and Java API for JSON Binding. Other topics include RichFaces, PrimeFaces, Apache TomEE, Delta Spike, QUnit, Angular.js, Hibernate OGM, Spring Boot, JBoss Errai, and more, plus upcoming conferences. If you have feedback on the new format, please let us know. UI Tier Mojarra 2.2.8 released PrimeFaces 5.0.5, 4.0.18, 3.5.28 released RichFaces 4.5.0.Alpha3 released ICEfaces EE 3.3.0.GA_P02 released JSF 2.3 JSR proposal Servlet 4.0 Java API for JSON Binding Spring Framework 4.1 -- Spring MVC Improvements Java EE MVC 1.0 JSR Middle Tier Apache TomEE JAX-RS / Arquillian starter project Delta Spike 1.0 Released Contexts and Dependency Injection for Java 2.0 It's time to begin JMS 2.1. Persistence Tier Hibernate OGM nominated for Duke’s Choice Award Apache Hadoop 2.5.0 is released Apache Sqoop 1.4.5 released Misc Jena 2.12.0 released Spring XD 1.0 GA Released Spring Boot 1.1.5 released Errai 3.1.0.CR1, 3.0.2.Final and 2.4.5.Final released! Events JavaZone - Oslo, Norway - September 9th-11th JavaOne - San Francisco, CA September 28 – October 2, 2014 Devoxx - Antwerp, Belgium - November 10th-14th No Fluff Just Stuff Atlanta, GA Sep 12 - 14 Boston, MA Sep 19 - 21 Minneapolis, MN Oct 3 - 4

/dev/hell
Episode 30: It's Episode 30, you guys

/dev/hell

Play Episode Listen Later Mar 29, 2013


In our action-packed 30th episode Ed and Chris discussed their experiences with JavaScript testing tools, specifically how certain tools push you towards specific refactoring patterns. Chris talked about the successful launch of his latest book on using PHPUnit and got into some honest talk about revenue and how the product development course helped him make this book do 4 times the launch day revenue of his previous one. Ed discussed his plans to talk about mental illness on the conference circuit this year. Please help out by donating to the campaign! Rate us on iTunes here Follow us on Twitter here. Like us on Facebook here Listen Download now (MP3, 25.2MB, 55:13) Links and Notes Mocha Sinon.JS Jasmine and Jasmine-node QUnit The Grumpy Programmer’s PHPUnit Cookbook 30x500 LoneStar PHP Open Sourcing Mental Illness Require.JS PhantomJS Zombie.js Karma “It’s pronounced ‘jif’” Chris' new favourite thought leader Async JavaScript made the jump from Leanpub to bigger publisher DadBoner

All JavaScript Podcasts by Devchat.tv
050 JSJ QUnit with Jörn Zaefferer

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Mar 8, 2013 57:32


Panel Jörn Zaefferer (twitter github blog) Jamison Dance (twitter github blog) Joe Eames (twitter github blog) Charles Max Wood (twitter github Teach Me To Code Rails Ramp Up) Discussion 01:15 - Jörn Zaefferer Introduction jQuery QUnit 02:32 - QUnit jQuery Mobile Introduction to Unit Testing | QUnit 06:59 - Built-in support for HTML fixtures for your tests 08:50 - Unit Testing joshuaclayton / specit mmonteleone / pavlov 11:57 - Assertions fn:deep-equal 15:49 - Why use QUnit? unit testing - QUnit vs Jasmine - Stack Overflow stacktrace.js 023 RR Book Club: Smalltalk Best Practice Patterns with Kent Beck 26:01 - User experience for user interface 30:03 - Continuous integration setups Jenkins CI PhantomJS 023 JSJ Phantom.js with Ariya Hidayat jquery / testswarm jQuery's TestSwarm BrowserStack 36:55 - Testing in JavaScript Sauce Labs: Cloudified Browser Testing Testacular SeleniumHQ 43:35 - Add-ons Picks MYO - The Gesture Control Armband (Jamison) Mailbox (Jamison) Testing Clientside JavaScript (Joe’s Course) (Joe) DragonBox (Joe) Breeze.js (Joe) Anker Battery Pack (Chuck) App.net (Chuck) Leap Motion (Jörn) jQuery Validation Plugin Pledgie (Jörn) Next Week Finding a job Transcript JOE:  I'm really glad that I didn’t know you when Star Wars first came out....Dude! Vader’s Luke’s father. [Hosting and bandwidth provided by the Blue Box Group. Check them out at Bluebox.net.] [This episode is sponsored by Component One, makers of Wijmo. If you need stunning UI elements or awesome graphs and charts, then go to Wijmo.com and check them out.] CHUCK:  Hey everybody and welcome to Episode 50 of the JavaScript Jabber Show. This week on our panel, we have Jamison Dance. JAMISON:  Hello friends. CHUCK:  We have Joe Eames. JOE:  Hey, everybody. CHUCK:  I'm Charles Max Wood from DevChat.tv. I'm the only person on this particular episode whose name does not start with J. We also have -- I know I'm going to destroy this name. Jorn Zaefferer. JORN:  Hi! Yeah, it’s me. You should have practiced the last name too. CHUCK:  Yeah. JOE:  You should pronounce that correctly for us so we know. JORN:  Jorn Zaefferer. CHUCK:  Alright. Well, I can say Jorn. So, I’m going to stick with that. JORN:  Yeah, that works. CHUCK:  Do you want to introduce yourself for the people who aren’t aware of who you are and what you do? JORN:  Sure. I've been a freelance software developer for a little bit more than two years now. I'm involved a lot in the jQuery project and have been involved in it for years. So far, I'm the only person on the Board of Directors of the jQuery Foundation outside of the US. And for the jQuery project, I'm working mostly on jQuery UI and the testing tools. For jQuery UI, I'm one of the lead developers; another is Scott Gonzalez. For the testing tools, I'm leading that team. So, I'm trying to get contributions from other people so things move along evenly. There's usually much more work to do than I can handle myself. So, I'm trying my best to get open source going there. CHUCK:  So, you work on jQuery UI and QUnit? JORN:  I'm working on jQuery UI and the testing tools, which involve QUnit and a few other things. QUnit is the one that's actually featured on the jQuery site. We also have TestSwarm and even smaller tools that eventually should get there as well. It's much more in flux than QUnit is. CHUCK:  Interesting. So, we brought you on the show to talk about QUnit. Joe is kind of our testing guru as far as JavaScript goes. 
Is QUnit just a unit testing framework or do you provide other tools for integration with a backend or other libraries? JORN:  QUnit focuses mostly on unit testing. But people usually end up using it for other things as well. I heard a story where someone was using QUnit to do performance regression testing.
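For anyone who has never seen QUnit, a test is small enough to quote whole. In this sketch the greet function is a hypothetical example, while QUnit.module, QUnit.test, and assert.equal are QUnit's actual API:

    // A hypothetical function under test.
    function greet(name) {
      return 'Hello, ' + name + '!';
    }

    QUnit.module('greet');

    QUnit.test('includes the given name', function (assert) {
      assert.equal(greet('World'), 'Hello, World!');
    });

    QUnit.test('handles an empty name', function (assert) {
      assert.equal(greet(''), 'Hello, !');
    });

Loaded in a page alongside qunit.js and qunit.css with the standard #qunit container div, this renders QUnit's in-browser runner with the familiar red and green results.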

/dev/hell
Episode 21: The Grace Hopper Rape Whistle

/dev/hell

Play Episode Listen Later Oct 6, 2012


Twenty-one. Like blackjack. Like the gun salute. For those about to rock. We recorded this episode just a few days after #20, because we were able to get Elizabeth Naramore as a super-special secret guest! She talks to us about Codeconnexx, an open source tech and life skills conference in Indianapolis Nov 8-9. We also talked a lot about getting women to submit talks to conferences, including the success that Jan Lehnardt has had in this area with JSConfEU. We then get into Chris’s experiments with JS testing, and what we think of App.net from a developer perspective. If you support the inalienable rights of all humans, you’ll do these things: Check out our sponsors, Engine Yard and WonderNetwork Follow us on Twitter here. Rate us on iTunes here Listen Download now (MP3, 28.5MB, 1:06:42) Links and Notes Codeconnexx True North PHP Øredev 2012 How We Got 25% Women Speakers for JSConf EU 2012 Elizabeth Naramore’s Blog Qunit Jasmine Chai Mocha Sinon App.net Post photo by Liene Verzemnieks/@li3n3

The Tablet Show
Jeffrey Fritz Unit Tests Javascript in Windows 8

The Tablet Show

Play Episode Listen Later Aug 5, 2012 36:00


Carl and Richard talk to Jeff Fritz about doing unit testing of WinJS in Windows 8. Jeff talks about QUnit, the JavaScript testing framework built by the jQuery folks for testing jQuery. There are some interesting challenges for JavaScript testing, especially around automation. To deal with the complexities of working with WinJS in Windows 8, Jeff has forked QUnit to create QUnit-Metro. The conversation also digs into the challenges of working in WinJS, where tablets appear to be going, and what it's like hopping between two flavors of JavaScript.

/dev/hell
Episode 14: The PHP Guy Is Sulking

/dev/hell

Play Episode Listen Later Jun 14, 2012


This week we’re joined by Justin Searls, JavaScript developer and JS testing EXPERT. We talk lots about building and testing “fat” browser apps, particularly about best practices and different testing approaches. After a while Chris felt bad and told us to shut up. This was the first podcast we broadcast live while recording. Big ups to WonderNetwork for providing the streaming bandwidth, and Engine Yard for sponsoring the podcast. Keep an eye on the @dev_hell Twitter account for info on our next live stream. If you love us, you will do these things: You should follow us on Twitter here. You should rate us on iTunes here Listen Download now (MP3, 33.1MB, 1:16) Notes TestDouble training.gaslightsoftware.com Backbone Idiomatic JS Require.js Jasmine Cucumber Behat Michael Feathers’ book on working with legacy code Rails asset pipeline Dieter for Clojure Kohana Assets Assetic Webassets for Python Drumkit.js by Chris Powers QUnit Searls on GitHub https://github.com/searls/jasmine-fixture https://github.com/searls/jasmine-given https://github.com/searls/jasmine-stealth Pro JavaScript (John Resig)

Ploppcasts
Ploppcasts 011 - JavaScript und Client-Side Stromausfälle

Ploppcasts

Play Episode Listen Later Dec 31, 2011 90:13


This time with a completely unfamiliar lineup, because Dennis's fiddling with the equipment brought down the power grid on his street ("Laaaagestrassseeee!"), so his Bremen colleague Eckhard Rotte was patched in to the Schanzen studio on short notice instead. The topic is still JavaScript and client-side apps. Drinking game rules (also for the listeners at their receivers): naming an employer or client leads to the immediate consumption of a self-determined quantity of any liquid (our recommendation: alcoholic or non-alcoholic beverages are preferable to mineral oil, purulent discharges, and soft soap). Shownotes: - require.js: http://requirejs.org/ - Handlebars: http://handlebarsjs.com/ - Jade (templating with significant whitespace; we couldn't remember the name during the show): http://jade-lang.com/ - Mustache.js: https://github.com/janl/mustache.js - QUnit: http://qunitjs.com/ - Jasmine: http://pivotal.github.com/jasmine/ - Sinon.js: http://sinonjs.org/ - YUI Test: http://yuilibrary.com/yui/docs/test/ - YETI: http://yuilibrary.com/projects/yeti/ - Casper.js: http://casperjs.org/ - Phantom.js: http://phantomjs.org/ - Cucumber: http://cukes.info/ Picks from Jan: - RaspberryPi: www.raspberrypi.org - Optoma Pico: http://www.optoma.de/picoconsumer.aspx?PC=PK120&ShowMenu=Y&PTypeDB=Pico - Apple Keynote: http://www.apple.com/iwork/keynote/ - Book: Test Driven JavaScript Development: http://tddjs.com/ Picks from Peter: - RevealJS at the next Hamburg Ruby UG: http://hamburg.onruby.de/wishes/in-browser-prasentationen-mit-reveal-js - Teleport: https://github.com/gurgeous/teleport - Google Tag Manager: http://www.google.com/tagmanager/ - Yps reboot: http://www.heise.de/newsticker/meldung/Was-bist-du-gross-geworden-Yps-fuer-Erwachsene-1727146.html Pick from Daniel: - Rails 4 whirlwind tour: http://vimeo.com/51181496 Pick from Ecki: - Icebreaker underwear: http://de.icebreaker.com/