In this internal episode, Charles and Wil talk about testing issues and BigTest solutions. Pieces of the testing story are discussed, such as the start and launch application, component setup and teardown, interacting with the application and component, convergent assertions, and network. Then they talk about testing issues: the fact that cross-browser and device-simulated browsers are not good enough; maintainability and when and when not to DRY; slowness and why (acceptance) testing is slow; portability and why tests are coupled to the framework; and reliability. Finally, they talk about BigTest solutions:
@bigtest/cli to start / launch (Karma recommended for now)
@bigtest/react, @bigtest/vue, etc. for setup & teardown
@bigtest/interactor for interactions
@bigtest/convergence for assertions
@bigtest/network in the future (Mirage recommended for now)
Resources: Justin Searls – Please don't mock me
This show was produced by Mandy Moore, aka @therubyrep of DevReps, LLC.
Transcript: CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode 115. My name is Charles Lowell, this episode's host and a developer here at the Frontside. With me today to talk some shop is Mr. Wil Wilsman. WIL: Hello. CHARLES: Hello, Wil. WIL: How's it going? CHARLES: It's going good. I'm actually pretty excited to jump into this topic because we're going to be talking about some of the big things that are happening at Frontside and some of the things that we've been developing for almost the last year now. WIL: Yeah. It's been about a year now. CHARLES: It's been about a year and we've talked about it on various podcasts, but we're going to be talking about it again because there's just been so much progress that we've made, and I think a lot of clarity in what we're going for when we talk about BigTest and testing big and how we want to roll out the BigTest framework. 
We just have a lot more experience using it on a number of different projects, so we get to talk about that today. Before we get started, I just wanted to talk a little bit about what BigTest is, both in terms of the framework and also the philosophy. Wil, you're the one who works the most on BigTest. When you think about it philosophically, what does BigTest mean to you? WIL: It's the size of your test, not a physical size like size in storage, but how much your test actually does. The test itself can be very small, as our tests are, but it tests the whole application, from the user interacting with it down to the network requests. That's the definition of the philosophy of a BigTest to me. It's to test your application from the biggest point of view. CHARLES: Actually achieving that can be surprisingly difficult, especially in a frontend JavaScript application, and there are a lot of solutions out there for testing and we've talked about them. One of the questions that arises is when we talk about BigTest, what exactly are we talking about? Are we talking about a product that you can download and install? Are we talking about the philosophy that you just outlined? Or are we talking about the individual pieces of software that make that philosophy real? I think the answer is we're kind of talking about all three, but we want to take this episode to talk about where we're going with the product. What we've identified are the subcomponent pieces of that product. In other words, in order to get started testing big, what are the things that you need to think about? What are the things that you need to do? And then what are the component pieces? Because one of the things that I think is very important to us is that you be able to arrive at wherever you are in your project, whatever framework you are using, whatever current testing solution, and be able to begin using BigTest. 
That means you might be using some of it or you might be using a lot of it, but we want to meet you exactly where you are, so that you can then get onboarded and start testing big. WIL: Yeah. Definitely an important distinction that we get confusion about is what BigTest is. People just assume this whole test suite is BigTest, but we use parts of it ourselves: we use Mocha, which is not part of BigTest; we use Chai, which is not part of BigTest; we use Mirage, which is kind of part of BigTest but definitely didn't originate in BigTest; and Karma and things like that. BigTest isn't your testing suite. It's not one thing you go grab to start writing tests. It is small pieces that you can use in conjunction with other small pieces, just to make it really easy and flexible to test your application. CHARLES: Exactly. Because it turns out that there's a lot going on in the application. Maybe we should talk about what some of those pieces are that you might want to start using BigTest with, or that you might need to test big, I guess I should say. What's a good place to start? Let's start with talking about some of the issues that you face when you're testing big. Then we can talk about what pieces of the testing story fit in to solve those issues. One of them is you need to test that your application works, like actually works. That means you need to be able to test on a multiplicity of browsers, for example. We're limiting ourselves to the domain of web applications. There are actually a shockingly large number of browsers. It's not just Chrome. It's not just Safari. There's Mobile Chrome, Mobile Safari, which are subtly different. There's Edge and I'm sure Mobile Edge is slightly different too, so you want to be able to test cross browser, right? 
WIL: Yeah, absolutely, and things like Nightmare and JSDOM and other simulated browsers, we don't necessarily think those are the best tools for writing BigTests because we want to ensure that those browser quirks are caught and tested as well. CHARLES: This is not theoretical. Sometimes the parser is slightly different and you have something that throws a syntax error in Safari or in Internet Explorer and your whole app is completely busted. If you had just taken the time to even try loading the app in that browser, you would have caught that. I've been bitten by that many times. WIL: Yeah, and one that just came up yesterday, which comes up frequently, is not closing your CSS selector. Chrome doesn't really care, most browsers don't care too much, but that will fail in Edge, and depending on what you're missing, how it fails is part of that too, but mostly Firefox and Chrome don't care about that kind of thing. CHARLES: Right. It seems like the majority of testing solutions are kind of focused around Headless Chrome or some variation of Electron, where that entire class of really dumb errors has already been caught. Like I said, to actually catch it takes less than a millisecond of CPU time, just loading it in the browser and seeing that the thing doesn't work. Unfortunately, they can be catastrophic errors, but the problem is how do you actually do it? We want to test cross browser. This is something that we want to do. For me, I just can't imagine shipping an application without having some form of cross browser testing, some capability of being able to say, "I want to test it," like, "We want to work on these eight browsers and so we're going to test it on these eight browsers," but how do you actually go about doing that? WIL: Right now, we are working on the BigTest CLI, which will help us launch browsers, but that's not complete yet. It still has some bugs. In the meantime we've been using Karma, which is great. 
Basically, you just have this service that's able to find the browser binaries on the system and launch them pointing to localhost with your app loaded up, and your normal development server takes care of loading the tests up and running them. Karma and the BigTest CLI are just there to capture output and launch those separate browsers. CHARLES: Yeah. I remember when I was first working with Karma, and I think Testem is another tool that's in this space. There's Testem, Karma, and with BigTest we're actually developing a launcher, because launching is something that you're going to need, but it's such a weird problem. I feel like with the browser launchers, there's three levels of inversion of control, because you're starting a server that then starts another process, which then calls back to your server, which then loads the app resources, which then loads the tests and then runs the tests. There's a lot of sleight of hand that has to happen and – WIL: Including injecting the adapter that you use, like the Mocha adapter, the Jasmine adapter, that ends up reporting back to the CLI. That's something that Karma and Testem and BigTest will handle for you. CHARLES: Right, so you're fanning out the test suite to a suite of browsers, then collecting the results, but basically, you need some sort of agent living inside the browser that's going to act on behalf of the test suite, to collect the results. I remember when I first came into contact with Karma and Testem, I was like, "This is so unnecessarily complex," but then, having used it for a while, I think there is some complexity that can be removed, but if you want to do cross browser testing, a certain amount of that level of ping-ponging is just necessary. It's something that's actually quite complex that you need to have in your stack, in your toolbox, if you want to truly test big. 
WIL: Yeah, and all these solutions have mechanisms for detecting when the browser has launched, restarting the browser based on health checks, etcetera, things that you wouldn't think of when just loading up a browser but that you need to think of when you're doing automated testing. CHARLES: What is it that sets apart, for example, the launcher solutions? We kind of call this class of solutions launchers, so Testem, Karma, the BigTest CLI. What is it that sets the BigTest CLI apart from, say, Karma and Testem? WIL: We're trying to be as minimal config as possible and just really easy to get started and going. Karma has a lot of plugins that you need to make sure you have installed and loaded, and options set for those plugins. Testem has some stuff bundled, but it still requires this big bulk of config at the beginning that you need to pass in before it does what you want. We're trying to avoid that with the BigTest CLI, and one of the ways that we're able to avoid that is by just letting your Bundler handle bundling the tests. In Karma, you need karma-webpack or something. Testem has some stuff that it needs, and really, we just want an in-testing mode. When you're in the testing environment, just change your index to point to your tests instead of your application, and your Bundler will do all the work; we just serve that file and collect the results. CHARLES: Right, so it doesn't matter if you're using Parcel or you're using webpack or you're using Ember CLI. WIL: Yeah, Rollup even. CHARLES: Or even just low level Broccoli or Gulp or whatever. There's a preponderance of bundling solutions, and that was always something that was just a huge pain in the butt with Karma. 
I know, just getting to the point where my tests are loaded, and with Testem, most of my experience with Testem comes through how it's used in Ember CLI, like the histrionics undertaken just to bundle all your test assets and your application assets and your vendor assets and just kind of bootstrap that thing. It's a lot of work. WIL: Another thing with the BigTest CLI is that it doesn't include a concept of a watcher like Karma and Testem do, because all these Bundlers have HMR -- hot module reloading. Rollup and things like that come with plenty of plugins; Parcel has it set up out of the box. If you're using your existing Bundler to bundle your tests, you get that watch feature for free, so it's another complexity that the BigTest CLI kind of eliminates. CHARLES: What it means is we've hidden most of that complexity. Just let the Bundler handle it, right? The Bundler is the part of your project that bundles. WIL: Yeah. CHARLES: You shouldn't have your launcher actually doing that for you. But we still do need to have some way to do that set up and tear down. When we have that testing endpoint, we have some way to say, "We're starting a test, not the application. We're ending the test, tear it down," so how do you abstract that away? WIL: That's kind of something that we can't really avoid. There is just some sort of dependency on the framework itself, your application framework. You need to mount a React app. You need to mount an Ember app, etcetera, and there's different ways to mount those things. 
This is one of the things that can't really be decoupled as much as everything else can, but BigTest has BigTest React and BigTest Vue, and we want to eventually get BigTest Ember, but really, the main export of all these packages is just a simple mount helper that will mount and clean up your application for you in your testing hooks, whether you're using beforeEach from Mocha or before from something else like Jasmine. You know, no matter what you're doing, you just have a hook that mounts your application and then cleans it up on the next mount. CHARLES: It's worth pointing out here that this is kind of a core concern of testing, and testing big is being able to mount your application and tear it down with regularity and having hooks into that process. Whether you're using BigTest or whether you're not, can you still use BigTest React and BigTest Vue, even if you weren't using anything else? WIL: Yeah, absolutely. Like I said, they just export simple mount helpers. I don't even think they have any other internal BigTest dependencies. They just have pure dependencies on their frameworks. CHARLES: Right, and so you could use them even if you wanted to roll everything else by hand, or you wanted to get started somehow and you needed to do set up and tear down. Again, this is something that's key to being able to test big, so you should be able to use it independently, whether you use the CLI or not, whether you're using any of the other tools or not. All of the tools can be used independently. WIL: Then another feature of BigTest React and BigTest Vue is that the tear down happens before the set up, rather than happening after your test runs as a separate tear down. This means that whether your test passes or fails, you can look at it and play with it and inspect it and debug it much easier than if you had torn down. Otherwise you have to disable the tear down or throw a pause in there or something. CHARLES: Yeah, I love that. 
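The tear-down-before-set-up pattern Wil describes can be sketched in a few lines. This is an illustrative stand-in, not the actual @bigtest/react or @bigtest/vue export: `setupApplicationForTesting` and `createApp` are hypothetical names, and a real helper would mount into the DOM rather than call plain functions.

```javascript
// Sketch of tear-down-before-set-up: instead of unmounting after each
// test, the previous app is torn down at the start of the next mount,
// so the last app under test stays up for inspection after the run.
let currentApp = null;

function setupApplicationForTesting(createApp) {
  // Tear down whatever the previous test left mounted...
  if (currentApp) {
    currentApp.unmount();
  }
  // ...then mount a fresh instance for this test.
  currentApp = createApp();
  return currentApp;
}

// In a Mocha suite this would run in a beforeEach hook, e.g.:
// beforeEach(() => setupApplicationForTesting(() => mountMyApp()));
```

Because nothing runs after the final test, whatever that test mounted is still there to poke at in the browser.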
When something goes wrong, you can just let the test suite run, and after the last test it runs, it just leaves the app set up. It does the tear down right before the set up. WIL: Exactly, yeah. At the very end of the whole test run, there's an app there waiting for you to play with. CHARLES: If you focus in on a single test, we most commonly use Mocha, so you say a '.only' to run that single focused test, then you have the state of the application at that test case set up and ready to go. You can just play with it, you can inspect it, you can actually just use it as a starting off point and interact with the app normally as you would. WIL: I want to say Cypress does this too. They do their tear down before their set up as well. That's how you're able to play with Cypress tests. CHARLES: Yeah, I like that trick. Now, we talked about launching, setup and tear down, but we haven't actually talked about much of what actually happens in the test cases themselves. We talked about how to start and launch your test suite, how to do that across a bunch of different browsers, how inside of that you have a separate concern, application set up and tear down, and how you want to lean on how your app is actually bundled because that fits in with the philosophy of testing big. You don't want to use an external Bundler for your test suite. You want to use your real Bundler, so the assets look how they're actually going to look. But when it comes down to actually writing the tests, you need to be able to interact with the app at the highest level that you possibly can. When I say highest level, we want to verify that users, when they take certain actions, will see certain outcomes, and so we want those outcomes, and we already talked about this, to be reflected in a real DOM, in a real browser. But at the same time, we want the interactions to be as high fidelity as possible, so you want to be sending events to the browser. You want real mouse events, real key events, real interactions. 
WIL: Yeah, interacting with the application. That's another core philosophy that we kind of talked about earlier that defines a BigTest: it's the user interacting with your application. We're not calling methods and expecting callbacks or arguments to be passed. We're clicking on a button and expecting a message to pop up that says, "Form submitted successfully." These are user-facing things we're asserting on and acting on. CHARLES: Yeah, and then it can be really tricky because these things don't happen synchronously. They're happening inside of your browser's event loop. I click that button and then it goes off and there's some loading state, and then I might get an error message that pops up, this thing that animates in and then goes away. The state of the browser is in constant flux. It's constantly changing, and so it can be very difficult to put your finger down and say, I want to be in this state, if you are limiting yourself to only reading from the DOM. Some frameworks, Ember for example, give you kind of a white box where you can actually inspect the state of the Ember run loop and use that to do some synchronization, but it can be very, very hard to coordinate these interactions. WIL: Yeah. You know, that gets us to the solution, which is the BigTest interactor, basically modern page components or page objects. If you've ever heard of page objects, it's just a way to encapsulate interacting with big pieces of your pages. It's not a new concept. It's been around for a while, but BigTest interactor has kind of a new twist on it where they're immutable, composable interactions that are also convergent, which we'll get into later, which basically means if your button's not there, it won't click the button until it is there. They're really powerful and they make it really easy and fun to write these tests. CHARLES: Yeah, they're super powerful. 
I remember we talked about convergences last time when we talked about BigTest, but interactors, I think, are definitely a new development. I think we should spend a little bit of time there talking about not just the power but also the ergonomics of interactors, because they are like page components or page objects, except they're scoped to the component. Not only do they have all this wonderful stuff where it'll make sure that the component exists before it starts to interact with it and things like that, but they're composable. If I have a button, then there are certain operations that are valid for that button. I can click it. I can hover over it. I can do all these things. They're the operations that make it unique to the button. Now, those might actually map to real events. WIL: Similarly, there are assertions about that button as well, like is it primary or secondary. If this button is repeated throughout your application, you might want to make sure that your form has a primary and secondary button. CHARLES: Exactly. It really encapsulates all the knowledge of how you can interact with it, both in terms of taking action and reading state from that button. It almost feels like an accessibility API. It would be easy to write a screen reader if you had these interactors for every single component on the page. WIL: That's kind of what it is. It's just like you're defining an API around how your user would interact with your application and what your user would expect in the application. That's the point of page objects and interactors: you're defining this user API, essentially. CHARLES: Yeah, and so really, the step that interactors take is that they take the classic page object and make it composable, so I can have, you kind of touched on this before, a modal dialog interactor, which is composed out of two button interactors. 
One for the primary action, one for the secondary action, and maybe it's aware of its own title text, so you can assert on the title text, but I didn't actually have to write the individual button interactors for that modal dialog interactor. Then I might have a second modal dialog interactor, or a form that's on a modal dialog, just composed of the modal dialog interactor and the individual form components which appear on that particular modal dialog. WIL: It's essentially how we've been building applications lately with components, but this is for page objects in your tests, if you want to mirror that. You don't have to have one-to-one mappings of an interactor to a component, but if you do, it's really powerful. CHARLES: Yeah. I found that when we have one-to-one interactors, that's when it just feels the best. WIL: Yeah, and on top of this, if you have a component library and your component library exports the interactor that it uses for the component tests, like we said, these BigTest technologies are sprinkle-able. We don't have to use interactors only in big acceptance tests. We can use them for smaller component tests too, so if we ship these component interactors with the component library, your application that's consuming this component library can now test those components for free, without having to write its own interactors. It can just compose the interactors exported by the library. CHARLES: Man, I almost want you to repeat that word for word again, just so it can sink in. It's so awesome. Because when you actually go to write your tests, you're not starting from ground zero like, "How do I do this?" You're like, "I'm writing some tests for this thing and I'm using these components and so, I've already got the prepackaged interactions for those components." It's like you start writing your tests. If your tests are a 10-story building, it's like you're starting on Floor 7 and you only have to walk up to Floor 10, instead of slogging up all 10 stories. 
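The composition Charles describes can be sketched with plain classes. This is a hypothetical illustration of the page-object idea, not the real @bigtest/interactor API: the selectors, class names, and the `page` event log are invented for the example, and a real interactor would dispatch genuine DOM events and converge on elements existing.

```javascript
// Sketch of composable page objects: each interactor is scoped to a
// selector, and a bigger interactor reuses smaller ones by nesting
// their scopes, so a modal never re-describes how to click a button.
class ButtonInteractor {
  constructor(scope) {
    this.scope = scope;
  }
  // A real interactor would dispatch an actual click event; here we
  // just record the full selector the click would target.
  click(page) {
    page.events.push(`click ${this.scope}`);
    return this;
  }
}

class ModalInteractor {
  constructor(scope) {
    this.scope = scope;
    // Composition: the modal is built out of two button interactors.
    this.primary = new ButtonInteractor(`${scope} .primary`);
    this.secondary = new ButtonInteractor(`${scope} .secondary`);
  }
  confirm(page) {
    this.primary.click(page);
    return this;
  }
  dismiss(page) {
    this.secondary.click(page);
    return this;
  }
}

// Usage: a delete-confirmation dialog reuses the generic modal.
const page = { events: [] };
new ModalInteractor('#delete-dialog').confirm(page);
```

A one-to-one mapping between components and interactors like this is what makes a shipped component library's interactors immediately reusable in a consuming app's tests.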
WIL: One really helpful interactor that we work with in the open source stuff we've been working on is a date-picker interactor, because date-pickers can be really complex. Having that common interactor, and having a date-picker on multiple forms where we can just use that one interactor, we don't have to tell every single test how to interact with that date-picker. We just say pick date and pass the date. CHARLES: Yeah, it's so awesome. That is actually a great example. It doesn't feel scary to write a test for a page that has a date-picker on it, or two if you're doing a date range or something like that. You're not like, "Oh, my God. How do I write the selectors to test this?" You just import your date-picker interactor, you set the date, it actually worries about all the low level events and there you go. It feels like you're operating at a much higher level. WIL: Yeah. With the interactor API, essentially you're telling the test what the user would be doing and what the user would be seeing. CHARLES: Yeah. It's worth pointing out again. We've identified starting and launching. We've identified set up and tear down. But interaction is a core concern of testing big, no matter what tool you're using. One of the things that we found is that interactors are something that you can sprinkle on literally any test suite if you're testing an interface, and it makes it better. We've used them inside big acceptance tests. We've used them inside Jest, doing just little component tests. There are people in the BigTest community who have used them to basically write component tests against JSDOM, and while theoretically, philosophically, you want to make those tests as big as you possibly can, you can use that piece in your test suite. 
If you are using a simulated DOM, or if you're running in Node instead of a browser, these interactors will still work and you're going to get high fidelity test cases that are resilient to this asynchrony and are composable, and if you do have a full-fledged test suite, you can reuse these interactors. They are a really awesome power up that you can bring into your test suite. WIL: And they are not tied to the framework at all. We use them in React for our stuff, but we've also written some in Ember. Robert's written some in Vue and ported some tests, and one of the beautiful things we've seen from this is that one interactor goes everywhere. You just write the interactor once and you can use it in Ember, in React, in Vue, in those test suites. If the rest of your test suite is framework agnostic, you can jump frameworks and your test suite still works and can test your application with high fidelity. CHARLES: Yeah, it's fantastic. I remember when we first tried using interactors inside an Ember test suite, because Ember comes with like a big kitchen sink testing set up, but interactors just slotted right in and there was absolutely no issue. WIL: Yeah, and there is actually a speed boost even, because most of the Ember test helpers hook into the Ember run loop and interactors do not. There is actually a good speed boost just using interactors. CHARLES: Yeah. This is a good point. It's a good segue because typically, we think of acceptance tests as being really slow, and one of the reasons, even for the people [inaudible] acceptance tests or testing big, is they think it's going to take a long time. We found that actually we've been able to maintain a happy medium of testing big but also having those tests be really, really fast. When you say you got a speed boost from using interactors with Ember, where does that speed boost actually come from? 
WIL: I mentioned the Ember test helpers hook into the Ember run loop and interactors don't, and the reason for this is that interactors are convergent: they wait for things in the DOM to exist before interacting with them. Instead of waiting for the framework to settle, it just waits for the thing to appear and then interacts with it immediately. If you're asserting something about a button toward the top of the page, you don't really care that another button at the bottom of the page hasn't rendered yet, unless of course you have an assertion about that, but because they're convergent, you don't need to hook into the run loop and wait for the entire page to load to interact with just one piece of it. CHARLES: Right. You're just waiting and you say, "I'm expecting something to happen and the moment I detect it, no matter what else is going on, the page could be taking 30 seconds to load, but if that button appears and I can interact with it, I can take my action then or I can make my assertion then." It's about kind of removing gates -- artificial gates. WIL: Yeah. Another common thing that's helped with is animations. As most tests are hooked into the run loop, you kind of have to wait for some of these animations to finish before you can even interact with the element, and that means if a modal has a half second animation where it flies in and you have 30 tests around this modal, those tests are extremely slow now because you have to wait for that modal to come in, whereas -- CHARLES: -- Straight up flaky. WIL: Yeah, straight up flaky. Whereas in the actual DOM, that modal is inserted pretty immediately and can be interacted with pretty immediately. With interactors, they don't need to wait for the animation to finish. They can just immediately interact with that modal, but of course, if you need to wait for the animation to finish, there are options for that as well. CHARLES: Yeah. 
If there's some fade in that needs to happen, you can kind of assert on any state and as long as it's achieved at some point, the interactor will recognize it, and recognize it at the soonest possible time that it could. I remember getting bitten on one project where the modal animations in particular were so brutal. Not only were they flaky, they just were slow because there were all these manual timeouts. It wasn't even a paper cut. It was kind of like a knife cut, like there's someone sitting there kind of slashing you with a pocket knife. It just was a constant source of pain in your side. WIL: Yeah, and that's how you end up with things like waits and sleeps in your test suite. When you need to wait for the animation to happen or something, you just see a sleep for four seconds with a comment, "we have to wait for the components to load in." That's kind of a code smell now. CHARLES: Yeah, that's just asking for trouble, both in terms of slowness and in terms of it's going to get flaky again. That has been one of the most freeing things about working with interactors, and working with the convergent assertions on which they're based, is you just don't ever have to worry about asynchrony. Really, really truly, most of the time you're writing your tests like it's all synchronous, and that kind of makes sense because from the user's perspective, their consciousness is synchronous and they don't care about the internal run loop. They're making observations in serial, and at some point they're going to observe something, so the interactor sits at that point and really observes the application the way that your user would. WIL: Yeah. We've mentioned a few times now the convergent assertions, which interactors are based on. A little caveat there: if you're using interactors and you're making non-convergent assertions, they might fail or be flaky. 
That's because interactors wait for the thing to be there to interact with, so as soon as the button's there, it clicks it, but it doesn't wait until after that event has fired and your application has reacted to that event. We need something there, like our convergent assertions, that can converge on that state and wait for that state to be true before it considers itself passing, or it times out. CHARLES: Maybe we should dig a little bit into convergent assertions. I think the last time we had a public conversation on the podcast about this, this is kind of where we were: we hadn't built the interactors, we hadn't built these other component pieces of the testing story. We were really focused on the convergent assertion. We've talked a little bit about this, but I think it's worth rehashing a little bit because it's a unique way of approaching the system, but it's also kind of horrifying when you see how it works under the covers, like when we tell people about the fact that it's basically polling underneath the covers. The timeout is configurable, but it's basically polling every 10 milliseconds to observe a state. I remember the first time being confronted with this idea, and I was horrified, and my programmer hackles on the back of my neck raised up and I was like, "Wait a minute. This is going to be slow. It's going to be computationally intensive." WIL: Yeah. That was my exact thought too: this is going to be slow. If acceptance tests are slow and we're doing an acceptance test assertion every 10 milliseconds, it's going to be really slow, and that's actually not the case at all. It's actually the opposite. They're extremely fast. CHARLES: It is shockingly fast. You've got to try it to believe how fast it is, how fast you can run acceptance tests. WIL: Yeah, we're talking like 100 tests in just tens of seconds. CHARLES: Right. You're basically gated by how fast your framework can render. 
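The polling mechanism being described can be sketched as a small helper. This is an illustration of the idea behind convergent assertions, not the actual @bigtest/convergence API; `converge` and its defaults are invented for the example.

```javascript
// Sketch of a convergent assertion: poll the assertion every 10ms
// until it stops throwing, or reject with its last error once the
// timeout elapses. Most of the time is spent idle between polls,
// which is why the polling is so cheap in practice.
function converge(assertion, timeout = 2000, interval = 10) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    (function poll() {
      try {
        resolve(assertion()); // passed: the state has converged
      } catch (error) {
        if (Date.now() - start >= timeout) {
          reject(error); // the app never reached the expected state
        } else {
          setTimeout(poll, interval);
        }
      }
    })();
  });
}

// Usage: wait for a state change without any explicit callback wiring.
let submitted = false;
setTimeout(() => { submitted = true; }, 50);

converge(() => {
  if (!submitted) throw new Error('not submitted yet');
}).then(() => console.log('form submitted'));
```

The assertion resolves at the first poll after the state becomes true, so the test pays only a few milliseconds of latency rather than waiting for a whole framework settle cycle.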
Your tests are not part of the slowness. Your test -- WIL: And also, memory leaks can be costly too. We experienced that recently, where we had memory leaks that were slowing down our tests, but we fixed those up and our tests sped back up. CHARLES: Yeah, because basically, running the assertion or running the convergence is very fast. It's just a very light ping. I kind of think of it as being as light as the brush of a photon or something bouncing off of a surface so that you can observe it. It's extremely light, and most of the time it's just waiting, so the test and the convergence really just get out of the way. Just because they can run a hundred or a thousand times in a second doesn't mean they gum anything up. But it means that your tests run as fast as your application will run. It gets back to that point... was it React where the key insight was that JavaScript is not the bottleneck? Well, your tests are not the bottleneck either. WIL: Yeah. CHARLES: I guess this is what it is. I don't know if there's anything else that you want to say about convergences. WIL: No. We pretty much summed it up there, and that's what interactors are based on. That's how they're able to wait for things in the DOM. It basically polls the DOM until the element exists, and then it moves on and actually does the interaction. CHARLES: Once again, this is actually a very low-level thing on which BigTest is based, but it's also something that you can use independently. You can write your own convergent assertions. You can write your own convergences that honestly have nothing to do with testing or assertions. It's a freestanding library that you can use in your test suite or elsewhere, should you choose. WIL: There doesn't need to be a DOM for BigTest Convergence either. I use BigTest Convergence in BigTest CLI to converge on the browser being launched.
Instead of waiting for the browser to report that, I can just kind of poll and see how that process is doing, and the convergence waits for that process to start before moving on. CHARLES: Right. I guess the best way I've thought about it is that it's a way to synchronize on observations and not on callbacks. It's a synchronization mechanism, and 99% of the synchronization mechanisms that we're used to involve some sort of callback, a promise, an event listener, things like that, or even a generator, where control is handed back explicitly to a piece of code when something happens. Whereas this is a fundamentally different synchronization primitive, where you are writing synchronous code that's based on observations: when I observe this, do this; when I observe that, do that. It's extremely robust. WIL: Yeah, very. CHARLES: It is a core piece. A fundamental thing on which interactors are based, on which the CLI is based. I don't know if it's core to writing tests, but -- WIL: It definitely helps. CHARLES: It more than helps. We couldn't have BigTest Interactor without it. WIL: No, definitely not. CHARLES: Because that's what makes it fast, that's what makes it not flaky at all, and having those things, I think, makes it easy to maintain, because you can work at the interactor level, or the level of user interaction, and you don't have to worry about synchronization, so the flow of your tests is very natural. WIL: Yeah. We don't have to explicitly wait for a request to be done before making an assertion about your app. That just comes with convergences, just waiting for that state in the application to be true. CHARLES: Let's talk about one more piece of the testing issue, because when you're testing big, when you're testing in the browser, there's always the issue of what you are going to do about your API. You've got to have your API running.
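The "synchronize on observations, not callbacks" idea behind interactors can also be sketched without any DOM at all. Everything below is hypothetical for illustration (`waitFor`, the fake `app`, `clickSubmit`); the real @bigtest/interactor API looks different, but the principle is the same: before acting, poll until the thing you need can actually be observed, then act at the earliest possible moment.

```javascript
// Poll a lookup function every 10ms until it returns something,
// or give up after a timeout. This is the observation primitive.
function waitFor(lookup, timeout = 1000) {
  const start = Date.now();
  return new Promise((resolve, reject) => {
    (function poll() {
      const target = lookup();
      if (target != null) {
        resolve(target);
      } else if (Date.now() - start >= timeout) {
        reject(new Error('timed out waiting for target'));
      } else {
        setTimeout(poll, 10);
      }
    })();
  });
}

// A fake "app": the submit button only appears after a simulated
// render delay, like a component mounting asynchronously.
const app = { buttons: {} };
setTimeout(() => {
  app.buttons.submit = { click: () => { app.submitted = true; } };
}, 30);

// The interaction reads like the user's point of view -- "click
// submit" -- with the waiting hidden entirely inside waitFor.
async function clickSubmit() {
  const button = await waitFor(() => app.buttons.submit);
  button.click();
}

clickSubmit().then(() => console.log('submitted:', app.submitted));
```

Note there is no explicit sleep anywhere: the test code is written as if the button already exists, and the synchronization happens against the observation itself.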
It's just always an issue, and this is kind of interesting because it sits at the crossroads of testing big and also getting the most utility out of your test, because in an ideal world, if you're testing really big, you're going to be using a real API. You're not going to poke holes in reality. WIL: Yeah. One of the things that we avoid in BigTest is poking holes. We're not shallow-mounting the components and testing the methods and the results. We're fully mounting these things and fully interacting with them through the full DOM API. CHARLES: Yeah, exactly, using real browsers. It just occurred to me the irony of us talking about reality being things that are still running inside of a computer processor. I think we've inherited this term from that talk that Justin Searls gave at AssertJS in 2017. It's a really, really excellent talk. I think he also gave it at RubyConf. It's the 'Please don't mock me' talk. WIL: Yeah, it's one of my favorite talks. CHARLES: Yeah, it's a great talk. In it, he talks about how the value of a test is a balance of how many holes you poke in reality, and sometimes you encounter a test where all it is is holes in reality. You're mocking this, you're mocking that, you're mocking the DOM, you're mocking the browser, you're mocking your network layer, you're mocking this external API, and the more holes you poke, the less useful it's going to be. Network is one of those where it can be very difficult not to poke holes in that reality, because it's a huge part of your application. So much of your frontend application is how it's going to interact with the server, but at the same time, servers are gigantic pieces of software themselves, each with their own dependencies, each with their own setup and teardown -- WIL: Their own concerns. CHARLES: Yeah, exactly. They might be in a different language. They've got runtimes; they might need external C libraries and crazy stuff like that. They're their own beast.
To get a true big end-to-end test, you're going to have to stand up your server, but the problem that presents is that you also want your tests to be isolatable. If you're a developer, I can go to a repo, I can do an install of my dependencies, and I can run the tests without any external dependencies other than the repository and the language in which I'm working. This is one where we've kind of tried to walk the line of not wanting to poke holes in reality but also having the test be containable to the actual application. In order to do that, you need something that presents a high-fidelity version of the network. You can kind of try and have your cake and eat it too. You want to have something that acts like a server, really acts like a server, but actually is not a server. WIL: And still poke as few holes as possible in the application and how that's all set up. We don't want to be intercepting methods and responding with fake data. That's not a good way to mock that network. CHARLES: Right. We want to be calling actual fetches, calling actual XMLHttpRequests. Ideally, if you've got service workers, making actual service worker requests. WIL: Basically, as far as the application is concerned, it's talking to a real server. CHARLES: Yeah, and that's kind of the litmus test: is it a hole in reality or is it just a really great illusion? WIL: Yeah, and that's a good name for Mirage, right? It's a really great illusion. CHARLES: Yeah. It is a simulation of reality, so we use Mirage, which is something from the Ember testing world but something that we have extracted and made available as BigTest Mirage. WIL: Yeah. The main difference is just that we've taken away the Ember dependencies and the run loop stuff. It's just plain JavaScript Mirage. It works exactly the same as you'd use it in Ember, minus the auto-imports and the file... Oh, man. I can't think of that word.
Aside from automatically importing your files for your server config, you have to do that manually, because Ember is what provides that, but other than that, it's the same Mirage. You define models and serializers and factories and all the good stuff. CHARLES: Right, and then you can use those factories and you can use those models to really give a high-fidelity server. If you are building something in whatever framework, you can use BigTest Mirage to simulate that network layer. Again, we've used it in a number of different scenarios, but having that in place means that you're going to be able to have those high-fidelity tests where your application is actually making XMLHttpRequests, but it's all isolatable, so that it can be run in-repo. This isn't really related to testing, but it has a fantastic capability where you can use the factories to prepopulate your server with data, so that you can use the application without the actual server being implemented. WIL: Yeah. That's extremely powerful. That's what we were talking about earlier and getting at: the scenarios, which are setting up specific, essentially, fixtures, except you're generating these fixtures. Factories are essentially high-level fixtures, network fixtures. CHARLES: Yeah, higher-order fixtures. WIL: Yeah, so the scenarios are just setting up these fixtures for a scenario of your application, like the backend is down, or the list only responds with two items as opposed to 5,000 items, something like that. You want to be able to not only test these things but be able to develop against them, and Mirage makes that really easy, because you can just start your app with Mirage enabled, point it to that scenario, and you're there. You have that exact scenario to develop in. CHARLES: If you've never used Mirage, it is really hard to understand just how incredibly powerful it can be.
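The factory/scenario idea Wil describes can be reduced to a toy sketch. This is a dependency-free illustration, not the actual Mirage API (which has models, serializers, and real route handlers): factories generate records on demand, and a scenario seeds a fake server with a particular shape of data, like "the list only responds with two items."

```javascript
// A toy factory: returns a generator function that stamps out records
// with sequential ids, default attributes, and per-call overrides.
function factory(defaults) {
  let sequence = 0;
  return (overrides = {}) => ({ id: ++sequence, ...defaults, ...overrides });
}

const makeItem = factory({ name: 'Item', price: 100 });

// A toy "server": a scenario function seeds its database, and the
// route handler reads from that database, just like a real backend.
function createServer(scenario) {
  const db = { items: [] };
  scenario(db);
  return {
    // stands in for handling GET requests
    get(path) {
      if (path === '/items') return { status: 200, body: db.items };
      return { status: 404, body: null };
    }
  };
}

// Scenario: the list only has two items.
const server = createServer(db => {
  db.items.push(makeItem(), makeItem({ name: 'Special', price: 250 }));
});

console.log(server.get('/items').body.length); // 2
```

Swapping in a different scenario function changes the entire state of the "backend" without touching the application, which is what makes the develop-against-a-scenario workflow possible.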
We've used it now on at least four projects where we developed the entire first version of the product without any backend whatsoever. It's an incredible product development tool, even apart from testing, that then informs the shape of what the API is going to be. I know we've talked about this on the podcast before, but it's really an incredible technology, and it is available to you no matter what framework you're using. I think it's one of the best-kept secrets in JavaScript development. WIL: Yeah. That's definitely great. That said, it does have some failings. It's great, but it can be a little slow sometimes, so we are eventually working on a BigTest network layer, another piece of the BigTest pie that you'll be able to sprinkle into your application, but in the meantime, praise Mirage. CHARLES: Yeah. We are going to be offering an alternative, or maybe collaborating on another version of Mirage. Hopefully we will be able to make this thing faster, so that it can use service workers and be used in a bunch of different scenarios. Just to recap: we've talked about a lot of different components, but over the past couple of years, these are the things that we've identified as being really key components of your acceptance testing and, really, your whole testing stack. How are you going to start and launch these things? How are you going to set them up and tear them down? How are you going to interact with the application as a user, both in terms of making assertions and in terms of taking action on behalf of the user, and still have it be maintainable, have it be resistant to flakiness, have it be performant? BigTest is the answer to that for those particular areas of the testing story, and for some of them we're using existing components: we use Karma, we use Mirage to date.
Those we did not develop, but where we see key pieces of that puzzle missing is where we started writing the BigTest solutions, so things like the interactor. Eventually, we are going to make BigTest into a product that you're going to be able to use kind of out of the box, just like you might install Cypress, where it's a very quick setup and we make all of the decisions about the components for you. But in the meantime, we're really trying to take our time, identify those pieces of the puzzle, and build the software component that fits each piece of the puzzle the absolute best, so that when they're polished, we can use them in a more comprehensive product. Things like Convergence, things like Interactor, things like BigTest React, BigTest Vue, and very soon, BigTest Ember. These are things that you can use today to make your tests just that much bigger and that much better, especially Interactor. It's been an incredible journey this past year as we've developed these individual pieces, and there's just going to be more goodness to come. WIL: Absolutely. Right now, I'm working on some validation-type API for Interactor that I'm hoping to land soon. That'll open up the possibility of hiding away those convergent assertions a bit more in your tests and just handling that automatically. It'll be pretty good. CHARLES: It's really exciting. Writing tests has gotten easier and more fun over the last year for us. I think we're already starting in a pretty good place. If you have any questions about BigTest, how would folks get in touch with us? WIL: We have a BigTest Gitter channel. You can find a link to that on the BigTest website: BigTestJS.io. Just ask us questions on Gitter and we'll try to answer them. CHARLES: And as always, you can ask us directly.
You can send email to Contact@Frontside.io or reach out to us on Twitter at @TheFrontside or you can actually reach out to the BigTestJS Twitter account directly and just call us on Twitter at @BigTestJS. Thank you very much, Wil. WIL: Thank you, Charles.
Industry News: NRFtech, Peter Schwartz, GetSpiffy fund raise, Google Express traction, Amazon Prime Day (Amazon press release, USA Today recap, Target results on Prime Day).
Listener Questions: Joan Abrams asks: How important are progressive web apps going to be for retailers? Any retailers doing this really well yet? West Elm example: https://mobile-beta.westelm.com/ Julie Acosta asks: Any updates on multi-touch attribution models/partners - who's doing online/offline (brick & mortar) right?
Upcoming Events: Jason is doing retail visits in Australia July 21 - 29. Ping him on Twitter if you're there. Jason and Scot will be doing shows from eTail East (8/6 - 8/9 in Boston).
Don't forget to like our Facebook page, and if you enjoyed this episode please write us a review on iTunes. Episode 137 of the Jason & Scot show was recorded on Thursday, July 19th 2018. http://jasonandscot.com
Join your hosts Jason "Retailgeek" Goldberg, SVP Commerce & Content at SapientRazorfish, and Scot Wingo, Founder and Executive Chairman of ChannelAdvisor, as they discuss the latest news and trends in the world of e-commerce and digital shopper marketing. Transcript Jason: [0:25] Welcome to the Jason and Scot show. This is episode 137, being recorded on Thursday, July 19th 2018. I'm your host Jason "Retailgeek" Goldberg, and as usual I'm here with your co-host Scot Wingo. Scot: [0:40] Hey Jason, and welcome back, Jason and Scot show listeners. Jason, how is your summer going? Jason: [0:48] It is going terrific. I am just starting sort of a heavy travel season for me, so I just got back from San Francisco and I leave on another trip tomorrow, but I feel like the big news this week is I read that Spiffy got some new funding, so congratulations.
Scot: [1:08] Thanks, yeah. So for listeners that may not know, I started ChannelAdvisor in 2001, moved to executive chairman in 2015, and then moved over to Spiffy full-time. I kind of experimented with it in 2014, and there was some overlap there, and yeah, it's been fun. So we are on-demand car care. We started with car wash, added oil change, and now we even have some products on the market, so we have a cool IoT device called Spiffy Blue that you plug into your vehicle, and there's a companion app that tells you all that's going on with your car. I'm passionate about digital services; I think that's kind of where the future is going, so I decided to put my money and time where my mouth is, and we just raised our second round this week, so it's good to get that behind us so we can keep servicing customers. Jason: [2:03] That's awesome, and you keep expanding in the cities too, right? Where are you now? Scot: [2:09] We're holding firm at five right now, because we have this explosion of fleet business. So we started out really with car washes, with consumers at office parks and residences, and then we added oil change, and it unlocked fleet. So we've been digesting as much fleet as we can get on our plate, and we will be adding more cities soon. We're in Raleigh, Charlotte, Dallas, Atlanta and Los Angeles now. Jason: [2:35] Awesome. I'm kind of sad, because I am assuming that the climate in Chicago means I'm not going to be next on your list. Scot: [2:42] Yeah, we've identified our first 50 cities, and unfortunately Chicago is not in the first 25, but we'll eventually get to you. Jason: [2:51] No, I will be waiting. Or I'll just move; I'll just get so frustrated at my lack of Spiffy that I'll move. Scot: [2:56] Yeah, yeah, those first five markets are good ones for you to retire to, so maybe we'll get you there. Jason: [3:02] Nice, I like the thought of retirement. Scot: [3:06] Weren't you just in San Francisco at the NRFtech event? Any interesting things you want to give us a trip report on there?
Jason: [3:14] Yeah, happy to. So it's an interesting show. NRF has had this show for a number of years, and it's mainly been focused on the CIO and CTO, so it sort of was a private party for a number of years. I think for a long time it was permanently located in Half Moon Bay, then it may have moved around a little bit, so it may have gone to Laguna Niguel one year and San Diego last year. But there was also a digital merchandising show that shop.org put on, and that show's been retired, and what they've kind of done with NRFtech is expand the tent to include all the digital business leaders and have more overlapping content for CIOs and business leaders. So, and I'm not sure if NRF agrees with this definition exactly, I would characterize this as sort of the second year [4:10] where the show has a broader scope, and I think it was interesting content, and I think all of us that attended got a lot out of it. So it was two and a half days in San Francisco of pretty jam-packed content, and you know, it's a smaller venue with a smaller group, so it's much more intimate; you basically have an opportunity to network with everyone else that attends. It's more what I would call a conference than a big exhibition show. Scot: [4:47] What were some of the key takeaways? Jason: [4:49] Yeah, there were a number of different topics, and there are too many speakers to go over every one. One that I was particularly looking forward to is Katie Finnegan; she runs Store No. 8 for Walmart, which is Walmart's incubation lab. I think she was the founder of a couple of startups that Walmart acquired, and she kind of laid out their methodology, and it's interesting to me because there have been a bunch of retailers that have had internal incubation labs, and I think it's fair to say, in the aggregate, that they haven't actually been that successful.
And the Walmart one, you know, these are wholly-owned LLCs, so Walmart is essentially going to acquire you and then incubate you, and there are all kinds of questions that come out of that: what does the exit look like for the founders and the management team, how do the finances work, what are the success criteria, all those sorts of things. So it's kind of interesting. I think Katie is pretty realistic about the track record of retail incubation labs, and she was pretty candid about [6:06] where they felt like they had the problem solved and where they thought there are still open questions to be answered. She kind of painted this maturity model, and she highlighted several companies that are in the Store No. 8 labs and where they are in that maturity model. [6:27] There are six stages of maturity, and the most mature thing they have is Jetblack, which is kind of at the third stage of maturity, which is this concierge personal shopping and delivery service in New York City, and most of what they have is even earlier than that. So that was interesting; it was interesting to hear her thoughts about incubation in general and then some of the specific initiatives Walmart had. You know, a bunch of guests from the show were speaking there, so Billy May, who's the CEO of Sur La Table, was on a panel talking about how to prioritize technical initiatives within the company. Rob Schmaltz, who's been on the show, from Talbots, did a couple of panels. [7:25] John Nordmark has been on the show for his Iterate incubation lab; he brought in a couple of the companies in his lab to kind of talk about new companies, and there were also some VCs that had some of their companies there. So there were kind of some interesting startups that we got to hear from and chat with; there's a company called Shadow research.
And there's a newer marketplace aggregator called Hingeto, which you may be more familiar with; I wasn't super familiar with them. [8:09] As you well know, there's tons of interest and traction in the marketplace space going on. And then the very last speaker was pretty interesting, and I think we need to get him on the show. It's a guy named Peter Schwartz, who's a futurist, and I think he's got like a boondoggle job for salesforce.com; he's the head futurist for salesforce.com. [8:40] Listeners may be more familiar with Peter Schwartz's work than they realize: he's the person the Matthew Broderick character was based on in the movie War Games. So he was literally a young hacker that broke into some government databases, and he was a consultant on the movie War Games. Several years later, as a futurist, he partnered with Steven Spielberg to paint a picture of what the future would look like for the movie Minority Report, so all those famous scenes of facial recognition triggering customized ads inside the Gap in Minority Report were ideas that he put together. He's got a ton of fascinating stories. He said, when we were brainstorming Minority Report, we were thinking about what the future would look like in 2040 or 2050, and he's like, basically everything that we had in that movie is now here, and it's 2018. So, you know, we may have gotten some of the ideas right, but we were way wrong on the timing. Scot: [9:48] I love those two movies. Joshua, spoiler. Jason: [9:54] Exactly, exactly. So he was interesting, and it was silly, but he's like, "I'm 72 years old, I'm the oldest employee at Salesforce, and the irony is not lost on me that the oldest guy at Salesforce is responsible for the future." Scot: [10:12] What did he predict? Jason: [10:18] So we talked about a bunch of things.
We talked about AI, and that's an interesting one, because I feel like there are kind of two camps. There are people, and maybe Elon Musk is in this camp, who think that AI is super dangerous and that the killer robots are coming, and Peter Schwartz was like, that's not what's happening with the AI that we have now, and he made his arguments for why we really aren't making very fast progress on general AI that has sort of generalized intelligence that could become sentient. [10:52] But he did talk about a bunch of the risks with the kind of AI that is emerging. He talked about, you know, in the near future, when all these AI algorithms are deciding what medicine and what medical treatments we qualify for, and whether or not we get credit, and all these things, will there be transparency about all those decisions, and what rights will we have, things like that. So his POV on AI was a little less daunting than some folks'. And we talked about the future of work. You have a lot of folks that think that all the jobs are going to get eliminated by all this automation and that we're all going to be sitting around without anything to do, and again, he kind of felt like that wasn't likely to be the case. He talked about how, hey, you may not have a lot of people sitting in trucks driving them around, but he envisions a future where, just like we have all the Air Force drone pilots sitting in Las Vegas, [11:53] flying the planes over Afghanistan and then going home to their families in Las Vegas, you could have a ton of truck drivers that are moving trucks through the commercial and residential streets and then getting them on the highway, where they drive autonomously. And some data points support his hypothesis that lots of new adjacent jobs emerge to replace the jobs that tend to go away. Scot: [12:20] Interesting. Any other highlights?
Jason: [12:24] Those were some of the things that jumped out at me. There was a lot of sort of topic-specific stuff, you know, topics about big data, topics about how to hire, how to structure innovation in companies, so there was a little something for everyone in there. But I feel like those are some good highlights, and I was a little distracted through the whole thing, because I feel like there was some other e-commerce stuff going on at the same time. Scot: [12:57] Yeah, it's been a busy week in the world of e-commerce, and since we're coming off Prime Day 2018, we thought we'd jump into the news. [13:25] Yes, let's start off with Prime Day, which got off to a rough start. I know you were tracking that, and I saw you quoted at least two or three times out there in the press. What did you make of that? Jason: [13:36] Yeah, so I'll start out by saying I was completely surprised and caught off guard. I feel like Amazon has had a shockingly good record of reliability on all these peak days, and so if you had asked me up front to make a bet on which retailers were likely to suffer an outage during a peak event, I would not have picked Amazon. So I was somewhat surprised that they had this outage right off the bat. We'll talk about what the impact of the outage was in terms of hurting the revenue, hurting their Prime subscribers, hurting sales of their first-party products.
Scot: [14:16] Yeah, one thing that was interesting: on Twitter, [14:21] people discovered that if you used the Smile interface, you know, where anything you buy donates money to a charity, and I had previously set that up, I was able to go through Smile. The site was down for me at first, but I was able to go to Smile and get it to work, which made it feel more like a networking thing. I never found out what happened, but it was unusual to me that there were little slices that were working. Jason: [14:46] Yeah, for listeners that haven't been responsible for sort of peak availability, I'll highlight: Prime Day is perfectly designed to be the worst-case scenario from an IT systems standpoint. Number one, you have all of this traffic coming at the same hour, so you have this huge peak, which, if you were really worried about uptime, you would try to do something to spread out that demand more. But even worse, with all of these short-lasting deals and all the personalization on the Amazon site, very little of the Amazon Prime Day experience can be cached. So there are all these things where you would kind of compromise the customer experience to make it easier to load on the servers if you were really worried about availability, and Prime Day is a perfect storm of all the best customer experience practices that are extra challenging for the IT guys. So obviously, it did come to bite them somehow this year. Scot: [15:54] Yeah, well, we'll see if they get better. That's the thing you learn about Amazon: when they take lumps on this stuff, they learn a lot and then they get better. Jason: [16:02] So one thing that I'll be curious about, and hopefully some of our friends that are
marketplace sellers on the site may be able to share some insight here, but if you had a Prime deal in one of those early slots and you were disrupted, I'll be curious to see if Amazon does any kind of make-good for those vendors, or if they just missed their window, or how they're handling that. Scot: [16:27] I chatted with three or four people, not specifically about this outage thing, and they had deals going at that time, and they said that their deals sold out, but even faster than they thought they would. So it's weird. And I saw a heat map that showed there were only certain cities that were impacted, so it's kind of an anomaly to me. To my knowledge, AWS itself wasn't down, so it's definitely specific to Amazon's own usage of their infrastructure, which kind of points to your data theory, or some kind of internal thing, just their network, their slice of the whole AWS network. It is kind of a mystery. Jason: [17:07] It'll be interesting to see if any more info leaks. As a side note, I think there were also some reports from Amazon Flex drivers that the Flex app was down, which also sort of supports my theory that maybe there was some product data problem.
Scot: [17:25] So despite all that, estimates are out that they did about 3.5 billion, and I believe that's up from 2.4 last year. Amazon doesn't release data; they frustratingly give you little clues about how the day went. They'll talk in terms of growth in unit numbers, -ish kinds of things, so those are all estimates of how big it was. It was six hours longer this year, so it's a little apples and oranges in that comparison. And then you and I were kind of talking in the pre-show, in the virtual green room, that it's in more countries this year. For sure, Australia is in its first year of Prime Day, so hopefully they had an exciting time down under. You can give us a full report, since you're going to Australia soon; you can kind of tell us how they felt about Prime Day. Jason: [18:14] Exactly. Just to answer your question, I booked the trip; I leave tomorrow. Scot: [18:18] Good, good thinking. My advice is to make sure you download a lot of movies. And then, yeah, I know you live on top of a Whole Foods or something like that. Did you happen to go into a Whole Foods during the Prime Day excitement? I know a lot of people were trying to take advantage of some deals; there was something where on Prime Day you got $10 off in the store and online if you wanted, that day. Jason: [18:44] Yeah, so I did not get to take advantage of it myself. I do live in very close proximity to a Whole Foods, but I was of course down in San Francisco, so I didn't get to experience it first-hand. But all the reports that I've seen are that it probably was very favorable in the Whole Foods, in that the month leading up to it they had really done a good job of starting to roll out Prime benefits, and so they kind of trained a bunch of
the Whole Foods shoppers that were also Prime members to get the Whole Foods app ready, and arguably some of the best financial deals that they offered were there: if you gamified everything at Whole Foods, you essentially could get $30 in extra cash, which is not an insignificant amount on a purchase of that size, so that was pretty substantial. Scot: [19:35] It's like 2% off my average Whole Foods checkout. A couple of observations from my side on Prime Day. You know, everyone focuses on the sales, but I've kind of come to believe that, yeah, that's part of what Amazon is going for there, but the real benefit, which probably outweighs the, say, three and a half billion, comes from the juice that flows into the ecosystem elements that Amazon has because of [20:08] Prime Day. So first of all, you have a bunch of Prime sign-ups. They don't reveal that, obviously, but in the past they've said things like tens of millions of sign-ups from Prime Day, those kinds of numbers. This year they're, as you point out, leveraging that Whole Foods intersection, so there are reports that say prior to the acquisition there was only 40% overlap between Whole Foods shoppers and Prime members, so that's a huge audience that they can get over into Prime, and they're obviously focused on that. I also saw a report that they announced a million connected-home devices were sold. On the show here a while back, we would
[20:47] pat ourselves on the back for being early on this: we felt like Echo and all of its associated devices, the Alexa-powered devices, were a really big opportunity for Amazon, and now they're selling a million units in one day, or 36 hours, which is pretty amazing. Another one that we've been pretty early on is talking about the ads. We think Amazon ads are going to be pretty big, and we're seeing CPGs and brands really spend a lot of money there. Everyone I talk to at a brand had really dialed up their ad spend during Prime Day, so that's a really cool catalyst that Amazon has for getting people to really come onto that platform, experience it on a day that's pretty crazy, and hopefully, from the Amazon side, get addicted. Kind of tangential to that, before Prime Day, one of the, I think, smartest Wall Street guys on this, John Blackledge at Cowen, raised his ad number for Amazon to 36 billion in revenue over the next five years. [21:53] So that's almost like a Facebook-sized, at-scale business that he believes Amazon will be building over the next five years. Obviously that's Facebook today, and in five years Facebook will be bigger, but I'm just trying to give a little scale to it, because that's just a kind of insanely big number.
[22:11] Then the last couple of things, around this idea of the ecosystem benefits aside from the sales: third-party sales were reported to grow 89% — this is from CNBC's numbers — and orders were up 69%. In some years we've seen Amazon Prime Day traffic get absorbed by Amazon's first-party and owned products — private labels and that kind of stuff. This year they did a really good job of splashing it all through the marketplace, so anecdotally we talked to a fair number of third-party sellers that had a very robust Prime Day, and there's that CNBC data. And then, on the brand side, it was interesting to see the brands that participated, because participating got you a

[23:00] spotlight on that home page of deals. You saw a lot of this around apparel, so it was Under Armour, Wrangler, Champion, Columbia, Calvin Klein, Adidas, and Reebok. And then, interestingly, it's also who didn't play: Nike — to my knowledge I never saw them, and I've read a lot of the reports from folks that track this pretty closely — did not participate in Prime Day in a meaningful way, and neither did Skechers, Hanes, Converse, or Ralph Lauren. So those are some other interesting aspects of Prime Day to think about. The sales are good — you can't sneeze at over $3 billion — but I think the real benefit, probably worth another $5 or $6 billion, comes from these ecosystem impacts.

Jason: [23:48] Yeah, for sure. And I don't know if you saw it — I think it was actually in the Amazon press release — but one of the things they did claim is that it was the biggest single day ever for Prime sign-ups. They added more people on Prime Day than they ever had before, which isn't surprising.

Scot: [24:06] Yeah, so your guess is the last highest was last Prime Day?

Jason: [24:09] I think they did say that about last year too, so they exceeded last year's already-good numbers on Prime sign-ups — just to sort of highlight your point.
Another thing they put in the press release that's also kind of fun is that they highlight the best-selling products in every country, and I think it's always interesting to look at the trends there. Almost across the board, in every country, the Amazon products — the Fire and the Echo stuff — are the best sellers. But for non-Amazon stuff, in the US and Canada the Instant Pot is the best-selling item, and there's a bunch of other countries where the best-selling item is a high-ticket consumer good: there are some countries where TVs sold particularly well, and video game platforms were really good sellers in a bunch of countries.

[25:13] And expensive SD memory cards were big sellers in a bunch of countries. But what's interesting is there's another whole set of countries with different shopping behaviors, where you see everyday-essential consumer packaged goods being the big sellers — some food item, or something like cleaning soap. So there's a kind of bifurcation: countries where grocery shopping is still predominantly brick-and-mortar and they use e-commerce for these high-ticket items, and then countries where all that shopping has shifted to e-commerce, so you see things like laundry detergent being the best Prime Day item — in Japan, for example.

Scot: [26:09] Yeah, they sold over 300,000 of those — what are they called — cooking pots. That's crazy.

Jason: [26:14] Yeah, the Instant Pot is a crazy phenomenon in and of itself, online and offline, but yeah, it's a fast mover on Amazon.
Scot: [26:24] It was also interesting to see what other retailers did on Prime Day. It's kind of funny — Amazon has them in check, kind of a checkmate position: do they ignore it, or do they participate? Do they mention Amazon? What did you see there?

Jason: [26:40] So tons of retailers have sales, and to me it's a no-brainer that you should have a sale on Prime Day. It's so in the ethos of e-commerce shoppers now that it's a shopping day that you really missed out if you didn't have some promotions to push people over the edge. And I would argue that especially since that first hour had the outage — if you went to Amazon and you hit a glitch, you're one click away from buying something somewhere else. So I think other retailers can definitely benefit — not to the same level as Amazon, but there definitely is a halo effect for the whole e-commerce industry. Without naming any names, I can tell you I had probably half a dozen clients that had their top sales day of the year on Prime Day. It is absolutely a big shopping day that has a broader effect than just Amazon. I think the interesting strategy question is: do you call it a back-to-school sale, do you treat it like a Cyber Monday — do you just have it be a promotional day, or do you overtly counter-program against Amazon and call it a Prime Day sale? That potentially could help you from an SEO standpoint, but you're also sort of

[28:03] promoting your competitor and acknowledging their success — and psychologically you're celebrating their birthday, ironically enough. So I think we saw some retailers overtly have Prime Day sales, and some retailers were more subtle about it. And then, when the outage happened, what was going around on Twitter was that Office Depot sent out a sort of smart-aleck email telling shoppers that if they're tired of looking at pictures of dogs, they could come shop at Office Depot. I have to be honest — that just feels like tempting fate, and sort of kicking a monster that you don't want to kick.

Scot: [28:47] Yeah, it's like doing live demos — Murphy's Law is just begging for attention when you do that. Any other Amazon things you want to cover before I move on?

Jason: [29:02] No, I think those were all the big things that jumped out at me.

Scot: [29:11] Cool. So it was a couple of weeks dominated by Amazon, but on the non-Amazon side, one thing I wanted to talk about is kind of the biggest news in marketplaces that's really gone unnoticed. We've talked about it a little bit on the show, but I wanted to circle back. With some confusing marketing, Google, within the Google Express system, has this new offering called — let me make sure I get the name right — Shopping Actions, and it's essentially where you can buy things straight from a mobile ad unit.
[29:46] It's supposed to be a marketplace where you can check out right on the Google page, so that's interesting. They are adding a ton of retailers to the program, and — I guess it's a little self-promotional, but — ChannelAdvisor is a partner of theirs, and a large number of those folks are coming through our connections into that new marketplace. Google has always been flirting with marketplaces, and it feels like they've got a little more religion on it this time. This is the time of year when everyone's really ramping into Q3, obviously, and as we get to the end of Q3 we'll be full throttle talking about holiday, so I think one of the interesting things to watch this holiday is how serious Google gets about this. Plus we've seen some retailers — Urban Outfitters has launched a marketplace, and there's a bunch of other things going on out there. So there are really interesting things happening under the radar in the broader marketplace world that just aren't getting covered or talked about because of all the exciting stuff Amazon's doing with things like Prime.

Jason: [31:00] Yeah, for sure. And on the Google offerings, there still is a little confusion out there, because there are a couple of different commerce statuses you can have with Google. Obviously, Google still has this Google Express system, where they'll accept orders from a bunch of retailers and then a Google employee will actually pick up that order from the retailer and deliver it to you, and as you pointed out, there's been some recent traction with new retailers joining that ecosystem. Separate from that — though they can be married together — is this Google Shopping Actions, which is

the most low-friction way Google has ever had to actually buy a product out of a Google promotional spot. It originally launched as a pilot, and it was kind of hard to get into the program; now they've opened it up, and I think we're seeing lots of people, for certain types of products, take advantage of those Google Shopping Actions. I still feel like the industry needs a little more expertise around that offering — I'm always surprised how unfamiliar people are with it. And then there's this even weirder status:

[32:21] there are unique partnerships between retailers and Google to sell products through the Google Home device. Walmart and Target, for example, are not in Google Express at the moment, but they are selling their products through the Google Home ecosystem. So between all that, I think there's more commerce activity in the Google ecosystem than we've ever seen before.

Scot: [32:47] Yeah, and it's largely not getting talked about, so I guess in some ways that actually gives Google some space to go and experiment without a thousand spotlights on them while all this Amazon stuff is happening. So — we have had a very busy summer, so we are a little delinquent in getting to some listener questions. We have a little bit of time here on the podcast and want to jump into a couple that have been lingering out there, and Jason, I get to kick back and relax on these because they're both for you. First we have Joan Abrams, and Joan asks: how important are progressive web apps going to be for retailers, and are there any retailers doing this really well yet? So maybe let's make sure we start at the top: what's a progressive web app, how does that differ from a native app or a responsive site and all that kind of stuff? Then you can jump into the importance, and then any retailers that are doing a great job.
Jason: [33:47] So, great question. "Progressive web app" is a slightly unfortunate name, because it makes it sound like this is an alternative to a traditional native app. And while it potentially can be that, I would argue it's much more than that. Essentially it's a set of standards for building a mobile web experience that runs natively through the web browser.

[34:14] They've been supported by Google Chrome, which is a very important mobile browser, for some time, but they were very poorly supported — or not at all supported — in Mobile Safari, which is the other hugely important mobile browser. What's exciting is that as of the last iOS release, there's now support for progressive web apps in both Chrome and Safari, which covers essentially the bulk of shoppers on mobile. So there's a bunch of tools you can use to build a website, and then there's this feature called service workers, which essentially gives you the capability to add native-app-like functionality to the web experience and have it even function offline — have it work even if you don't have connectivity. So, for example,

[35:19] Google can have a Google Maps or Gmail experience that lets you calculate directions in Google Maps, or read and compose emails, even if you're in airplane mode, by leveraging these service workers. And what's super important and unique about progressive web apps is that they're really designed to be performance-efficient. By following the PWA standards you end up with a mobile web experience that's much faster and much more responsive than

[36:04] traditional responsive mobile web designs. The site performs much faster, and it can have a richer feature set — essentially almost all the features you can have in a native app. So one thing you could do with it is build a native-app-like experience that doesn't require the App Store.
That is a huge thing, because a lot of customers don't know how to download apps from the App Store, or they have app fatigue. If you're a retailer and you build all these cool functions into your app, nobody ends up using them, because apps have such a small reach. But if you put those same features in your mobile web experience using PWA, anyone who goes to your URL instantly has access to all of them. So it's a great alternative to spending a fortune on building a native app, but it's also just the best way to build a mobile web experience, because the performance is so much better. We're seeing that the early retailers that have adopted progressive web apps are really killing it — their mobile sites are so much faster, and they're getting much higher conversion rates.

[37:13] So I would highly encourage any retailer to put on the roadmap right now that they should be redoing their mobile experience as a progressive web app. And I have to be honest: most have just finished building a proper responsive mobile site, and they feel like they did a bunch of work and want to feel like they're done. Not too many of them are thrilled to hear that they need to start a new project to rebuild it using a new set of technologies. But the early retailers are getting really good results, so it really should be high on your roadmap, particularly as all your customers shift to mobile.

Scot: [38:05] How do you — so I know we obviously write a lot of this stuff at Spiffy — you have Swift and Objective-C for a native app, and then there are a lot of interesting JavaScript-based libraries for writing responsive sites or even apps, like React and those kinds of things. What do you write a progressive web app in?

Jason: [38:31] Yeah.
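The offline capability described above comes from a service worker intercepting the app's network fetches and answering them from a local cache. As a rough sketch of that cache-first idea — with plain `Map`s standing in for the browser's Cache API and `fetch()`, since the real `self.addEventListener('fetch', …)` service-worker API only runs inside a browser — the decision logic looks something like this:

```javascript
// Cache-first strategy sketch: serve a stored copy when we have one,
// otherwise hit the "network" and remember the response for next time.
// `cache` and `network` are Maps standing in for the Cache API and
// fetch(), so this illustration runs anywhere.
function cacheFirst(cache, network, url) {
  if (cache.has(url)) {
    return cache.get(url); // offline-capable: no network round trip needed
  }
  const response = network.get(url); // stand-in for fetch(url)
  if (response !== undefined) {
    cache.set(url, response); // populate the cache for future offline use
  }
  return response;
}
```

Once a resource has been fetched once, the function keeps serving it even if the "network" disappears — the same property that lets a PWA keep working in airplane mode.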
So it is JavaScript — a framework with a much more constrained and specific set of libraries. Often when you're writing JavaScript you can pick from all these different development library sets, and only a small subset that have been highly optimized are allowed in a PWA, but it's mostly the usual JavaScript-style development environment.

Scot: [39:03] Are there any retailers doing a really awesome job at it that you want to highlight?

Jason: [39:09] So there are. Again, it hasn't for the most part been the huge retailers that have completely embraced PWA yet. There have been some retailers that have done a PWA as a separate, standalone site — as a replacement for an app, in addition to a mobile website. But some of the retailers that have gone full PWA have gotten some really good results. One of the biggest retailers out there that's done it, to my knowledge, is West Elm, and then there are some smaller specialty retailers: I think Tommy Bahama, I think Snapdeal is a full progressive web app now, I think Payless has a progressive web app. Some of these never had a good mobile site before and this is their first one — Lilly Pulitzer, Lancôme, some folks like that. And if any of the listeners have a favorite progressive web app retail site, I would love to hear about it as well.

Scot: [40:17] Yeah, so let's pick on West Elm. Just to be clear: you don't download their app — you just go to their website, and you get a much more responsive, much more mobile-friendly experience than what they previously shipped?
Jason: [40:35] Exactly. You go to westelm.com — if you go to westelm.com in a desktop browser, you get a desktop experience; if you go to that same URL on a mobile device, you get a mobile version of that experience, and the mobile version you'll get from West Elm is a progressive web app. And I'm compelled, since you're calling me out on it, to highlight that the West Elm one may not be in full deployment yet — I think there may be some testing going on, so if you go to westelm.com, you may or may not get the progressive web app version. There's a different URL you can use to guarantee you get the West Elm PWA version — I'll put the link in the show notes, but it's mobile-beta.westelm.com, and that bypasses the test and guarantees you get the PWA version. They've shared some public data from the beta version that's in this A/B test: the folks getting the PWA version of the experience are spending 15% more time on the site, and they're spending 9% more revenue than the folks on the traditional responsive-design version of the site. So pretty meaningful.

[41:53] On non-retail sites, there's a ton of big sites that have moved to progressive web apps, and across the board the performance metrics — the page weight, the load times, and the time to interaction — are wildly better for progressive web apps. Whenever we see those performance numbers improve, bounce rate goes way down and engagement goes way up. So I think there's both a big lesson about performance from the general market, and now we're starting to see some pretty tangible retail results.

Scot: [42:27] Cool — thanks for the super-secret insider information there. When you go to a retailer's site on mobile, is there some way to see that it's a progressive web app? Is there some tell, or do you just have to know?
Jason: [42:41] Not a very convenient one. You literally have to either view the source — there are some telltale headers that will tell you it's a progressive web app — or there are now tools from companies like Ghostery and BuiltWith that will scan the site you're on and give you a report of the underlying technologies being used, and they'll tell you if you're on a PWA site. Frankly, they're not that convenient to use on mobile; they're much more convenient on desktop. The only big user-experience tell is if you're not in an app and you're given a fully interactive site even without connectivity — then it's a pretty safe bet that you're benefiting from a PWA.

Scot: [43:35] Cool. So, last question, and this comes from Julie Acosta. The question is a two-parter: any updates on multi-touch attribution models and vendors, and who's doing online-to-offline attribution right? Good old attribution question.

Jason: [43:53] Yeah, I do love a good attribution conversation. So there are an ever-increasing number of specialized tools that do multi-touch attribution models, but what has me most excited is that much better tools for multi-touch attribution are starting to evolve in our standard analytics packages. To my mind — and I apologize to my friends at the other analytics vendors — I feel like the Google Analytics implementation of multi-touch attribution is maybe a little further ahead, but IBM, Adobe, and Google all have multi-touch attribution models built in. In Google's case that used to be available only in their expensive premium paid product, and now it's made its way down to the free version. The big evolution in these multi-touch attribution models is that you used to have to pick a model, so you could say:
[44:55] "Hey, I'm interested in a weighted model, and this is how it works," or "I'm interested in first touch," or "I'm interested in last touch," or "I'm interested in a decay model, where touches further from the conversion get less weight." You had to manually specify the model, and then you could use the analytics tool to look through the data with that lens. Now multi-touch has become machine-learning enabled,

[45:30] and essentially the tool will tell you which multi-touch model — from among the pool of models the tool supports — best fits the data set that you have. So you can actually use machine learning to refine the multi-touch model you use for your particular data, and you can embrace that for your enterprise. That's pretty cool.

[45:51] "Multi-touch" is a slightly complicated word, because I would argue there are kind of three versions. There's online-to-offline, which is: "Hey, I saw some digital ads on Facebook and Google and then I walked into a store and bought something — how do I attribute that in-store purchase to those digital touches?" I'll call that multichannel. There's multi-touch: "I saw an ad on Facebook, and then I saw an ad on Google, and then I saw an ad on Abercrombie & Fitch, and then I bought the shirt — which of those three marketing vehicles gets all the credit, or most of the credit, and how do you divide it?" That's the most traditional version of multi-touch that the tools are designed to support. And then there's multi-device: "I started on a tablet, then I moved to a phone, and then I ultimately made the purchase on a laptop — how do I do attribution across those devices?" So my answer before was mostly about

[46:55] digital multi-touch. For multi-device, we're starting to see some interesting solutions.
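The classic models listed here — first touch, last touch, linear, and time decay — can be sketched as simple weighting schemes over an ordered list of touches. This toy function is illustrative only, not any vendor's actual implementation (real analytics products fit and refine these weights against conversion data), and it uses the common time-decay convention in which touches closer to the conversion earn more credit:

```javascript
// Toy attribution: given n ordered marketing touches, return the
// fraction of conversion credit each touch receives under a model.
// Weights are illustrative; analytics products tune these from data.
function attribute(touches, model) {
  const n = touches.length;
  switch (model) {
    case 'first': // all credit to the first touch
      return touches.map((_, i) => (i === 0 ? 1 : 0));
    case 'last': // all credit to the last touch before conversion
      return touches.map((_, i) => (i === n - 1 ? 1 : 0));
    case 'decay': { // touches closer to conversion weigh more
      const raw = touches.map((_, i) => 2 ** i);
      const total = raw.reduce((a, b) => a + b, 0);
      return raw.map((w) => w / total);
    }
    default: // 'linear': equal credit to every touch
      return touches.map(() => 1 / n);
  }
}
```

For three touches, `attribute(['facebook', 'google', 'display'], 'decay')` returns weights of 1/7, 2/7, and 4/7, while `'first'` returns 1, 0, 0.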
Adobe in particular has this interesting co-op model where they're building a shared database — among all the companies that use Adobe Analytics — of all the device IDs they ever see. When you have a user on a particular device, you can go to the co-op and ask what other devices that same user uses, and so you can use that shared data to identify the same user across multiple devices. Historically, only the digital tools most likely to have the user authenticated could recognize the same user across devices, which is why Google and Facebook have had a huge advantage over the rest of the world in multi-device attribution. And then for online-to-offline, we're also starting to see much better tools — interestingly, from both Facebook and Google, who want you to spend more money on digital advertising.

[47:56] It's very important to them that you be able to understand online-to-offline attribution, because a lot of the purchases you make after seeing their digital ads happen in a store. So they've actually built some pretty good tools that let you upload your offline data and then do online-to-offline attribution, and we're starting to see that a lot more commonly. But when you ask which retailers are kind of best-in-class — doing it right — it's the retailers with an unfair advantage. It's REI, which has 95% of its purchases coming from members, so they have capture of that email address, of that member number, every time you do a transaction online or in-store. It's Sephora, where 95% of their customers are in the customer affinity program, and they're able to do very effective online and offline attribution. It's Starbucks, where a high percentage of transactions are done with their mobile pay-and-go system — they're doing the best job of online-to-offline. The more traditional retailers, where a lot of the in-store purchases are anonymous, are all getting better at online-offline attribution, but they're only able to do it for a much smaller set of their data.

Scot: [49:11] Well, thanks to Joan and Julie for asking those questions — I think we're caught up on listener questions. We regularly post these over on our Facebook page, so just go to Facebook and search for "Jason and Scot Show," or go to jasonandscot.com and you'll be redirected over there.

Jason: [49:30] Awesome. Before we sign off, I did want to highlight a couple of upcoming opportunities to meet some listeners. We alluded to it earlier in the show, but I am leaving Saturday for a week of retail visits in Australia. I have a bunch of meetings booked with retailers in Sydney and Melbourne, but if you're a listener who happens to be in Australia, be sure to ping me on Twitter sometime in the next week, and if schedules permit, I would love to meet up. I'm really looking forward to learning a lot more about that market and sharing some of the learnings we've had from more mature Amazon markets with Australia, which is the newest Amazon market. So I'm looking forward to that trip — that's going to be fun — and I know my family is looking even more forward to having me gone for a week. And then early next month, August 6th through 9th, you and I are going to be live and in person together at the eTail East show in Boston.
Scot: [50:40] Yeah, it's going to be a lot of fun. Since we'll be in the Founding Fathers' hometown, I plan on wearing a white wig for that one.

Jason: [50:52] Yeah, and as anyone who's met me in person knows, I just wear a white wig anyway, so that'll be the normal look for me.

Scot: [50:56] Look for the two founding-father types sporting the triangle hats, too.

Jason: [51:03] Exactly. And with that, it's happened again — we've used up all of our allotted time. But if you have any further questions, or we got something wrong on this week's show, we'd love to hear about it, so let's keep the conversation going on Facebook or Twitter. And as always, if you got any value out of this show and you want to reward us, the best thing you can do is go to iTunes and give us that five-star review. If you really didn't enjoy the show, the best way to give us that feedback is to visit Scot in person at his home — that's appreciated as well.

Scot: [51:36] Yeah, absolutely. Well, thanks for joining, everybody — bye!

Jason: [51:39] Until next time — happy commercing!
After the cliffhanger left in Episode 62: UI for U and I, we follow up with a short discussion about how we specifically do UI Testing at The Frontside in Austin, Texas. Resources: Tweet that led to this discussion Unit Testing Acceptance Testing Ember CLI Mirage Percy Test-Driven Development Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode #68. My name is Charles Lowell. I'm a developer here at The Frontside and podcast host-in-training. I'm here today with Jeffrey and Elrick, two other developers here at The Frontside. We are going to carry on where we left off back in Episode 62. There was an implicit promise to talk about the way that we do UI testing. JEFFREY: We had a cliffhanger. CHARLES: Yeah, we did. It ended with a cliffhanger and someone actually called us on it, which I hate, because they're making more work for us. But Ian Dickinson from Twitter writes, "Hey, you guys promised to talk about how you do UI testing at The Frontside but never actually delivered on that promise." We're going to try and make good on that today. JEFFREY: We like to resolve our promises. CHARLES: Oh! [Laughter] CHARLES: You've been on vacation for a week, Jeffrey, and I forgot what it was like to have you in the office. JEFFREY: Not enough code puns. CHARLES: Oh, code pun. There you go. It's like CodePen except all of the code has to make puns. I like it. The Internet is awash with people bloviating about testing, so we figured we'd take our turn at it. We promise to keep it short. We'll keep it focused, but it seems like a value that we do have, so we might as well talk about it a little bit. You guys ready? JEFFREY: Let's talk about testing. CHARLES: I think one of the best things to do is to use something concrete — not just to talk about abstractions, not just to talk about things that might be — because we're actually starting a project here in three days. It's going to kick off on Monday, and testing is going to be a key part of that.
Why don't we talk a little bit about what it's going to look like as we lay the groundwork for that project? JEFFREY: As we start this project, the very minimum baseline that we want to get immediately is acceptance tests in the browser. We want to make sure that when you fire up this app, it renders, and you get feedback on the basic functionality of the app immediately. As we're building features on top of this app, that's when we bring in unit tests, as we say, "We're building this new component. We're building this new feature that's part of this app. We're going to use test-driven development and unit tests to drive the creation of that." But ultimately, our test of quality for the app — our assurance of quality over the long term — comes from the acceptance testing. CHARLES: People often ask the question, "When is it appropriate to write those unit tests? When is it appropriate to write those acceptance tests, and how much time should I spend doing each one?" Personally, when I was starting out with testing many, many, many years ago, I really, really liked unit tests, and I liked developing my code based around unit tests. The thing that I liked about them was that they were fast. The entire test suite ran in a matter of seconds or minutes. JEFFREY: And you get coverage. CHARLES: Yes. JEFFREY: Like, you get your coverage numbers up. It's like every line that I wrote here has some kind of code coverage. CHARLES: Right, and it feels really good. I also think that unit tests really are great for mapping out functionality, in the sense of getting an intuitive feel for what it's like to use a unit before it's actually written. You get the experience of, "This is actually a terrible API, because I can't write the test for it" — so obviously it's not flexible and it's really mungy — so I really, really enjoyed that. The thing that I hated most was acceptance tests. They're hard to write.
They were slow to run, and it seemed like when they would break, it was not always clear why. JEFFREY: Wait, so we were just singing the praises of unit tests. What's wrong with them? CHARLES: Part of it is the way that you conceive of a test: what are you actually doing? Think of tests not so much as something that's there for regression, or something that's there to drive your design — both of which are very, very important things — but more as just measurement, taking a data point: what is it that I want to measure? In the case of a unit test, I want to measure that this library call does exactly what I think it's going to do, so I can take that data point, put it on my spreadsheet, and say, "I've got that base covered." The thing is that an acceptance test measures a completely separate thing — and by acceptance test, we're talking about testing as much of the stack, your entire integrated application, as you can. Oh, boy — don't get me started on terminology. But when you have an acceptance test, what you're measuring is, "Does my application actually function when all the pieces are integrated?" You're like a scientist in the laboratory: you're making an observation and you're writing it in your notebook. Yes or no, does it work? In the same way that you take a perception when your eye perceives a photon of light — you can say, "That thing is there"; when it bounces off the chair in the room, I know that the chair is there. If what you want to measure, what you want to perceive, is "does my application in fact work for a user," then an acceptance test is the only thing that will actually take that measurement. It will actually let you write that data point down in your notebook and move on. I think that the problem with unit tests — well, there's nothing wrong with them — it's just that they don't observe your integrated application, and that means you're blind.
It's only part of the story, so that's why we find that acceptance tests really are the highest value tests, even though they suck to write, even though they take a long time, and even though, when they break, they sometimes break at some weird integration point that you didn't see and that's really hard to diagnose. But you know what? That same integration point is a potential failure in your app, and it's going to be just as weird and just as esoteric to track down. That's my take on it. One of the things that you were describing, Jeffrey, is that we set up those acceptance test suites. Why do we set them up? What, because they're part of like a process? JEFFREY: Yeah, that process is that we start at the very beginning: the very first day of a project is when we like to set up continuous integration, make sure the repo is all set up, make sure the deployment pipeline is set up. CHARLES: What do you mean by deployment pipeline? JEFFREY: The deployment pipeline looks like, "We've got our repo and our version control in place, and from there, any time there's a push to the master branch, any time a pull request gets accepted, we build out on a continuous integration server, whether that be Travis or Circle or any of the solutions out there. We want to run that entire suite of acceptance tests, and whatever unit tests we have, every time code is pushed anywhere past a person's local box." CHARLES: Right, a push to master is tantamount to a push to production. JEFFREY: Yes, that is the ideal. Any time the code has been validated that far, yes, this is ready to go to production, and our acceptance test suite validates that we feel good about this going out. CHARLES: Yeah, so the thing is, if all you have is unit tests, you have no way of perceiving whether your application will actually work in a real browser, with real DOM events, talking to a real server. Do you really want to push to production?
[Laughter] JEFFREY: We had some client work recently where we do have a lot of acceptance test coverage and actually a lot of unit tests too. We changed the browser running on the CI server from Chrome 54 to 58 and uncovered a lot of bugs that the acceptance test coverage found, bugs that unit tests just would never have revealed as problems for the end user. CHARLES: Now, why is that? Let's take it down a layer now. When we do an acceptance test, what does that actually mean? What are we describing in terms of a web application? JEFFREY: We're describing that we have an application that we're actually running in a real browser. Usually, you're going to have to have some kind of stubbed out things for authentication to make sure you can get around those real user problems, but you're actually running the real application in a real browser -- not in a headless browser, but an actual browser with a visible DOM. CHARLES: Yeah, not a simulated DOM. JEFFREY: Exactly, so you can surface the kinds of real problems that your customer will actually have. CHARLES: In terms of interaction, you are not sending JavaScript actions. JEFFREY: No, you're firing DOM events. You're saying, "Click this button," or, "Put this text into this input," and the underlying JavaScript should be pretty opaque to you. You shouldn't be calling into the JavaScript directly in any acceptance test. You want to rely solely on what's on the DOM. CHARLES: Yeah. I would say with the ideal setup for an acceptance test, you should be able to have your acceptance test suite run and then completely and totally rewrite your application -- let's say rewrite it from Ember to Angular or something like that -- and you don't have to change your acceptance test suite at all. JEFFREY: Yeah, usually the only binding you have is IDs or classes or whatever you're using to select elements, and that should be it.
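To make the "DOM events only" idea concrete, here is a minimal sketch of the kind of helpers such a test might use. The helper names are hypothetical, not any particular library's API; the point is that the test touches nothing but the DOM:

```javascript
// Hypothetical acceptance-test helpers: drive the app purely through
// DOM events, never by calling the app's JavaScript directly.
function click(el) {
  // Dispatch a real bubbling event, the same shape a user's click produces.
  el.dispatchEvent(new Event('click', { bubbles: true }));
}

function fillIn(el, value) {
  // Set the value, then notify the app the way the browser would.
  el.value = value;
  el.dispatchEvent(new Event('input', { bubbles: true }));
}
```

In a real suite these helpers would be called with elements looked up by ID or class, which, as noted above, is the only coupling the test keeps to the implementation.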
CHARLES: Right, you're interacting with the DOM and that's it. So now, in terms of the server: most modern web apps -- in fact, all of them, certainly in the kind of swimming pools in which we splash -- have a very, very heavy server component. There are a lot of requests just to load the page, and then throughout the life of the page, it's one big chatterbox with the server. What do we do there? JEFFREY: That's when we need to pull in a tool that can mock requests for us. The one that we fall back on a lot is Ember CLI Mirage, which is built on top of Pretender. It's a really nice way to run a fake server, basically. I would even take that a step further and would love for the tool to be another server completely that's running on your local box or your CI box or whatever, so that you get the full debugging available in developer tools and you actually see those requests as if they were real ones just coming from a fake server. CHARLES: Right. As Jeffrey said, right now we use a tool called Mirage, which for our listeners in the Ember community, they know exactly what we're talking about. It's the gold standard. What it does is it actually stubs out XMLHttpRequest, which is what most of the network traffic in your browser after the initial load is encoded in. It's got a factory API for generating high quality stub data, so your code is actually making a [inaudible] request and it's all working. Unfortunately, because it's a stub, as you said, none of the developer tools work. But there's been talk about making Mirage use service workers so that you would have all of that running. You could still run Mirage inside your browser; it would be a different process. I think service workers run off the main thread. That actually is very exciting. It's a great tool. I think it's a great segue to talk about one of the things that we love. This is absolutely a mandatory requirement for us when we start a project: to have acceptance testing in place.
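For readers who haven't seen it, an Ember CLI Mirage setup in the spirit of what's described is just a file of route handlers that intercept the app's XHR traffic and answer from an in-memory store. Roughly (the routes here are illustrative, not from any real app):

```javascript
// mirage/config.js -- illustrative sketch. Mirage intercepts the app's
// requests and serves these handlers instead of a real backend.
export default function () {
  this.namespace = '/api';

  // Shorthand: serve whatever the 'user' factories have generated.
  this.get('/users');

  // Handlers can also be explicit functions over the in-memory schema.
  this.get('/users/:id', (schema, request) => {
    return schema.users.find(request.params.id);
  });
}
```

Because this all happens inside the browser process by stubbing XMLHttpRequest, the network tab stays empty, which is exactly the developer-tools limitation Jeffrey mentions.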
Back in the bad old days, it would take us, I want to say, like two weeks just to set up our acceptance test suite for a project. This is when we were doing Backbone and the early, early days of Ember. We would start a project and we'd spend all of that time upfront just so that we could have an acceptance testing framework, and that was actually a really hard sell because it's like, "What are you doing? You don't have anything to show," and it's like, "Well, we're actually setting up an app-a-scope so that we can observe your application and make sure that it's running." The very first thing that we do is set up an app-a-scope, and it's like, "Nobody sets up an app-a-scope so they can actually see their application." But one of the great things that has happened, and one of the reasons we love Ember so much, is that you now get this pretty much for free. I think there's still some stuff that's very white box about Ember testing, and a lot of times we've talked about that ideal where you should be able to swap out the entire implementation of your app while your acceptance test suite stays the same. That's not quite possible in Ember. You have to know about things like the run loop, and it kind of creeps its way in. But it's 95% of the way there. It's mostly there. It's good enough. It's better than good enough. It's great. You just get that for free when you start a new Ember project. JEFFREY: We've talked a lot about the Ember ecosystem and what we like there about testing, and we're going to be doing some React work soon. What's the story there? CHARLES: Well, I'm glad you asked, Jeffrey. Yes, we're going to be doing some React work soon, so again, this is a new project and it's absolutely a 100% ironclad requirement that we're not going to develop applications without an app-a-scope, but I think that the React community is in a place where you kind of have to build your own app-a-scope.
You know, actually having kind of scoured the blogosphere, there are a lot of people in the React community who care very deeply about acceptance testing, but it does not yet seem like a mainstream concern or a mainstream pathway. For example, Jest, which is the tool that is very, very popular in the React community -- and I was actually really excited reading the documentation -- doesn't even run in the browser. It's Node.js only, which for us is a nonstarter. That makes it really fast, but it is not an app-a-scope, which is what we need. It does not let us actually observe our application. It lets you observe the components from which your application is built, but you're blind to what your application actually looks like in production. I was kind of bummed about that, because I know that there is work underway and I know that the maintainers of Jest care very deeply about making it run in the browser eventually. There are some experimental pull requests in a couple of branches. Who knows? Maybe those are even merged right now, but the point is, it's still very early days there. There are a couple of people who have used Nightmare, where they've booted it up and they're controlling acceptance tests running in Nightmare from Jest. Now, that sounds great, but part of your app-a-scope needs to perceive different browsers. After all, users don't use Nightmare.js. They use Chrome, they use Mobile Safari, they use Firefox and regular Safari and Edge and what have you. There's actually a great set of tools here: Testem and Karma are all set up to be able to run all these browsers, have those browsers connect to your test suite, and run them in parallel. Again, that's kind of the bar. That's what we're actually working towards right now.
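As a rough sketch of that bar, a Karma config in this spirit fans a single Mocha suite out to several real browsers in parallel. Something like the following, where the file paths and browser list are illustrative, and each listed browser assumes its standard launcher plugin is installed:

```javascript
// karma.conf.js -- illustrative. Karma starts each browser, the browser
// connects back to the Karma server, and the same Mocha suite runs in all
// of them in parallel.
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: ['tests/**/*-test.js'],
    // Real browsers with real DOMs, not a simulated one: the point of
    // the whole exercise.
    browsers: ['Chrome', 'Firefox', 'Safari'],
    singleRun: true,
  });
};
```

Testem works the same way conceptually: one suite, many connected browsers, one pass/fail report.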
We're running some experiments to try and use Mirage and Karma with Mocha to actually get the multiplicity of browsers: actually using real browsers, being able to use real DOM events, and testing against as real an API as we can get. I'm kind of excited about that work. I hope that the acceptance testing story gets a lot better in the React community. It's early days, but like I said, we care very deeply about it, and I know a lot of other people care very deeply about it. Some people feel differently, like it's not necessary. Some people feel like they don't need an app-a-scope. They're like, "You know what? My application is there and I know it. I don't actually want to look at my application, but I know it's there." I think that's actually the pervasive model at Facebook, and obviously they're doing something right over there. Although, who knows? I'll keep my mouth shut. No, but seriously, there's some good software that comes out of there. The Facebook application itself is pretty simple. Maybe it's enough to perceive your components and not have to perceive your application as a whole. Or there are other ways you can go about it: if you've got lots and lots of money, you can have people do it, and they do a pretty good job. I understand that's another strategy. In fact, I think that's what Facebook does. I've read a lot of the debates about why they go with unit tests and why they don't go with acceptance tests. They've got a QA team. They probably have their own tools, or who knows? Maybe they use something like Capybara or WebDriver or Selenium, and those tools are really, really excellent and truly, truly black box tools. JEFFREY: Elrick, you brought up an interesting new angle that I think is starting to come of age and is an awesome addition to the toolkit, which is visual regression testing -- how far that's come in the past couple of years, and how valuable that is for getting CSS confidence.
Unit test coverage is great for testing your individual components, for testing JavaScript functionality, but ultimately, the confidence around CSS comes from visual regression testing. That's the only way you can get it, and I think it's helping the CSS ecosystem be a little healthier, encouraging better practices there and helping engineers who are more versed in JavaScript and not as versed in CSS feel more comfortable making CSS changes, because they have that type of testing behind them. ELRICK: When you're doing things programmatically, you wouldn't really know what's going on visually unless you physically go and check it, and that may not be the best solution because it takes a lot of time. JEFFREY: Yeah. CHARLES: The tool that we have the most direct hands-on experience with is Percy, and I think you guys have more experience with it than I do, but it's very intriguing. When I heard about this I was like, "How does this even work?" JEFFREY: It's so great to have visual diffs. As it runs through your acceptance test suite, you specify, "Take a screenshot now," so you can compare against what came before. Sometimes you run into some finicky things, like, "I have some data that's not completely locked in." One of the most common diffs I get is, "Hey, there's a date change in the screenshot from the acceptance tests," and I'm like, "Oh, I should have hard-coded that date so it always stays the same." But it's great at surfacing things like, "This button moved 10 pixels. Did you intend to do that?" In particular, when we upgraded a component style guide library, we noticed a lot of changes that came out of that. Actually, they were okay changes, but it was important to know that those had changed. CHARLES: Right, and there were actually some visual regressions too. I remember some button turned red or something like that, and it's like, "Oops, something got screwed up in the cascade," which always seems to happen.
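Conceptually, a visual regression tool diffs a freshly captured screenshot against a stored baseline and reports what changed. This toy version over arrays of "pixels" (real tools like Percy diff rendered images, of course) shows the idea:

```javascript
// Toy visual-regression check: report which "pixels" differ between a
// stored baseline screenshot and the one just captured in CI.
function visualDiff(baseline, current) {
  const changed = [];
  const length = Math.max(baseline.length, current.length);
  for (let i = 0; i < length; i++) {
    if (baseline[i] !== current[i]) changed.push(i);
  }
  return { changed, same: changed.length === 0 };
}
```

A human then approves or rejects each reported diff, which is exactly the "did you intend to move that button 10 pixels?" workflow described above.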
JEFFREY: I think it has caught some bugs before, in some changes we'd made to a select component. It's like, "This has the same functionality," but actually the classes around this particular component changed. "It doesn't look the same. What happened?" It's been a really valuable tool. CHARLES: The acceptance tests, from the code perspective, actually completely and totally worked, right? JEFFREY: Yes. Exactly. CHARLES: But, yeah, the acceptance tests didn't catch it because they were not perceiving the actual visual style of the application. It's an enhancement or an add-on to your app-a-scope so that it can see more things and you can perceive more things. I love it. When you have an acceptance test suite, it allows you to do those power add-ons, because you're actually loading up your whole application in a real browser, in a real DOM, and you're making it go through the paces that an actual user will go through, so you can do things like take visual diffs. If you're just doing unit tests in a simulated DOM, that's just not a possibility because you're not actually running the painting algorithms, whereas in an acceptance test you are, so it allows you to perceive more things and therefore check for more things and catch more things. JEFFREY: I think my next area of exploration that I'm interested in is, let's say you have a web app with sound effects. Is there something to validate the sound effects? Or take a video capture of your app? Because that would be really cool. I'm sure there's something out there, but I think it'll be my [inaudible] to go find out. CHARLES: And then, we actually touched on this on the episode on accessibility: you can test your accessible APIs when you're running acceptance tests. Another thing that I might add, in terms of regression testing, is that acceptance tests have a much longer term payoff, because I feel very strongly about test driven development for your unit tests.
I think unit tests are really, really great, and if you're building tiny units, especially novel ones that you haven't really built before -- not cookie cutter things like a component or something like that, but some unique service or a utility library or some bundle of functions -- using tests to drive those out is extremely important. But I almost feel like once you've done that, you can throw those tests away. By all means, you're free to keep them around, but your tests also serve as a fence, a wall, a kind of exoskeleton for code, and that can be great except when you want to change and refactor the code: you have to tear down the wall and you have to break the exoskeleton of the code. If your code is siloed into all these tiny little exoskeletons, it's going to be very hard to move and refactor, or it's going to be hard to do without rearranging those walls. I guess my point is, with an acceptance test, you're making that wall very big. It covers a lot of area, so you have relative freedom to change things internally and rapidly. While the acceptance test is slow, the speed of internal change that it engenders, I think, is worth it. I think that's another payoff, because I think that tests do have a shelf life, and the shelf life for unit tests is very small, whereas for application level tests, it's very large. Then I guess the final thing is, really, there's no such thing as acceptance tests and integration tests and unit tests and blah-blah-blah-blah-blah, all the different types of tests. It really is just a matter of scope. It's like, how big are the lines that you're drawing? So an acceptance test is the name we give to tests that have a very large scope and cover a lot, whereas unit tests have a very small scope, but for every piece of code running in between, there's a whole spectrum. All righty.
Well, this concludes Episode 68 of The Frontside Podcast. I told you it was going to be a short one but I think it's going to be a good one. I know that this is a subject that's very, very near and dear to our hearts. We don't dedicate explicit time to it all that often but I'm sure we will return to it again in the future. Jeffrey, Elrick, thank you guys for joining us for this discussion and we're out.
Camilo is upset and comes stumbling close to shouting "Information wants to be free!" Kristoffer, in turn, explains why his default attitude toward apps is that they're rubbish (though that has nothing to do with freedom). Then they drift into a topic they don't feel comfortable with and promptly abandon it. In other words, yet another episode of the world's most advanced podcast about the web, for those who work with the web or want to work with the web. Questions, feedback, or words of encouragement? podden@24hr.se Things discussed in the episode: Tinfoil-hat take on iOS and ad blocking as a way to push advertisers toward iAds / More on the banner war / Banners can be dangerous / Apple News / Ad blockers on Android / Panasonic Smart TV with Firefox / Apps / Web App Manifest / How many people are in space right now? / Moon phases / What is ad blocking in Mobile Safari an application of? / Daring Fireball / "Do advertising in a tasteful way" / The Deck / Marco Arment and his ad blocker Peace: is introduced / meets reality / suffers the pale cast of second thoughts / Ghostery / Are ad blockers on the rise? / AdBlockPlus as an ad network / Feber / HackerNytt / TV / Google's notification service
Financial Folk Wowed by Launch Day Lines RBC Ups Apple Target on Potential ASP Upside Sunday Lines: Scenes from a Mall Tim Cook Greets iPhone Buyers at Palo Alto Store iFixit Breaks New iPhones; Gives High Repairability Score Chipworks Confirms TSMC as Producer of Apple A8 Processor RBC Respondents Not Wowed by Apple Watch or Apple Pay (Yet) Tim Cook Pens Open Letter on Consumer Privacy Apple Says It Cannot Access Info on Passcode-Protected iOS 8 Devices Apple Adds Privacy-Focused Search Engine DuckDuckGo as Choice in Mobile Safari : Learn Apple software, plus business and creative skills, from easy-to-follow video tutorials at .
Host: Maarten Hendrikx, @maartenhendrikx on Twitter or via his website. Panel: Jan Seurinck, @janseurinck on Twitter, or via his website. Toon Van de Putte, @toonvandeputte on Twitter, or via his website. Marco Frissen, @mfrissen on Twitter, or via his website. Stefaan Lesage, @stefaanlesage on Twitter, or via his website. Topics: Stefaan went to two Samsung events; his report. OS X Mountain Lion is (almost) here. Facebook "verified" accounts. The real Google Goggles? Windows 8 gets a new logo. Microsoft publishes more anti-Google ads. Google circumvents Mobile Safari's privacy settings. The winners of our #LoveLogitech45 contest: winning a Logitech Mini Boombox: @jurgenholvoet @edoderoo @type14 @funkstar_ @lurkjerk. Winning a Logitech Squeezebox Radio: @jnloco @fzelders. Congratulations to the winners. We're naturally green with envy, so we kindly ask that you pose for a photo with your prize and pass it along to us on Twitter under the #lovelogitech45 hashtag. That way we can all get... a little more jealous :-) Tips: Jan: Autoscroll for Chrome, Flanders DC i-Creative. Toon: Brickshop and the Victorinox Cybertool 34. Floris: Floris' 20 songs Spotify list, The Bitterest Pill podcast. Marco: Clear and Osfoora Mac. Maarten: Onavo for Android. Stefaan: Camcorder Info. Feedback: The Tech45 team appreciates all the feedback that gets sent in. If you have remarks, reactions, or suggestions, leave a comment below. Twitter works too: @tech45cast. Audio reactions in .mp3 format are always welcome. Items for the next episode can be linked on Twitter with the hashtag '#tech45'. And don't forget you can join the conversation 'live' via live.tech45.eu on Tuesday 29 February from 21:30.
You can download this episode of the podcast via this link, listen directly via the player below, or simply subscribe for free via iTunes.
Barclays Capital Analyst Ups Apple Target to 555-Dollars on Apple’s Power as “Disruptive Force in Hardware” / Fortune: Jefferies Analyst Advises Clients on Oct 4 Event (About Which Apple Has Not Yet Advised Anyone Officially) / All Things D Sources Say Oct 4 Event to Be Held on Apple Campus / Amazon Invites Press to Event in NYC This Wednesday, 28 September / The Next Web: Twitter Events May Indicate Release of iOS 5 by Oct. 10 / Apple Insider: Apple Employee Vacation Blackout Dates May Indicate iOS 5 Release on Oct 10 and New iPhone Release on Oct 14 / Apple Joins Consumer Protection Group DDP / S3 Sues Apple for Alleged Patent Infringement / Samsung Files Four New Suits Against Apple in the Netherlands / Verizon Files Amicus Brief Asking US Court to Not Ban Certain Samsung Products / All Things D: Samsung App Display Uses Icons for Mobile Safari and iOS App Store
Security Researcher Outlines Vulnerability in Mobile Safari for iOS Devices / Olivier Sanche - Director of Global Data Center Operations for Apple - Passes Away at 41 / iPad Lands in 10 New Countries in Asia and Europe / Apple Insider Looks at Subsidized iPads / Bloomberg Looks at Financial Sector Move from RIM to iOS Devices / Barnes and Noble Trumpets nook Success Without the Benefit of Numbers / ChangeWave Says iPad Gaining Quickly on Kindle in Two-Horse eReader Race / Project Digital-Only Magazine Hits the App Store / Morgan Stanley Analyst Says iPhone Outselling Nokia N8 Six to One in Europe / Apple Requires True Names for iOS Game Center / First-Month Sales of Microsoft Kinect Beat First-Month Sales of Apple iPad
USITC to Investigate Apple Patent Infringement Claims Against Motorola / iOS 4.2 Includes a Slew of Security Fixes / Apple Adds WebSockets and Accelerometer Support to Mobile Safari via iOS 4.2 Update / Rumors Put Subscription-Focused iOS 4.3 in First Half of December / Gruber Kind of Confirms Reports of News Corp Daily for Tablets / Some Users See Problems with Sony and Philips TVs and 2nd-Gen Apple TV / Apple Announces Black Friday Sale for UK Shoppers / Apple Announces Black Friday Sale for US Shoppers / US Survey says 6-to-12-Year-Olds Wants iPads More Than Anything / Apple Hiring and Poaching RIM Execs for Enterprise Sales / Acer Announces Three Tablets for 2011 and an iTunes-esque Store / Sony Teases Reader Apps for iPhone and Android / Beatles Sales Top 450k Albums and 2-Million Singles in Week One / Apple 1 Sells at Auction for Nearly a Quarter of a Million Dollars
Host: Maarten Hendrikx, @maartenhendrikx on Twitter. Panel: Stefaan Lesage, @stefaanlesage on Twitter or via the Devia website. Jan Seurinck, @janseurinck on Twitter, or via his website. Marco Frissen, @mfrissen on Twitter, or via his website. Davy Buntinx, @dirtyjos on Twitter, or via his website. Topics: Bing adds social results through a partnership with Facebook. (Bing Likes Facebook, Facebook apps collecting user data) We look back at the presentation of Apple's quarterly figures and Jobs' commentary on them. We also venture predictions about what will be presented at the October "Back to the Mac" event. (Apple presents Q3 results, Back to the Mac) [ed: by the time this podcast is online, the Apple event will already be over. Our analysis next week, but feel free to compare our predictions with reality] News: The BSA pressures Europe to stop using open standards. T-Mobile loses its exclusive right to sell the iPhone after two years. Mozilla launches its own open, decentralized app store platform for web apps. Thief sends a USB stick with the data from a stolen laptop back to its owner. A fan-made remake of Duke Nukem 3D is coming. Google's quarterly results are also better than expected. Google makes birthday doodles. Netlog is looking for a partner. Western Digital introduces the first hard disk larger than 2.5TB. Maarten and his wife are expecting child version 2.0! Tips: Jan has a new favorite Android Twitter client: Twicca. Marco has started reading "Nothing to Envy: Ordinary Lives in North Korea". Stefaan recommends Movie Slate for anyone who films a lot. Maarten uses Snoopy, a bookmarklet that lets you view a site's source and properties even in Mobile Safari. Davy thinks the coffee from the Aerobie AeroPress is fantastic. Feedback: The Tech45 team appreciates all the feedback that gets sent in. If you have remarks or suggestions, be sure to leave a reaction.
Twitter works too: @tech45cast. Audio reactions in .mp3 format are always welcome. Items for the next episode can be tagged in Delicious with the tag 'tech45'. And don't forget you can join the conversation 'live' via live.tech45.eu on Tuesday 26 October from 21:30. You can download this episode of the podcast via this link, listen directly via the player below, or simply subscribe for free via iTunes.
Apple reports quarterly results; iPhone 3G (almost) sold out worldwide; iPhone 3G may not be available again until October; Pwnage jailbreak for iPhone 2.0; MobileMe on Windows Vista and XP; Apple apologizes again to MobileMe users; Intel lowers prices; Motorola sues an Apple employee; Mobile Safari gets a hefty speed boost; 1Password for the iPhone; WordPress client for the iPhone; You Control
- FutureMedia News, Reviews, Interviews, Analysis, Expos, Press Events, Parties
Click to Play in QuickTime Player. Man explains how he used his iPhone to get info he needed at a dinner party. Formats available: MPEG4 Video (.mp4). Tags: iphone, apple, safari, advertising, ads