TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Welcome to a special Halloween edition of the TestGuild Automation Podcast! Today, we've conjured up a truly spellbinding guest—Paul Grossman, renowned as the "Dark Arts wizard" of Test Automation. With over 25 years in software development, and as co-author of "Enhanced Test Automation with WebdriverIO," Paul delves into the magical realms of generative AI and its transformative, exciting role in test automation. Join us as we explore how AI tools like Copilot and testRigor simplify tasks, the possibilities and limitations of AI-generated content, and the controversial use of headless browsers. We'll also discuss top-tier tools like BrowserStack Live and Applitools, dynamic page object models, and proper prompting for effective AI outputs. This episode promises to be a hauntingly insightful treat packed with practical tips, industry expertise, and a peek into the future of automation testing. Listen up!
In this episode, we dive into the career journey of Angie Jones, the Global Vice President of Developer Relations at Block. Angie shares her experiences transitioning from a Senior Software Engineer at Twitter to leading developer relations initiatives at Applitools and now at Block. Angie also provides insights on the future of developer relations and the evolving landscape of tech communities. Whether you're an aspiring developer advocate or a seasoned professional, this episode is packed with wisdom and practical tips.

Guests: Angie Jones - @techgirl1908
Panelists: Ryan Burgess - @burgessdryan
Episode transcript: https://www.frontendhappyhour.com/episodes/sips-of-wisdom-interview-with-angie-jones
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
In this episode, Dave Piacente, a senior community manager in developer relations and community expert at Applitools, joins us to talk about redefining test automation. There is a common set of techniques that seasoned test automation practitioners know to be the pillars of any successful automated testing practice, one that can fit into most (if not all) team contexts. But some things have either been left out or are described in ways that are disadvantageous for the industry, and we need to talk about them from the perspective of the fundamentals used to craft them. By reasoning from first principles, we can unearth a more impactful definition of test automation that can act as a compass and help supercharge everyone's test automation practice - while also understanding how to navigate the uncharted waters of new technologies that challenge existing paradigms.
Andrew Knight, popularly known as the Automation Panda, was recently interviewed on the Modern Web Podcast by host Rob Ocel. Andrew, a software quality champion, developer advocate, and test automation expert, shared his insights and experiences in the interview. He spoke passionately about testing and the importance of improving software quality. Andrew discussed the challenges he faced early in his career and how he recognized the opportunity to enhance software stability, readability, and speed through effective testing.

The conversation then shifted to the current state of testing in various tech communities. Andrew highlighted the wide variation in testing practices across companies, irrespective of programming languages or tech stacks. Smaller companies often lacked proper testing processes, while larger companies relied on traditional testing approaches that were deeply ingrained.

Advancements in web testing frameworks and tools were another topic of discussion. Andrew acknowledged the long-standing availability of functional testing, which simulates user interactions with a website. However, he pointed out the emergence of newer tools like Cypress and Playwright, which provide a modern developer experience, making web testing more accessible, efficient, and enjoyable. Andrew also emphasized the importance of addressing user experience and visual aspects of testing, where human evaluation remains crucial but can be supplemented by visual testing tools like Applitools.

The interview concluded with a glimpse into the future of autonomous testing. Andrew highlighted that while autonomous testing could never completely replace human exploratory testing, it held potential in understanding the behavior of software applications. The vision was to train autonomous agents to recognize established workflows, adapt to specific applications, propose test cases based on observed behaviors, and potentially execute these behaviors autonomously. This approach would allow developers to focus more on designing desired behaviors and less on implementing specifications.

Throughout the interview, Rob Ocel and Andrew Knight discussed the trade-off between investment and return in software testing. They reflected on the value of maintaining extensive test coverage and questioned its significance compared to the effort required to sustain it. Andrew emphasized the importance of focusing on valuable behaviors and understanding customer needs when prioritizing testing efforts. They also touched upon the idea that not all edge cases or hypothetical scenarios warrant investing time and resources in testing if they have minimal real-world impact.

Other topics covered in the interview include Andrew's talk on the eight software testing convictions, inspired by Japanese woodblock prints, which emphasize intentional design, accessibility, and the value of quality in software development. The discussion also revolved around the value of personas and engaging with real users to understand their needs and prioritize testing efforts accordingly. The interview highlighted the delicate balance between investing in quality and delivering value in software development and testing, with a focus on valuable behaviors and iterative learning from user interactions.

Host: Rob Ocel, Software Architect at This Dot Labs
Guest: Andrew Knight, Principal Developer Advocate at Applitools
This episode is sponsored by This Dot Labs.
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
If you'd like to accelerate your test execution & maintenance speed — and massively reduce flakiness, I've got some excellent news for you. On this episode of TestGuild Automation Podcast, Adam Carmi, Co-founder and CTO at Applitools, joins us to discuss new automation testing self-healing technology that is a giant leap forward in test infrastructure innovation. Adam goes over how this new cloud-based testing platform replaces legacy testing grids — easily augmenting open-source test frameworks with AI capabilities (like self-healing locators). Discover how you and your teams can intelligently heal broken tests as they run — drastically reducing flakiness and execution time. Lastly, the episode touches on the rise of AI in the testing industry and how testers and people in the testing community have a bright future. Tune in for a plethora of insights and tips on test automation.
What company just emerged from stealth to apply AI to app testing? Why is automation the way forward for enterprises? How can you solve false positives in static code analysis? Find out in this episode of the Automation in DevSecOps News Show for the week of May 21st.

0:19 Applitools FREE Account https://applitools.info/joe
0:34 An Overview of Software Design Patterns & Test Automation https://testguild.me/p2xpf7
1:46 Automation is the way forward for enterprises https://testguild.me/wx4ypx
2:48 Applitools - FRONT-END TEST FEST 2023 http://front-endtestfest.com/r5a
4:04 Approachable Test Driven Development https://testguild.me/zm1kgj
5:54 Book update: got a print copy and need an Amazon review https://testguild.com/automationbook/
6:59 SapientAI Emerges From Stealth to Apply AI to App Testing https://testguild.me/a7janq
7:42 Performing Load Testing with Artillery in a Nutshell https://testguild.me/7jwr3b
8:29 False Positives in Static Code Analysis https://testguild.me/kx2xod
Have you heard what robot caused a $100 billion error? Want to know about a new game-changing open-source IDE for exploring and testing APIs? Have you seen how easy it is now to do security testing against Postman collections? Find out in this episode of the Automation in DevSecOps News Show for the week of Feb 19th. So grab your favorite cup of coffee or tea, and let's do this.

0:24 Create your FREE Applitools Account Now https://applitools.info/joe
0:58 Applitools - TAU Conference https://tauconference.com/isc
2:00 Tricentis Unveils the Future of No-Code Test Automation https://testguild.me/gwgrv7
2:46 A robot's $100 billion error https://testguild.me/6hoiv0
4:05 Headlamp is launching on Product Hunt tomorrow https://testguild.me/9umhip
4:53 Bruno, an open-source IDE for exploring and testing APIs https://testguild.me/2pllyl
5:46 Release of Testim Mobile https://testguild.me/e1jl89
6:50 Another cloud outage (was it DNS or BGP?) https://testguild.me/pr0aks
7:32 Azure Load Testing service is now Generally Available https://testguild.me/phr7k7
8:33 Pynt - Security Testing against Postman collections https://testguild.me/vrr8w2
Ayelet is a professional UX and product design lead with extensive experience in the technology industry. She has spent the past few years working in fast-paced start-ups, leading design teams to create impactful experiences for mobile and B2B SaaS products. She thoroughly understands the end-to-end design process, from research to delivery, and is well-versed in Agile methodologies and organizational processes. Previously, Ayelet was part of a global UX team at NICE, a large enterprise company. She took a major part in creating and implementing a new design system for the organization's products. She led the switch from multiple design tools to one - which improved efficiency through better alignment and collaboration within the team and other stakeholders. When she's not designing, you can find her giving back to the design community by mentoring young designers and hosting workshops and meetups. She is always eager to learn and grow and brings this mindset to everything she does, continuously striving to improve processes and stay ahead of the curve.
Andrew Knight, better known as the "Automation Panda," describes himself as a software enthusiast with a specialty in test automation and behavior-driven development. He is a developer advocate at Applitools. They'll discuss test automation at scale, how to keep up with changes in the system, how to maintain the quality of your test suite, and more.

Episode Highlights:
How Andy got into software testing and, specifically, automation testing.
Doing test automation at scale.
How expensive making a mistake within a 10-person team vs. a much larger one can be.
How to distribute tasks within an automation testing team and its importance.

For the episode transcript and relevant links, check here: https://abstracta.us/blog/software-testing/quality-sense-podcast-andy-knight-test-automation-at-scale/
Recording date: Thursday, March 24
John Papa @John_Papa
Ward Bell @WardBell
Dan Wahlin @DanWahlin
Craig Shoemaker @craigshoemaker
Ramona Schwering @leichteckig

Brought to you by
AG Grid
Narwhal - Visit nx.dev to get the preeminent open-source toolkit for monorepo development, today.

Resources:
Ramona's website
Article over Testing Pipelines for Frontend developers
Testing trophy
Cypress testing
shopware AG
Dan Wahlin on End to End Testing with Cypress.io
Spot the Difference
What is visual testing
Test Cafe
Visual testing with Storybook
Once Upon a Storybook - Web Rush 110
Visual testing with Percy
Playwright testing
AppliTools for visual testing
Testing Pyramid
Jiminy Cricket
Toastr JavaScript library
Run Disney

Timejumps
00:34 Hiking and puppies
03:05 Guest introduction
05:45 When do I want to use this kind of testing?
08:31 Sponsor: Ag Grid
09:34 When do you enable visual testing?
17:03 Can you ignore areas of the display if you don't want it to test?
18:14 How do you test for responsive design?
21:42 What about Applitools?
23:12 Sponsor: Narwhal
23:44 When a difference is spotted, how do you work with that?
25:36 What issues have come up when learning about visual testing?
29:36 How do you fit visual testing alongside other testing?
36:03 Final thoughts

Podcast editing on this episode done by Chris Enns of Lemon Productions.
Have you seen the free course intro to testing machine learning models? Want to know about a Python-based performance test tool you can actually run on Google's Kubernetes Engine? Is a developer's software development lifecycle the same as a security lifecycle? Find out the answers to these and all other end-to-end full-pipeline DevOps automation, performance, and security testing news in this episode of the TestGuild News Show for the week of April 3rd. So grab a cup of coffee or tea, and let's do this.

0:27 Applitools https://rcl.ink/xroZw
0:57 Cypress Course https://links.testguild.com/lyy6f
1:57 SeleniumGrid https://links.testguild.com/j52jJ
2:28 Dagger https://links.testguild.com/h6K0x
3:26 Garden.io https://links.testguild.com/6DkTb
4:31 Puppet Report https://links.testguild.com/cySXw
5:10 ML in Testing https://links.testguild.com/AfQBf
5:54 Applitools FOT UX https://links.testguild.com/bI0Hi
6:26 Anatomy of an incident https://links.testguild.com/8MDZe
7:12 Locust Testing on GKE https://links.testguild.com/tGNDx
7:43 Security's Life Cycle https://links.testguild.com/LFpF1
8:52 Spring4Shell Tool https://links.testguild.com/nN46h
About Sean
Sean is a senior software engineer at TheZebra, working to build developer experience tooling with a focus on application stability and scalability. Over the past seven years, they have helped create software and proprietary platforms that help teams understand and better their own work.

Links:
TheZebra: https://www.thezebra.com/
Twitter: https://twitter.com/sc_codeUM
LinkedIn: https://www.linkedin.com/in/sean-corbett-574a5321/
Email: scorbett@thezebra.com

Transcript
Sean: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: Today's episode is brought to you in part by our friends at MinIO, the high-performance Kubernetes native object store that's built for the multi-cloud, creating a consistent data storage layer for your public cloud instances, your private cloud instances, and even your edge instances, depending upon what the heck you're defining those as, which depends probably on where you work. Getting that unified is one of the greatest challenges facing developers and architects today. It requires S3 compatibility, enterprise-grade security and resiliency, the speed to run any workload, and the footprint to run anywhere, and that's exactly what MinIO offers. With superb read speeds in excess of 360 gigs and a 100 megabyte binary that doesn't eat all the data you've got on the system, it's exactly what you've been looking for. Check it out today at min.io/download, and see for yourself. That's min.io/download, and be sure to tell them that I sent you. Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense. Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. An awful lot of companies out there are calling themselves unicorns, which is odd because if you look at the root ‘uni,' it means one, but there sure are a lot of them out there. Conversely, my guest today works at a company called TheZebra, with the singular definite article being the key differentiator here, and frankly, I'm a big fan of being that specific. My guest is Senior Software Development Engineer in Test, Sean Corbett. Sean, thank you for taking the time to join me today, and more or less suffer the slings and arrows I will no doubt be hurling in your direction. Sean: Thank you very much for having me here. Corey: So, you've been a great Twitter follow for a while: You're clearly deeply technically skilled; you also have a soul, you're strong on the empathy point, and that is an embarrassing lack in large swaths of our industry. I'm not going to talk about that right now because I'm sure it comes through the way it does when you talk about virtually anything else. Instead, you are a Software Development Engineer in Test or SDET. 
I believe you are the only person I'm aware of in my orbit who uses that title, so I have to ask—and please don't view this as me in any way criticizing you; it's mostly my own ignorance speaking—what is that?Sean: So, what is a Software Development Engineer in Test? If you look back—I believe it was Microsoft originally came up with the title, and what it stems from was they needed software development engineers who particularly specialized in creating automation frameworks for testing stuff at scale. And that was over a decade ago, I believe. Microsoft has since stopped using the term, but it persists in areas in the industry.And what is an SDET today? Well, I think we're going to find out it's a strange mixture of things. SDET today is not just someone that creates automated frameworks or writes tests, or any of those things. An SDET is the strange amalgamation of everything from full-stack to DevOps to even some product management to even a little bit machine-learning engineer; it's a truly strange field that, at least for me, has allowed me to basically embrace almost every other discipline and area of the current modern engineering around, to some degree. So, it's fun, is what it is. [laugh].Corey: This sounds similar in some respects to oh, I think back to a role that I had in 2008, 2009, where there was an entire department that was termed QA or Quality Assurance, and they were sort of the next step. You know, development would build something and start, and then deploy it to a test environment or staging environment, and then QA would climb all over this, sometimes with automation—which was still in the early days, back in that era—and sometimes by clicking the button, and going through scripts, and making sure that the website looked okay. Is that aligned with what you're doing, or is that a bit of a different branch?Sean: That is a little bit of a different branch from me. The way I would put it is QA and QA departments are an interesting artifact that I think, in particular, newer orgs still feel like they might need one, and what you quickly realize today, particularly with modern development and this, kind of, DevOps focus is that having that centralized QA department doesn't really work. So, SDETs absolutely can do all those things: They can climb over a test environment with automation, they can click the buttons, they can tell you everything's good, they can check the boxes for you if you want, but if that is what you're using your SDETs for you are, frankly, missing out because I guarantee you, the people that you've hired as SDETs have a lot more skills than that, and not utilizing those to your advantage is missing out on a lot of potential benefit, both in terms of not just quality—which is this fantastic concept that dates all the way back to—gives people a lot of weird feelings [laugh] to be frank, and product.Corey: So, one of the challenges I've always had is people talk about test-driven development, which sounds like a beautiful idea in theory, and in practice is something people—you know, just like using the AWS console, and then lying about it forms this heart and soul of ClickOps—we claim to be using test-driven development but we don't seem to be the reality of software development. And again, no judgment on these; things are hard. I built out a, more or less, piecing together a whole bunch of toothpicks and string to come up with my newsletter production pipeline. 
And that's about 29 Lambdas Function, behind about 5 APIs Gateway, and that was all kinds of ridiculous nonsense. And I can deploy each of the six or so microservices that do this, independently. And I sometimes even do continuous build or slash continuous deploy to it because integration would imply I have tests, which is why I bring the topic up. And more often than not—because I'm very bad at computers—I will even have syntax errors make it into this thing, and I push the button and suddenly it doesn't work. It's the iterative guess-and-check model that goes on here. So, I introduce regressions a fair bit of the time, and the reason that I'm being so blasé about this is that I am the only customer of this system, which means that I'm not out there making people's lives harder, no one is paying me money to use this thing, no one else is being put out by it. It's just me smacking into a wall and feeling dumb all the time. And when I talk to people about the idea of building tests, it's like, "Oh, you should have unit tests and integration tests and all the rest." And I did some research into the topics, and a lot of it sounds like what people were talking about 10 to 15 years ago in the world of tests. And again, to be clear, I've implemented none of these things because I am irresponsible and bad at computers. But what has changed over the last five or ten years? Because it feels like the overall high level as I understood it from intro to testing 101 in the world of Python, the first 18 chapters are about dependency management—because of course they are; it's Python—then the rest of it just seems to be the concepts that we've never really gotten away from. What's new, what's exciting, what's emerging in your space? Sean: There's definitely some emerging and exciting stuff in the space. There's everything from, like, what Applitools does with using machine learning to do visual regressions—that's a huge advantage, a huge time saver, so you don't have to look pixel by pixel, and waste your time doing it—to things like our team at TheZebra is working on, which is, for example, a framework that utilizes Directed Acyclic Graph workflows that's written in GoLang—the prototype is—and it allows you to work with these tests, rather than just as kind of these blasé scripts that you either keep in a monorepo, or maybe possibly in each individual service's repo, and just run them all together clumsily in this, kind of, packaged product, into this distributed resource that lets you think about tests as these, kind of, user flows and experiences and to dip between things like the API layer, where you might, for example, say introduce regression [unintelligible 00:07:48] calling to a third-party resource, and something goes wrong, you can orchestrate that workflow as a whole. Rather than just having to write script after script after script after script to cover all these test cases, you can focus on, well, I'm going to create this block that represents this general action, can accept a general payload that conforms to this spec, and I'm going to orchestrate these general actions, maybe modify the payload of it, but I can recall those actions with a slightly different payload and not have to write script after script after script after script. But the problem is that, like you've noticed, a lot of test tooling doesn't embrace those, kind of, modern practices and ideas. 
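To make the DAG-style orchestration Sean describes a bit more concrete, here is a minimal Go sketch of test steps modeled as nodes in a directed acyclic graph, each accepting a general payload and reusable with different payloads. This is only an illustration under assumed names and types; it is not the actual TheZebra prototype or any published framework.

```go
// Minimal sketch: test steps as reusable DAG nodes that pass a generic
// payload downstream. All names and structures here are illustrative
// assumptions, not a real framework's API.
package main

import "fmt"

// Payload is the generic data a step accepts and passes on.
type Payload map[string]any

// Step is a reusable action: it takes a payload and returns an updated one.
type Step struct {
	Name string
	Run  func(Payload) (Payload, error)
	Next []*Step // edges to downstream steps
}

// Execute walks the graph from the root, threading the payload through each
// step. A real framework would also detect cycles, run independent branches
// in parallel, and record results as data.
func Execute(root *Step, p Payload) error {
	out, err := root.Run(p)
	if err != nil {
		return fmt.Errorf("step %q failed: %w", root.Name, err)
	}
	for _, next := range root.Next {
		if err := Execute(next, out); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Reusable steps: the same "login" action is orchestrated with different
	// payloads instead of rewriting script after script.
	checkout := &Step{
		Name: "checkout",
		Run: func(p Payload) (Payload, error) {
			fmt.Println("checking out as", p["user"])
			return p, nil
		},
	}
	login := &Step{
		Name: "login",
		Run: func(p Payload) (Payload, error) {
			fmt.Println("logging in as", p["user"])
			return p, nil
		},
		Next: []*Step{checkout},
	}

	_ = Execute(login, Payload{"user": "standard-user"})
	_ = Execute(login, Payload{"user": "admin-user"})
}
```

Because each step is just data plus a function, the same graph can be re-run with different payloads, stored, or extended, which is the contrast Sean draws with packaging script after script into a monorepo.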
It's still very much the, your tests, you—particularly integration tests do this—will exist in one place, a monorepo, they will have all the resources there, they'll be packaged together, you will run them after the fact, after a deploy, on an environment. And it makes it so that all these testing tools are very reactive, they don't encourage a lot of experimentation, and they make it at times very difficult to experiment, in particular because the more tests you add, the more chaotic that code and that framework gets, and the harder it gets to run in a CI/CD environment, the longer it takes. Whereas if you have something like this graph tool that we're building, these things just become data. You can store them in a database, for the love of God. You can apply modern DevOps practices, you can implement things like Jaeger.Corey: I don't think it's ever used or anything in the database. Great, then you can use anything itself as a database, which is my entire schtick, so great.Sean: Exactly.Corey: That's right, that means the entire world can indeed be reduced to TXT records in DNS, which I maintain is the… the holiest of all databases. I'm sorry, please, continue.Sean: No, nonono, that's true. The thing that has always driven me is this idea that why are we still just, kind of, spitting out code to test things in a way that is very prescriptive and very reactive? And so, the exciting things in test come from places like Applitools and places like the—oh, I forget. It was at a Test Days conference, where they talked about—they developed this test framework that was able to auto generate the models, and then it was so good at auto generating those models for test, they'd actually ended up auto generating the models for the actual product. [laugh]. I think it used a degree of machine learning to do so. It was for a flashcard site. A friend of mine, Jacob Evans on Twitter always likes to talk about it.These are where the exciting things lay is where people are starting to break out of that very reactive, prescriptive, kind of, test philosophy of, like I like to say, checking the boxes to, “Let's stop checking boxes and let's create, like insight tooling. Let's get ahead of the curve. What is the system actively doing? Let's check in. What data do we have? What is the system doing right at this moment? How ahead of the curve can we get with what we're actually using to test?”Corey: One question I have is the cultural changes because back in those early days where things were handed off from the developers to the QA team, and then ideally to where I was sitting over in operations—lots of handoffs; not a lot of integrations there—QA was not popular on the development side of the world, specifically because their entire perception was that of, “Oh, they're just the critics. They're going to wind up doing the thing I just worked hard on and telling me what's wrong with it.” And it becomes a ‘Department of No,' on some level. One of the, I think, benefits of test automation is that suddenly you're blaming a computer for things, which is, “Yep. You are a developer. Good work.” But the idea of putting people almost in the line of fire of being either actually or perceived as the person who's the blocker, how has that evolved? And I'm really hoping the answer is that it has.Sean: In some places, yes, in some places, no. 
I think it's always, there's a little bit more nuance than just yes, it's all changed, it's all better, or just no, we're still back in QA are quote-unquote, “The bad guys,” and all that stuff. The perception that QA are the critics and are there to block a great idea from seeing fruition and to block you from that promotion definitely still persists. And it also persists a lot in terms of a number of other attitudes that get directed towards QA folks, in terms of the fact that our skill sets are limited to writing stuff like automation tooling for test frameworks and stuff like that, or that we only know how to use things like—okay, well, they know how to use Selenium and all this other stuff, but they don't know how to work a database, they don't know how an app [unintelligible 00:12:07] up, they don't all the work that I put in. That's really not the case. More and more so, folks I'm seeing in test have actually a lot of other engineers experience to back that up.And so the places where I do see it moving forward is actually like TheZebra, it's much more of a collaborative environment where the engineers are working together with the teams that they're embedded in or with the SDETs to build things and help things that help engineers get ahead of the curve. So, the way I propose it to folks is, “We're going to make sure you know and see exactly what you wrote in terms of the code, and that you can take full [confidence 00:12:44] on that so when you walk up to your manager for your one-on-one, you can go like, ‘I did this. And it's great. And here's what I know what it does, and this is where it goes, and this is how it affects everything else, and my test person helped me see all this, and that's awesome.'” It's this transition of QA and product as these adversarial relationships to recognizing that there's no real differentiator at all there when you stop with that reactive mindset in test. Instead of trying to just catch things you're trying to get ahead of the curve and focus on insight and that sort of thing.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats V-U-L-T-R.com slash screaming.Corey: One of my questions is, I guess, the terminology around a lot of this. 
If you tell me you're an SDE, I know that oh, you're a Software Development Engineer. If you tell me you're a DBA, I know oh, great, you're a Database Administrator. If you told me you're an SRE, I know oh, okay, great. You worked at Google.But what I'm trying to figure out is I don't see SDET, at least in the waters that I tend to swim in, as a title, really, other than you. Is that a relatively new emerging title? Is it one that has historically been very industry or segment-specific, or you're doing what I did, which is, “I don't know what to call myself, so I described myself as a Cloud Economist,” two words no one can define. Cloud being a bunch of other people's computers, and economist meaning claiming to know everything about money, but dresses like a flood victim. So, no one knows what I am when I make it up, and then people start giving actual job titles to people that are Cloud Economists now, and I'm starting to wonder, oh dear Lord, have I started the thing? What is, I guess, the history and positioning of SDET as a job title slash acronym?Sean: So SDET, like I was saying, it came from Microsoft, I believe, back in the double-ohs.Corey: Mmm.Sean: And other companies caught on. I think Google actually [unintelligible 00:14:33] as well. And it's hung on certain places, particularly places that feel like they need a concentrated quality department. That's where you usually will see places that have that title of SDET. It is increasingly less common because the idea of having centralized quality—like I said before, particularly with the modern, kind of, DevOps-focused development, Agile, and all that sort of thing, it becomes much, much more difficult.If you have a waterfall type of development cycle, it's a lot easier to have a central singular quality department, and then you can have SDET stuff [unintelligible 00:15:08], that gets a lot easier when you have Agile and you have that, kind of, regular integration and you have, particularly, DevOps [unintelligible 00:15:14] cycle, it becomes increasingly difficult, so a lot of places that have been moving away from that. It is definitely a strange title, but it is not entirely rare. If you want to peek, put a SDET on your LinkedIn for about two weeks and see how many offers come in, or how many folks in your inbox you get. It is absolutely in demand. People want engineers to write these test frameworks, but that's an entirely different point; that gets down to the point of the fact that people want people in these roles because a lot of test tooling, frankly, sucks.Corey: It's interesting you talk about that as a validation of it. I get remarkably few outreaches on LinkedIn, either for recruiting, which almost never happens or for trying to sell me something which happens once every week or so. My business partner has a CEO title, and he winds up getting people trying to sell him things four times a day by lunchtime, and occasionally people reaching out of, “Hey, I don't know much about your company, but if it's not going well, do you want to come work on something completely unrelated?” Great. 
And it's odd because both he and I have similar settings where neither of us have the ‘looking for work' box checked on LinkedIn because it turns out that does send a message to your staff who are depending on their job still being here next month, and that isn't overly positive because we're not on the market.But changing just titles and how we describe what we do and how we do it absolutely has a bearing as to how that is perceived by others. And increasingly, I'm spending more of my time focusing less on the technical substance of things and more about how what they do is being communicated. Because increasingly, what I'm finding about the world of enterprise technology and enterprise cloud and all of this murky industry in which we swim, is that the technology is great—anything can be made to work; mostly—but so few companies are doing an effective job of telling the story. And we see it with not just an engineering-land; in most in all parts of the business. People are not storytelling about what they do, about the outcomes they drive, and we're falling back to labels and buzzwords and acronyms and the rest.Where do you stand on this? I know we've spoken briefly before about how this is one of those things that you're paying attention to as well, so I know that we're not—I'm not completely off base here. What's your take on it?Sean: I definitely look at the labels and things of that sort. It's one of those things where humans like to group and aggregate things. Our brains like that degree of organization, and I'm going to say something that is very stereotypical here: This is helped a lot by social media which depends on things like hashtags and ability to group massive amounts of information is largely facilitated. And I don't know if it's caused by it, but it certainly aggravates the situation.We like being able to group things with few words. But as you said before, that doesn't help us. So, in a particular case, with something like a SDET title, yeah, that does absolutely send a signal, and it doesn't necessarily send the right one in terms of the person that you're talking to, you might have vastly different capabilities from the next SDET that you talk to. And it's were putting up a story of impact-driven, kind of, that classic way of focusing on not just the labels, but what was actually done and who had helped and who had enabled and the impact of it, that is key. The trick is trying to balance that with this increasing focus on the cut-down presentation.You and I've talked about this before, too, where you can only say so much on something like a LinkedIn profile before people just turn off their brains and they walk away to the next person. Or you can only put so much on your resume before people go, “Okay, ten pages, I'm done.” And it's just one of those things where… the trick I find that test people increasingly have is there was a very certain label applied to us that was rooted in one particular company's needs, and we have spent the better part of over a decade trying to escape and redefine that, and it's incredibly challenging. And a lot of it comes down to folks like, for example, Angie Jones, who simply, just through pure action and being very open about exactly what they're doing, change that narrative just by showing. That form of storytelling is show it, don't say it, you know? 
Rather than saying, “Oh, well, I bring into all this,” they just show it, and they bring it forward that way.Corey: I think you hit on something there with the idea of social media, where there is validity to the idea of being able to describe something concisely. “What's your elevator pitch?” Is a common question in business. “What is the problem you solve? What would someone use you for?”And if your answer to that requires you sabotage the elevator for 45 minutes in order to deliver your message, it's not going to work. With some products, especially very early-stage products where the only people who are working on them are the technical people building them, they have a lot of passion for the space, but they aren't—haven't quite gotten the messaging down to be able to articulate it. People's attention spans aren't great, by and large, so there's a, if it doesn't fit in a tweet, it's boring and crappy is sort of the takeaway here. And yeah, you're never going to encapsulate volume and nuance and shading into a tweet, but the baseline description of, “So, what do you do?” If it doesn't fit in a tweet, keep workshopping it, to some extent.And it's odd because I do think you're right, it leads to very yes or no, binary decisions about almost anything, someone is good or trash. There's no, people are complicated, depending upon what aspect we're talking about. And same story with companies. Companies are incredibly complex, but that tends to distill down in the Twitter ecosystem to, “Engineers are smart and executives are buffoons.” And anytime a company does something, clearly, it's a giant mistake.Well, contrary to popular opinion, Global Fortune 2000 companies do not tend to hire people who are not highly capable at the thing they're doing. They have context and nuance and constraints that are not visible from the outside. So, that is one of the frustrating parts to me. So, labels are helpful as far as explaining what someone is and where they fit in the ecosystem. For example, yeah, if you describe yourself as an SDET, I know that we're talking about testing to some extent; you're not about to show up and start talking to me extensively about, oh, I don't know, how you market observability products.It at least gives a direction and bounding to the context. The challenge I always had, why I picked a title that no one else had, was that what I do is complicated, and if once people have a label that they think encompasses where you start and where you stop, they stop listening, in some cases. What's been your experience, given that you do have a title that is not as widely traveled as a number of the more commonly used ones?Sean: Definitely that experience. I think that I've absolutely worked at places where—the thing is, though, and I do want to cite this, that when folks do end up just turning off once they have that nice little snippet that they think encompasses who you are—because increasingly nowadays, we like to attach what you do to who you are—and it makes a certain degree of sense, absolutely, but it's very hard to encompass those sorts of things, and let alone, kind of, closely nestle them together when you have, you know, 280 characters.Yes, folks like to do that to folks like SDETs. There's a definite mindset of, ‘stay in your lane,' in certain shops. 
I will say that it's not to the benefit of those shops, and it creates and often aggravates an adversarial relationship that is to the detriment of both, particularly today where the ability to spin up a rival product of reasonable quality and scale has never been easier, slowing yourself down with arbitrary delineations that are meant to relegate and overly-define folks, not necessarily for the actual convenience of your business, but for the convenience of your person, that is a very dangerous move. A previous company that I worked at almost lost a significant amount of their market share because they actively antagonized the SDET team to the point where several key members left. And it left them completely unable to cover areas of product with scalable automation tooling and other things. And it's a very complex product.And it almost cost them their position in the industry, potentially, the entire company as a whole got very close to that point. And that's one of the things we have to be careful of when it comes to applying these labels, is that when you apply a label to encompass someone, yes, you affect them, but it also we'll come back and affect you because when you apply that label to someone, you are immediately confining your relationship with that person. And that relationship is a two-way street. If you apply a label that closes off other roads of communication or potential collaboration or work or creativity or those sorts of things, that is your decision and you will have to accept those consequences.Corey: I've gotten the sense that a lot of folks, as they describe what they do and how they do it, they are often thinking longer-term; their careers often trend toward the thing that happens to them rather than a thing that winds up being actively managed. And… like, one of my favorite interview questions whenever I'm looking to bring someone in, it's always, “Yeah, ignore this job we're talking about. Magically you get it or you don't; whatever. That's not relevant right now. What's your next job? What's the one after that? What is the trajectory here?”And it's always fun to me to see people's responses to it. Often it's, “I have no idea,” versus the, “Oh, I want to do this, and this is the thing I'm interested in working with you for because I think it'll shore up this, this, and this.” And like, those are two extreme ends of the spectrum. There's no wrong answer, but it's helpful, I find, just to ask the question in the final round interview that I'm a part of, just to, I guess sort of like, boost them a bit into a longer-term picture view, as opposed to next week, next month, next year. Because if what you're doing doesn't bring you closer to what you want to be doing in the job after the next one, then I think you're looking at it wrong, in some cases.And I guess I'll turn the question on to you. If you look at what you're doing now, ignore whatever you do next, what's your role after that? Like, where are you aiming at?Sean: Ignoring the next position… which is interesting because I always—part of how I learned to operate, kind of in my earlier years was focus on the next two weeks because the longer you go out from that window, the more things you can't control, [laugh] and the harder it is to actually make an effective plan. 
But for me, the real goal is I want to be in any position that enables the hard work we do in building these things to make people's lives easier, better, give them access to additional information, maybe it's joy in terms of, like, a content platform, maybe it's something that helps other developers do what they do, something like Honeycomb, for example, just that little bit of extra insight to help them work a little bit better. And that's, for me, where I want to be, is building things that make the hard work we do to create these tools, these products easier. So, for me, that would look a lot like an internal tooling team of some sort, something that helps with developer efficiency, with workflow.One of the reasons—and it's funny because I got to asked this recently: “Why are you still even in test? You know what reputation this field has”—wrongly deserved, maybe so—“Why are you still in test?” My response was, “Because”—and maybe with a degree of hubris, stubbornly so—“I want to make things better for test.” There are a lot of issues we're facing, not just in terms of tooling, but in terms of processes, and how we think about solving problems, and like I said before, that kind of reactive nature, it sort of ends up kind of being an ouroboros, eating its own tail. Reactive tools generate reactive engineers, that then create more reactive tools, and it becomes this ouroboros eating itself.Where I want to be in terms of this is creating things that change that, push us forward in that direction. So, I think that internal tooling team is a fantastic place to do that, but frankly, any place where I could do that at any level would be fantastic.Corey: It's nice to see the things that you care about involve a lot more about around things like impact, as opposed to raw technologies and the rest. And again, I'm not passing judgment on anyone who chooses to focus on technology or different areas of these things. It's just, it's nice to see folks who are deeply technical themselves, raising their head a little bit above it and saying, “All right, here's the impact I want to have.” It's great, and lots of folks do, but I'm always frustrated when I find myself talking to folks who think that the code ultimately speaks; code is the arbiter. Like, you see this with some of the smart contract stuff, too.It's the, “All right, if you believe that's going to solve all the problems, I have a simple challenge to you, and then I will never criticize you again: Go to small claims court for a morning, four hours and watch all the disputes that wind up going through there, and ask yourselves how many of those a smart contract would have solved?”Every time I bring that point up to someone, they never come back and say, “This is still a good idea.” Maybe I'm a little too anti-computer, a little bit too human these days. But again, most of cloud economics, in my experience, is psychology more than it is math.Sean: I think it's really the truth. And I think that [unintelligible 00:29:06] that I really want to seize on for a second because code and technology as this ultimate arbiter, we've become fascinated with it, not necessarily to our benefit. One of the things you will often see me—to take a line from Game of Thrones—whinging about [laugh] is we are overly focused on utilizing technology, whether code or anything else, to solve what are fundamentally human problems. 
These are problems that are rooted in human tendencies, habits, characters, psychology—as you were saying—that require human interaction and influence, as uncomfortable as that may be to quote-unquote, “Solve.”And the reality of it is, is that the more that we insist upon, trying to use technology to solve those problems—things like cases of equity in terms of generational wealth and things of that sort, things like helping people communicate issues with one another within a software development engineering team—the more we will create complexity and additional problems, and the more we will fracture people's focus and ability to stay focused on what the underlying cause of the problem is, which is something human. And just as a side note, the fundamental idea that code is this ultimate arbiter of truth is terrible because if code was the ultimate arbiter of truth, I wouldn't have a job, Corey. [laugh]. I would be out of business so fast.Corey: Oh, yeah, it's great. It's—ugh, I—it feels like that's a naive perspective that people tend to have early in their career, and Lord knows I did. Everything was so straightforward and simple, back when I was in that era, whereas the older I get, the more the world is shades of nuance.Sean: There are cases where technology can help, but I tend to find those a very specific class of solutions, and even then they can only assist a human with maybe providing some additional context. This is an idea from a Seeking SRE book that I love to reference—I think it's, like, the first chapter—the Chief of Netflix SRE, I think it is, he talks about this is this, solving problems is this thing of relaying context, establishing context—and he focused a lot less on the technology side, a lot more of the human side, and brings in, like, “The technology can help this because it can give you a little bit better insight of how to communicate context, but context is valuable, but you're still going to have to do some talking at the end of the day and establish these human relationships.” And I think that technology can help with a very specific class of insight or context issues, but I would like to reemphasize that is a very specific class, and very specific sort, and most of the human problems we're trying to solve the technology don't fall in there.Corey: I think that's probably a great place for us to call it an episode. I really appreciate the way you view these things. I think that you are one of the most empathetic people that I find myself talking to on an ongoing basis. If people want to learn more, where's the best place to find you?Sean: You can find me on Twitter at S-C—underscore—code, capital U, capital M. That's probably the best place to find me. I'm most frequently on there.Corey: We will, of course, include links to that in the [show notes 00:32:37].Sean: And then, of course, my LinkedIn is not a bad place to reach out. So, you can probably find me there, Sean Corbett, working at TheZebra. And as always, you can reach me at scorbett@thezebra.com. That is my work email; feel free to email me there if you have any questions.Corey: And we will, of course, put links to all of that in the [show notes 00:33:00]. Sean, thank you so much for taking the time to speak with me today. I really appreciate it.Sean: Thank you.Corey: Sean Corbett, Senior Software Development Engineer in Test at TheZebra—because there's only one. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. 
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry ranting comment about how code absolutely speaks, and it is the ultimate arbiter of truth, and oh wait, what's that? The FBI is at the door to make some inquiries about your recent online behavior. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started. Announcer: This has been a HumblePod production. Stay humble.
Do you know which testing company was recently acquired for $200 million? Are you looking for some RPA KPIs and metrics that you can use now? And what is cloud-native application protection? Find out the answers to these and all other end-to-end full-pipeline DevOps automation, performance, and security testing news in this episode of the TestGuild News Show for the week of February 20th. So grab yourself a cup of coffee or tea, and let's do this.

*** Applitools create a free account: https://rcl.ink/xroZw ***

1. QualityWatcher https://links.testguild.com/MEUcA
2. Testsigma https://links.testguild.com/DIv8Y
3. Xray https://links.testguild.com/Yny87
4. Testim https://links.testguild.com/LeJBm
5. SMTP Testing https://links.testguild.com/L0Pg3
6. RPA KPIs https://links.testguild.com/QpvUK
7. Future of Testing http://applitools.info/a5w
8. Shoreline https://links.testguild.com/UYJQS
9. New Relic https://links.testguild.com/GbwUU
10. Deepfactor https://links.testguild.com/NL8Xc
11. WhiteSource https://links.testguild.com/xl9yL
Hey, want to know an example page where you can find all the scenarios to practice your test automation skills? How do you deploy JMeter on AWS using Terraform? And do you know the adoption of APIs is on the rise? How can you make sure that your security is keeping up with this development? Find out the answers to these and all other end-to-end full-pipeline DevOps automation, performance, and security testing news in this episode of the TestGuild News Show for the week of January 23rd. So grab yourself a cup of coffee or tea, and let's do this.

0:29 Applitools Free Account https://rcl.ink/xroZw
1:03 XPath Practice Page https://links.testguild.com/XcbFe
1:42 Applitools wins DevOps Dozen https://links.testguild.com/3NQFy
2:30 Cypress 9.3 https://links.testguild.com/HBiNh
3:00 Testcontainers-java 1.16.3 is out! https://links.testguild.com/xHJ6B
3:47 Migration to the Cloud https://links.testguild.com/csBcC
4:35 Reduce Test Failures in CI/CD Pipelines? https://links.testguild.com/RY6wm
5:08 Verica raises $12M https://links.testguild.com/ZibfC
6:08 Automation Predictions https://links.testguild.com/jgFkA
6:39 Deploy JMeter on AWS using Terraform https://links.testguild.com/vrUnv
7:20 API Security https://links.testguild.com/FO6hl
8:22 Cisco Security Issue https://links.testguild.com/0rz1P
Hey, did you hear that Angie Jones is leaving the testing space? Want to know why AWS went down last week? And do you use Log4j? If so, your software might be at risk. Find out the answers to these and all other end-to-end full-pipeline DevOps automation, performance, and security testing news in this episode of the TestGuild News Show for the week of December 12th. So grab your favorite cup of coffee or tea, and let's do this.

0:25 Applitools https://rcl.ink/xroZw
0:57 Angie Jones leaving Applitools https://links.testguild.com/EheA4
1:47 Karate Labs https://links.testguild.com/6qZKq
2:30 Software Testing Book https://links.testguild.com/t9yTp
3:14 Ansible Azure Cloud https://links.testguild.com/3TxVX
3:53 MockLab https://links.testguild.com/GAZ2Z
4:48 Automation Guild 2022 Reg https://links.testguild.com/2GiHM
5:41 Tips performance testing https://links.testguild.com/TR0W7
6:15 AWS outage https://links.testguild.com/Rj44K
7:35 Log4j exploit https://links.testguild.com/NC9CE
8:35 GitGuardian https://links.testguild.com/owGIF
In this podcast episode, we talked to Applitools' Angie Jones about all things related to test automation: tools, best practices, how to reach a higher level of DevTestOps, what role AI will play in software testing, and a lot more.

About Angie
Angie Jones works as Head of Developer Relations at Applitools and is the founder and Executive Director of Test Automation University. She previously worked as a Senior Software Developer at Twitter and regularly gives talks about JavaScript, software development, and testing best practices. To learn more about her work and upcoming projects, you can follow Angie on her Twitter profile or check out her courses at Test Automation University.

In this episode
We discussed the must-have practices engineering teams should implement into their processes, along with the different challenges that can arise in software testing and the tips & tricks to solve them. We also looked at Angie's maturity framework that helps teams measure how advanced they are and enables them to reach a high level of maturity in DevTestOps.

Some of the most interesting questions we covered in this episode:
What role will AI play in software testing, and how will it impact the day-to-day work of developers?
Which tests should definitely be automated, and which ones are still better done manually?
What is your opinion about the future of codeless testing tools and their effects on the test engineers' role?
How to scale and look after an ever-growing test suite?
How to choose between native and cross-platform mobile test automation frameworks?

Show notes & resources
Angie's website: https://angiejones.tech/
Test Automation University: https://testautomationu.applitools.com/
The Future Tester, by Jason Arbon: https://www.linkedin.com/pulse/future-tester-jason-arbon/
What is the state of AI in quality engineering? How does the new Lighthouse API help you to measure performance and best practices throughout a user flow? Why is having threat modeling skills critical to helping you break into cyber security testing? Find out the answers to these and other end-to-end full-pipeline DevOps, automation, performance, and security testing news in this episode of the TestGuild News Show for the week of Nov 7.

TIME-STAMPED SHOW NOTES:
00:00 What's new
00:25 Sponsored by Applitools
00:58 The current state of artificial intelligence in QA
01:39 Healthy Testing Habits
02:38 Michael Bolton Katalon Studio examination
03:45 Appium Dashboard plugin
04:15 Angles Dashboard Solution
04:47 Testim and Salesforce support
05:24 Lighthouse new API with Playwright
06:15 Soak Testing
06:49 Threat Modeling for Newbies
07:35 New security testing platform Oxeye
08:59 My two year layoff anniversary

NEWS LINKS
Applitools: https://rcl.ink/xroZw
AI Report: https://applitools.info/hwh
Testing Habits: https://links.testguild.com/fHOt0
Katalon Review: https://links.testguild.com/L6Nfe
Appium Plugin: https://links.testguild.com/6XCV4
Angles: https://links.testguild.com/iCbPw
Testim: https://links.testguild.com/KyMhS
Lighthouse: https://links.testguild.com/A3pxv
Soak Testing: https://links.testguild.com/jDH64
Threat Modeling: https://links.testguild.com/8wN9w
Oxeye: https://links.testguild.com/2OU2t
Want to know how to do QA testing in the cloud? How does Pokémon Go scale to millions of requests? And why is the next generation of application security needed? Find out the answers to these and other end-to-end, full-stack pipeline DevOps automation, performance, and security testing questions in this episode of the TestGuild News Show for the week of October 31st. TIME-STAMPED SHOW NOTES: 00:23 Sponsored by Applitools 00:54 QA Cloud Testing Adrian Brociek Azimo 01:50 Prepare QA Teams for Automation Julia Pottinger 02:37 Postman survey from a tester's view with Beth Marshall 03:18 TestAutomationU Angie Jones 100,000 testers! 04:01 Cypress.io new release 04:37 Automation Trends for 2022 05:08 Roblox #SRE performance issues Stephen Townshend 06:46 The Pokémon Company International Priyanka Vergadia James Prompanya 07:47 Autonomous Security Testing David Brumley ForAllSecure, Inc #appsecurity 08:33 625 Million Invicti Security Links mentioned in Today's Episode: *** Applitools free account: https://rcl.ink/xroZw *** 1. https://link.medium.com/hbJCo65bMkb 2. Julia https://bit.ly/3jVZvH5 3. Beth https://bit.ly/3jWIPiK 4. TAU https://prn.to/3GCHgAl 5. Cypress https://bit.ly/3nNEjEf 6. 2022 Trends: https://bit.ly/3nGGIjW 7. Roblox https://bit.ly/2ZIVwa6 8. Pokemon https://bit.ly/31fmoi5 9. Autonomous Security https://bit.ly/3mwwV0K 10. Invicti: https://bit.ly/2ZLMPeU Leave Some Feedback: Did you enjoy this episode? If so, please leave a short review. Connect with Us: TestGuild.com AutomationGuild.com YouTube @joecolantonio @testguilds
Hey, ever wonder how Facebook scales its autonomous testing? Want to know how to interview and hire software performance testing professionals? Do you know the 5 steps organizations can take to simplify multi-cloud security? Find out the answers to these and other end-to-end, full-pipeline automation, performance, security, and DevOps topics in this episode of the TestGuild News Show for the week of Oct 24. So grab your favorite cup of coffee or tea and let's do this! *** Applitools free account: https://rcl.ink/xroZw ***
Want to know some key insights around API testing. Why did Facebook go down for almost a whole day? And did you hear about the huge Twitch hack? Find out the answers to these and all other full-stack end-to-end automation performance security in DevOps testing topics. In this episode of the Test Guild news show for the week of October 10th. So grab your favorite cup of coffee or tea, and let's do this. Exclusive Sponsor This episode of the TestGuild News Show is sponsored by the folks at Applitools. Applitools is a next-generation test automation platform powered by Visual AI. Increase quality, accelerate delivery, and reduce cost with the world's most intelligent test automation platform. Seeing is believing, so create your free account now! Links to News Mentioned in this Episode *** Applitools free account: https://rcl.ink/xroZw *** https://applitools.info/jgu https://www.selenium.dev/blog/2021/selenium-4-rc-2/ https://www.abc12.com/2021/10/06/japanese-startup-autify-raises-10m-series-advance-software-testing-automation-through-no-code-solution/ https://www.ontestautomation.com/on-codeless-automation-or-rather-on-abstraction-layers/ https://www.linkedin.com/posts/menesklou_testing-automation-testautomation-activity-6850724889448484864-U3fJ https://smartbear.com/news/news-releases/smartbear-releases-results-of-2021-state-of-softwa/ https://link.medium.com/HrqlswQh8jb https://www.marketscreener.com/quote/stock/VMWARE-INC-58476/news/VMware-Announcing-availability-of-VMware-Cloud-on-AWS-Outposts-36600466/ https://www.thousandeyes.com/blog/facebook-outage-analysis https://www.linkedin.com/posts/tammybutow_facebookdown-outages-sre-activity-6851556660675137536-RGbJ https://www.linkedin.com/posts/nvanderhoeven_testing-in-public-how-to-plan-a-load-test-activity-6851624350391574528-LNLt https://www.globenewswire.com/news-release/2021/10/06/2309760/0/en/Fluent-Project-Creat%5B%E2%80%A6%5Dirst-Mile-Data-Observability-Platform-for-Enterprises.html https://jobs.apple.com/en-us/details/200295220/site-reliability-engineer-sre-apple-cloud-services?team=SFTWR https://www.pcgamer.com/security-experts-aghast-at-the-scale-of-twitch-hack-this-is-as-bad-as-it-could-possibly-be/
Is headless automation worth it? Do you need to automate a Flutter or React application? Does your application even scale for one user? Did you know that Devices like pacemakers and insulin pumps could be vulnerable to security hacks? Find out the answer to these and other automation testing, performance testing, security testing, DevOps news items for the week of Sep 26. ***** Links for this episode *** Applitools free account: https://rcl.ink/xroZw 1.https://applitools.com/future-of-test... 2.https://link.medium.com/9utUnEhJRjb https://medium.com/geekculture/headle... 3.https://www.toolbox.com/tech/devops/g... 4.https://share.getcloudapp.com/bLuqLv2A 5.https://twitter.com/janmolak/status/7... 6.https://youtu.be/DlGQEQ2q35w 7.https://www.datanami.com/2021/09/23/o... 8.https://www.linkedin.com/posts/jamesl... 9.https://grafana.com/go/webinar/intro-... 10.https://github.com/localstack/localstack 11.https://www.theverge.com/2021/9/21/22... 12.https://techcrunch.com/2021/09/22/lg-...
Is the Page Object Model overrated? Want to see a real-world example of performance testing across the pipeline? Worried about Cobalt Strike on Linux? Discover the answers to these and other automation testing, performance testing, security testing, and DevOps automation topics in this edition of the Test Guild News Show for the week of September 19th. So grab your favorite cup of coffee or tea and let's do this. Links to News Mentioned in this Episode 1) Applitools free account: https://applitools.info/ols 2) https://applitools.info/ols 3) https://eldadu1985.medium.com/8-reasons-page-object-model-is-overrated-9c5d90779b50 https://www.linkedin.com/in/eldad-uzman/ 4) https://www.globalbankingandfinance.com/how-the-worlds-top-financial-organisations-test/ 5) https://medium.com/detesters/how-to-store-testcaf%C3%A9-tests-in-an-influxdb-95f4e59abda8 6) https://sandelk.wixsite.com/craigrisi/post/performance-testing-across-the-pipeline-with-k6 7) https://devops.com/new-relic-survey-shows-greater-appreciation-for-observability/ https://twitter.com/MartkosIT 8) https://www.infoq.com/podcasts/sre-apprentices/ 9) https://searchsecurity.techtarget.com/news/252506642/Hackers-port-Cobalt-Strike-attack-tool-to-Linux
About Angie Angie Jones is a Java Champion and Senior Director who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, writing tutorials and technical articles on angiejones.tech, and leading the online learning platform, Test Automation University.As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.Links: Applitools: https://applitools.com Black Girls Code: https://www.blackgirlscode.com Test Automation University: https://testautomationu.applitools.com Personal website: https://angiejones.tech Twitter: https://twitter.com/techgirl1908 TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by CircleCI. CircleCI is the leading platform for software innovation at scale. With intelligent automation and delivery tools, more than 25,000 engineering organizations worldwide—including most of the ones that you've heard of—are using CircleCI to radically reduce the time from idea to execution to—if you were Google—deprecating the entire product. Check out CircleCI and stop trying to build these things yourself from scratch, when people are solving this problem better than you are internally. I promise. To learn more, visit circleci.com.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. If there's one thing that I have never gotten the hang of, its testing. 
Normally, I just whack the deploy button, throw it out into the general ecosystem, and my monitoring system is usually called ‘customers.' And if I don't want to hear from them, I just stopped answering calls from the support desk. Apparently, that is no longer state of the art because it's been about 15 years. Here to talk about testing from a more responsible direction is Angie Jones, a senior director and developer at Applitools. Thanks for joining me.Angie: Hey, Corey. [laugh]. I am cracking up at your confession there and I appreciate it because you're not unique in that story. I find that a lot of engineers [laugh] follow that same trend.Corey: There are things we talk about and there are the things that we really do instead. We see it all over the place. We talk about infrastructure as code, but everyone clicks around for a few things in the Cloud Console, for example. And so on, and so forth. We all know we should in theory be doing things, but expediency tends to win the day.And for better or worse, talking about testing, in many cases, makes some of us feel better about not actually doing testing. And one of these days, it's one of those, “I really should learn how TDD would work in an approach like this.” But my primary language has always been, well, always been a crappy version of whatever I'm using, but for the last few years, it's been Python. There are whole testing frameworks around all of these things, but I feel like it requires me to actually have good programming practices to begin with which, let's be very clear here, I most assuredly doubt.Angie: [laugh]. That's a fair assessment, but I would also argue, in cases like those, you need testing even more, right? You need something to cover your butt. So, what are you doing? You're just, kind of, living on the edge here?Corey: Sort of. In my case, it's always been that I'll bring in an actual developer who knows what they're doing to—Angie: Ah.Corey: —turn some of my early scripts into actual tools. And the first question is, “Okay, can you explain what this is doing for me?” “Great. So, we're going to throw it away and completely replace it with—so what are the inputs, what are the outputs, and do you want me to preserve the bugs or not?” At which point, it's great.It's more or less like I'm inviting someone to come in and just savage my code, which is apparently also a best practice. But for better or worse, I've never really thought of myself as an engineer, so it's one of those areas where it's it doesn't cut to the core of my identity in any particular way. I do know it would be nice that, oh yeah, when I wind up doing an iterative deployment of a Lambda function or something, if it takes five minutes to get updated, and then I forgot to put a comma in or something ridiculous like that. Yeah. Would have been nice to have something—you know, a pre-commit hook—that caught something like that.Angie: Yeah, yeah. It's interesting. You said, “Well, maybe one of these days, I'll learn.” And that's the issue I find. 
No matter what route you took to learn how to become—whatever you are, software engineer, whatever—testing likely wasn't part of that curriculum.So, we focus—when teaching—very heavily on teaching you how to code and how to build something, but very little, if any, on how to ensure you built the right thing and that it stands the test of time.Corey: My approach has always been well, time to write some code, and it started off as just, as a grumpy systems administrator, it was always shell scripts, which, okay, great. Instead of doing this thing on 15 machines, run upon a for loop and just iterate through them. And in time, you start inheriting other people's crappy tooling, and well, I could rewrite the entire thing and a week-and-a-half, or I could figure out just enough Perl to change that one line in there, and that's how they get you. You sort of stumbled your way into it in that direction. Naive questions I always like to ask around testing that never really get answers for because I don't think to ask these when other people are in the room and it's not two o'clock in the morning and the power is gone out.You have a basic linter test of, do you have basic syntax errors in the code? Will it run? Seems to be a sort of baseline, easy acceptance test. But then you get into higher-level testing of unit tests, integration tests, and a bunch of others I'm sure I'm glossing over because—to be direct—I tend to conflate all these in my head. What is the hierarchy of testing if there is such a thing?Angie: Yeah, so Mike Cohn actually created a model that is very heavily used within the industry, and it's called the ‘Test Automation Pyramid.' And what this model suggests is that you have your unit tests; you have some kind of, like, integration-type tests in the middle, and then you have these end-to-end tests on top. So, think of a pyramid divided into three sections. But that's not divided equally; the largest part of that pyramid, which is the base, is the unit test. So, this suggests that the bulk of your test suite should comprise of unit tests.The idea here is that these are very small, they're very targeted, meaning they're easier to write, they take less time to run, and if you have an error, it kind of pinpoints exactly what's wrong in the system. So, these are great. The next level would be your integration. So, now how do two units integrate together? So, you can test this layer multiple different ways: it might be with APIs, it might be the business logic itself, you know, calling into functions or something like that.And this one is smaller than the unit test but not as large as the final part, which is the end-to-end test. And that one is your smallest piece, and it doesn't even have to be end-to-end. It could be UI, actually. That's how it's labeled by Mike Cohn in his book: UI tests. So, the UI tests, these are going to be your most fragile tests, these are going to take the most time to write as well as the most time to execute.If something goes wrong, you have to dig down to figure out what exactly broke to make this happen. So, this should be the smallest chunk of your overall testing strategy.Corey: People far smarter than I have said that in many cases—along with access—testing, and monitoring—or observability, which is apparently a term for hipster monitoring—are lying on the same axis. 
Where in the olden days of systems administration, you can ping the machine and it responds just fine, but the only thing that's left on that crashed machine is just enough of the network stack to return a ping, so everything except the thing that tells you it's fine is in fact broken. So, as you wind up building more and more sophisticated applications, the idea being that the testing and the ‘is everything all right' monitoring ping tends to, more or less, coalesce into the same thing. Is that accurate from your view of the world? Is that something that is an oversimplification of something much more nuanced? Or did I completely misunderstand what they were saying, which is perfectly possible?Angie: You kind of lost me somewhere in the middle. So, I'm just going to nod and say yes. [laugh].Corey: [laugh]. No, no, it—the hard part that I've always found is… I lie to myself, when I'm writing code: “Oh, I don't need to write a unit test for this,” because I'd gotten it working, I tested it with something that I know is good, it returns what I expect; I tested with something bad and well, some undefined behavior happens—because that's a normal thing to happen with code—and great, I don't need to have a test for that because I've already got it working. Problem solved.Angie: Right. Right.Corey: It's a great lie.Angie: Yeah.Corey: And then I make a change later on that, in fact, does break it. It's the, “But I'm writing this code once and why would I ever go back to this code and write it again? It's just a quick-and-dirty patch that only needs to exist for a couple of weeks.” Yeah, the todo: remove this later, and that code segment winds up being load-bearing decades into the future. I'm like, “Yeah, one of these days, someone's going to go back and clean up all of my code for me.” Like, the code fairies are going to come in the middle of the night with the elves, and tidy everything up. I would love to hire those mythical creatures, but can't find them.Angie: This mythical sprint, where it's, “Oh, let's only clean up this entire sprint.” You know, everybody's kind of holding out and waiting for that. But no, you hit the nail on the head with the reason why you need to automate your tests, essentially. So, I find a lot of newer folks to the space, they really don't understand, why on earth would I spend time writing code to represent this test? Just like you said, “I implemented the feature. I tried it out, it worked.” [laugh]. “And hey, I even tried a non-happy path. And when it broke, I had a nice little error message to tell the user what to do.”And they feel really good about that, so they can't understand, “Why would I invest the time—which I don't have—to write some tests?” The reason for that it's just as you said: this is for regression. Unless that's the end of this application and you're not going to touch it ever again for any reason, then you need to write some tests [laugh] because you're going to constantly change the application, whether that be refactoring, whether that be adding new features to it, it's going to change in some way and you cannot be sure that the tests of yesterday still work today because whenever you make the change, you're just going to poke around manually at that little area not realizing there could be some integration things that you totally screwed up here and you miss that until it goes out into prod.Corey: The worst developer I've ever met—hands down—was me, six months before I'm looking at whatever it is that I've written. 
And given that I do a lot of my stuff in a vacuum and I'm the only person to ever touch these repositories, I could run Git blame, but I already know exactly what it's going to tell me—Angie: “It's me.” [laugh].Corey: —so we're just going to skip that part. Like it's a test. And, “Yeah, we're just going to try and fix that and never speak about it again.” But I can't count the number of times I have looked at code that I've written—and I do mean written; not blindly copy-and-pasted out of Stack Overflow, but actually wrote, and at the time, I understood exactly what it did—and then I look at it, and it is, “What on earth was I thinking? What—what—it technically doesn't even return anything; it can't be doing anything. I can just remove that piece entirely.” And the whole thing breaks.I've out-clevered myself in many respects. And I love the idea, the vision, that testing would catch these things as I'm making those changes, but then I never do it. It's getting started down that path and developing a more nuanced, and dare I say it, formal understanding of the art and science of software development. Always feels like the sort of thing I'll get to one of these days, but never actually got around to. Nowadays, my testing strategy is to just actually deploy things into someone else's account and hope for the best.And, “Oh, good. Well, everyone has a test account; ideally, it's not their own production account.” And then we start to expand on beyond that. You have come to this from a very different direction in a number of different ways. You are—among other things—a Java Champion, which makes it sound like you fought the final boss at the end of the developer internet. And they sound really hard. What is a Java Champion?Angie: Yeah. So, a Java Champion is essentially an influencer in the Java ecosystem. You can't just call yourself this; like you say, you got to fight the guy at the end, you know? But seriously, in order to become one, a current Java Champion has to nominate you, and all of the other Java Champions has to review your package, basically looking at your work. What have you contributed to the developer community, in terms of Java?So, I've done a number of courses that I've taught; I've taught at the university level, as well; I am always talking about testing and using Java to show how to do that, as well as talks and all of this stuff. So apparently, I had enough [laugh] for folks to vote me in. So, it is an organization that's kind of ordained by Oracle, the Gods of Java. So, it's a great accomplishment for me. I'm extremely happy about it. And just so happens to be the first black woman to become a Java Champion. So, the news made a big deal about that. [laugh].Corey: Congratulations. Anytime you wind up getting that level of recognition in any given ecosystem, it's something to stop and take note of. But that's compounded by just the sheer scale and scope of the Java community as a whole. 
Every big tech company I know has inordinate amounts of Java scattered throughout their infrastructure, a lot of their core services are written in Java, which makes me feel increasingly strange for not really knowing anything about it, other than that, it's big and that there are—this entire ecosystem of IDEs, and frameworks, and ways to approach these things that it feels like those of us playing around in crappy bash-scripting-land have the exact opposite experience of, "Oh, I'm just going to fire up an empty page and fill it with a bunch of weird commands and run it, and it fails, and run it again, and it fails. And it finally succeeds when I fixed all the syntax errors, and that's great." It feels like there is a much more structured approach to writing Java compared to other languages, be they scripts or full-on languages. Angie: Yeah. That's been a gift and a curse of the language. So, as newer frameworks have come out, or even as JavaScript has made its way to the front of the line, people start looking at Java, it's kind of bloated, and all of these rules and structures were in place, but that feels like boilerplate stuff and cumbersome in today's development space. So, fortunately, the powers that be have been doing a lot of changes in Java. We went for quite a while where releases were about, mmm, every three years or so. And now they've committed to releases every six months. So, [laugh] most people are on Java 8 still, but we're actually at, like, Java 16, now. So, now it's kind of hard to keep up but that makes it fun as well. There's all of these newer features and new capabilities, and now you can even do functional programming in Java, so it's pretty nice. Corey: The question I have is, does testing lend itself more easily to Java versus other languages? And I promise I'm not trying to start a language war here. I just know that, "Well, how do I effectively test my Python code?" leads to a whole bunch of, "Well, it depends." It's like asking an attorney any question on the planet; same story. Like, "Well, it really depends on a whole bunch of things." Is it a clearer, more structured path in Java, or is it still the same murk: there are 15 different ways to do it and whichever one you pick, there's a whole cacophony of folks telling you you've done it wrong? Angie: Yeah, that's a very interesting question. I haven't dug into that deeply, but Java is by far the most popular programming language for UI test automation. And I wonder why that is, because you don't use Java for building the front end. You use JavaScript. I don't know how this ca—I—well, I do know how it came to be. Like, back in the day, when we first started doing test automation, JavaScript was a joke, right? People would laugh at you if you said that you were going to use JavaScript. It's, you know, "I'm going to learn JavaScript and try to enter the workforce." So, you know, that was a big no-no, and kind of a joke back then. So, Java was what a lot of your developers were using even if they were only using it for the backend, maybe. You didn't really have a [unintelligible 00:16:32] language on the client-side, back then. You had your PHP on the back end, you just did some HTML and some CSS on the front end. So, there wasn't a whole lot of scripting going on back then. So, Java was the language that people chose to use. And so there's a whole community out there for Java and testing. Like, the libraries are very mature, there's open-source products and things like this.
So, this is by far the most popular language that people use, no matter what their application is built in.This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking databases, observability, management, and security.And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking load, balancing and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build.With Always Free you can do things like run small scale applications, or do proof of concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free.Corey: If I were looking to get a job in enterprise these days, it feels like Java is the direction to go in, with the counterpoint that, let's say that I go the path that I went through: I don't have a college degree; I don't have a high school diploma. If I were to start out trying to be a software engineering today, or advising someone to do the same, it feels like the lingua franca of everything today seems to be JavaScript in many different respects. It does front end; it does back end; people love to complain about it, so you know it's valid. To be clear, I find myself befuddled every time I pick it up. I'm not coming at this from a JavaScript fanboy perspective in any respect.The asynchronous execution flow always messes with my head and leaves me with more questions than answers. Is that assessment though—of starting languages—accurate? Are there cases where Java is absolutely the right answer, as far as what to learn first?Angie: Yeah. So, I first started with C++, and then I learned Java. Well, what I find is, Java because it's so strict—it's a statically typed language, and there's lots of rules, and you really need to understand paradigms and stuff like that with this language—it's harder to learn, but once you learn it, it's much easier to pick up other languages, even if they're dynamically typed, you know? So, that's been my experience with this. As far as jobs, so the last time I looked at this, someone did some research and wrote it up—this was 2019—and they looked at the job openings available at the time, and they divided it by language. And Java was at, like, 65,000 jobs open, Python was a close second was 62,000, and JavaScript was third place with 39,000.So, quite a big difference. But if you looked at tech Twitter, you'd think, like, JavaScript is all there is. Most of my followers and folks that I follow are JavaScript folks, front-end folks. So, it is a language I think you definitely need to learn; it's becoming more and more prevalent. If you're going to do any sort of web app, [laugh] you definitely want to know it.So, I'm definitely not saying, “Oh, just learn Java and that's it.” I think there's definitely a need for adding JavaScript to your repertoire. 
But Java, there does seem to be more jobs, especially the big enterprise-type jobs, in Java.Corey: The reason I ask so much about some of the early-stage stuff is that in your spare time—which it sounds like you have so much of these days—you volunteer with Black Girls Code to help teach coding workshops to young girls in an effort to attract more women and minorities to tech. Which is phenomenal. Few years ago, I was a volunteer instructor for Year Up before people really realized, “Oh, maybe having an instructor who teaches by counterexample isn't necessarily the best approach of teaching folks who are new to the space.”But the curriculum I was given for teaching people how Linux worked and how to build a web servers and the rest, started off with a three-day module on how to use VI, an arcane text editor that no one understands, and the only reason we use it is because we don't know how to quit it.Angie: [laugh].Corey: And that's great and all, but I'm looking at this and my immediate impression was, “We're scrapping that, replacing it with nano,” which is basically what you see is what you get, and something that everyone can understand and appreciate without three days of training. And it felt an awful lot like we're teaching people VI almost as a form of gatekeeping. I'm curious; when you presumably go down the path of teaching people who are brand-new to the space? How do you wind up presenting testing as something that they should start with? Because it feels like a thing you have to know first before you can start building anything at scale, but it resonates, on some level, with feeling like it's, ah, you must be able to learn this religion first; then you'll be able to go and proceed further. How do you square that circle?Angie: Yeah. So, I had the privilege of being an adjunct professor at a college, and I taught Java programming to freshmen. This was really interesting because there's so much to teach, and this is true of all the courses. So, when I say that they don't include it in the curriculum, that's not really that much of a slight on them. Like, it's just so much you have to cover.So I, me, the testing guru, I still couldn't find space to devote an entire sitting, a chapter, or whatever on testing. So, I kind of wove it into my teaching style. So, I would just teach the concept, let's say I'm teaching loops today, and I'll have a little exercise that you do in class. So, we do things together, and then I say, okay, now you try it by yourself. Here's a problem; call me over when you're done.And as they would call me over when they're done, I would break it; I would break their code, right? I'd do some input that they weren't expecting and all of a sudden is broken. And they started expecting me to do this, you know? “She's going to come and she's going to break my stuff.” So, they start thinking themselves, “Let me test it before I give it to my user,” who is Professor Angie, or whatever.So, that's how I taught them that. Same with homework assignments. So, they would submit it, I would treat it like a code review, go through line by line, I didn't have any automated systems to test their homework assignments. 
I did it like a code review, gave them feedback on how to improve their style, but also I would try to break it and give them, “Here's all the areas that you didn't think of.” So, that was my way of teaching them that quality matters in how to think about beyond the requirement.The requirement is going to say, “Someone needs to be able to log in.” It's not going to give you all of the things that should happen, you know if there's a wrong password, so these are things, as an engineer, you need to think beyond that one line requirement that you've got and realize that this is part of it as well.Corey: So, it's almost a matter of giving people context beyond just the writing of the code, which frankly, seems to be something that's been missing for many aspects of engineering culture for a while, the understanding the people involved, understanding that it is not just you, or your department, or even your company in some cases.Angie: Exactly. And I tried to stress that very heavily in each lecture: who is your end-user? And your end-user cannot see your code, they cannot see your comments in the code that's telling them, “Make sure you input it this way,” or whatever. None of that is seen so you have to be very explicit in your messages, and your intent, and behavior with the end-user.Corey: One last area I wanted to cover with you, when I was doing some research on you before the show, is that you are an IBM Master Inventor, which I had no idea what that was. Is that a term of art? Let me Google it. And it turns out that you have, according to LinkedIn at least, 27 patents in your name. And it's, “Oh.”Yeah, it's one of those areas where you look at something like, what gives someone the hubris to call themselves—or the grounds to call themselves that? And, “Oh, yeah. Oh, they're super accomplished, and they have a demonstrated track record of inventing things that are substantial and meaningful. I guess that would do it.” I'd never heard the term until now. What is that? And how are you that prolific, for lack of a better term?Angie: Yeah, so I used to work at IBM and they're really big on innovation. And I haven't kept track in a while, but for many, many years, they were the number one producer of patents [laugh] of this year or whatever. So, it was kind of in the culture to innovate. Now, I will say, like, a very small percentage of people—employees—there would take it as far as I did to actually go and patent something—[laugh]—Corey: Oh, it's the ‘don't offer if you're not serious,' model.Angie: Yeah. [laugh]. But I mean, it was there; it was a program there where, hey, you got an idea for a software patent? Write it up, we'll have our lawyers, our IP lawyers review it, and then they'll take your little one-page doc and turn it into a twenty-five-page legal document that we submit to the USPTO—United States Patent Trademark Office—who then reviews it and decides if this is novel enough and grants it, or dismisses it. And, “Hey, we'll pay you for these patents. We'll pay for the whole process.” And so I thought, “Heck, why not?”And I kind of got hooked. [laugh]. So, it just so happens that I got a lot of good ideas. And I would collaborate with people from other areas of the business, and it was an excellent way for me to learn about new technologies. 
If something new was coming out, I would jump on that to explore, play with it, and think about, are there any problems that this technology is not aimed to solve, but if I tweak it in some way, or if I integrate it with some other concept or some other technology, do I get something unique and novel here?And it got to the point where I just started walking through life and as I'm hit with problems—like, I'll give you an example. I'm in the grocery store, right, and this inevitably happens to everyone, what, you choose the wrong line in the grocery store. “This one looks like it's moving, I'm going to go here.” And then the whole time, you're looking to your right, and that line is moving. And you're, like, stuck.Corey: Every single time.Angie: Every time. So, it got—[laugh]—Corey: Toll booths are the same way.Angie: —it got to the point where I started recognizing when I'm frustrated, and say, “This is a problem. How can I use tech to solve this?” And so I, in that problem, I came up with this solution of how I could be able to tell which one of these is the right line to get into. And that consisted of lots of things like scanning the things in everyone's cart. On your cart, you have these smart carts that know what's inside of them, polling the customers' spending or their behavior; so are they going to come up here and send the clerk back to go get cigarettes, or alcohol, or are they going to pull out 50 coupons? Are they going to write a check, which takes longer?So, kind of factoring in all of these habitual behaviors and what's in your cart right now, and determining an overall processing time. And that way, if you display that over each queue, which one would be the fastest to get into. So, things like that is what I started doing and patenting.Corey: Well, my favorite part of that story is that it is clearly a deeply technical insight into this, but you've told the story in a way that someone who is not themselves deeply technical can wrap their heads around. And I just—making sure you're aware of exactly how rare and valuable that particular skill set is. So, often there are people who are so in love with a technology that they cannot explain to another living soul who is not equally in love with that technology. That alone is one of the biggest reasons I wanted to have you on this show was your repeated, demonstrated ability to explain complex things simply in a way that—I know this is anathema for the tech industry—that is not condescending. I come away feeling I understand what you were talking about, now.Angie: Thank you so much. That is one of the skills I pride myself on. When I give talks, I want everyone in that room to understand it, even if they're not technical. And lots of times I've had comments from anyone from, like, the janitor to the folks who are working A/V who, they don't work with computers or anything at all and they've come to me after these talks like, “Okay, I heard a lot of talks in here. Everybody is over my head. I understood everything you said. Thank you.” And yet it's still beneficial to those who are deeply technical as well. Thank you so much for that.Corey: No, it's a very valuable thing and it's what I look for the most. In fact, my last question for you is tying around that exact thing. You have convinced me. I want to learn more about test automation, and learn how this works and with an eye toward possibly one day applying it to some of my crappy nonsense that I'm writing. 
Other than going on Google and typing in a variety of search terms that will lead me to, probably, a Stack Overflow thread that has been closed as off-topic, but still left up to pollute Google search results, where should I go?Angie: Yeah. So, I've actually started an entire university devoted to testing, and it's called Test Automation Universityand I got my employer, Applitools, to sponsor this, so all of the courses are free.And they are taught by myself as well as other leading experts in the test automation space. So, you know that it's trusted; I vet all of the instructors, I'm very [laugh] involved in going through their material and making sure that it's correct and accurate so the courses are of top quality. We have about a little over 85,000 students at Test Automation University, so you definitely need to become one if you want to learn more about testing. And we cover all of the languages, so Java, JavaScript, Python, Ruby, we have all of the frameworks, we have things around mobile testing, UI testing, unit testing, API testing. So, whatever it is that you need, we got you covered.Corey: You also go further than that; you don't just break it down by language, you break it down by use case. If I—Angie: Yeah.Corey: —look at Python, for example, you've got a Web UI path, you've got an—Angie: Exactly.Corey: API path, you've got a mobile path. It aligns not just with the language but with the use case, in many respects.Angie: Mm-hm.Corey: I'm really glad I asked that question, and we will, of course, include a link to that in the [show notes 00:31:10]. Thank you so much for taking the time to speak with me. If people want to learn more, other than going to Test Automation University, where can they find you?Angie: Mm-hm. So, my website is angiejones.tech—T-E-C-H—and I blog about test automation strategies and techniques there, so lots of good info there. I also keep my calendar of events there, so if you wanted to hear me speak or one of my talks, you can find that information there. And I live on Twitter, so definitely give me a follow. It's @techgirl1908.Corey: And we will, of course, include links to all of that. Thank you so much for being so generous with your time and insight. I really appreciate it.Angie: Yeah, thank you so much for having me. This was fun.Corey: Angie Jones, Java Champion and senior director at Applitools. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice along with a long, ranting, incoherent comment that fails to save because someone on that platform failed to write a test.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
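If you want to see what the base of the pyramid Angie describes in this episode looks like in practice, here is a minimal sketch of some unit tests in Java with JUnit 5; the ShoppingCart class and its methods are purely illustrative and not taken from the episode.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: a tiny cart that sums line-item prices.
class ShoppingCart {
    private final java.util.List<Integer> pricesInCents = new java.util.ArrayList<>();

    void addItem(int priceInCents) {
        if (priceInCents < 0) {
            throw new IllegalArgumentException("price must be non-negative");
        }
        pricesInCents.add(priceInCents);
    }

    int totalInCents() {
        return pricesInCents.stream().mapToInt(Integer::intValue).sum();
    }
}

// Unit tests: small, targeted, and fast. This is the wide base of the pyramid.
class ShoppingCartTest {

    @Test
    void totalIsSumOfItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem(250);
        cart.addItem(199);
        assertEquals(449, cart.totalInCents());
    }

    @Test
    void emptyCartTotalsZero() {
        assertEquals(0, new ShoppingCart().totalInCents());
    }

    @Test
    void negativePriceIsRejected() {
        // The non-happy path the episode talks about: guard against bad input.
        assertThrows(IllegalArgumentException.class,
                () -> new ShoppingCart().addItem(-1));
    }
}
```

Because tests like these touch no browser or network, they run in milliseconds and point straight at the unit that failed, which is exactly why the pyramid puts the bulk of a suite at this level; the integration and UI layers above it get progressively fewer, slower tests.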
You can find Angie's blog here, catch her on Twitter here, and connect with her on LinkedIn here. You can check out Applitools and learn about the visual AI system it uses for testing here. Our lifeboat badge of the week goes to Alex Klyubin for explaining: What is the difference between Jar signer and Apk signer?
Espresso Talks welcomes Yarden Naveh, VP of Customer Success & Support at Applitools! He is an experienced Vice President of Customer Success with a demonstrated history of working in the computer software industry, skilled in many programming languages (including Java, JS, and C#) as well as management and optimization, and a strong Customer Success professional with a track record of building a successful CS department from the bottom up. Applitools is on a mission to help Test Automation, DevOps, and Development teams release and monitor flawless mobile, web, and native apps. It is the only commercial-grade, visual AI-based cloud engine that validates all the visual aspects of any web, mobile, and native app in a fully automated way. It supports Continuous Deployment in multi-device, multi-browser environments.
Jason Lengstorf Twitter GitHub LinkedIn Jason.af Lengstorf.com Learn with Jason Twitch Home Page Links GrAMPS Let's Learn RedwoodJS with Anthony Campolo Serverless GraphQL with Hasura with Christian Nwamba Build a Portfolio Site with Sanity.io and Gatsby with Espen Hovlandsdal Visual Testing Using Cypress and Applitools with Angie Jones Jamstack Explorers ★ Support this podcast ★
In this episode of OpenHive.JS, we talk to Gil Tayar, whose fascination with software development has not dimmed over 30 years. Passionate about distributed systems and scaling development to big teams, Gil has worked at companies including Wix and Applitools and is currently a software architect at Roundforest. Today, he talks about ECMAScript modules (ESM) with JavaScript. Welcome back to OpenHive.JS.
The DevRel team welcomes Colby Fayock, a developer advocate from Applitools. Colby breaks down his Next WordPress Starter kit, discusses the benefits and drawbacks of headless, and talks about upcoming tech he's excited about. Colby's Starter: https://github.com/colbyfayock/next-wordpress-starter Colby's YouTube: https://www.youtube.com/colbyfayock Colby's Blog: https://spacejelly.dev/
In this episode, we talk about test automation with Angie Jones, senior director of developer relations at Applitools, and creator of Test Automation University. Show Notes DevNews (sponsor) CodeNewbie (sponsor) RudderStack (sponsor) Cockroach Labs (sponsor) Cloudways (sponsor) Applitools Test Automation University Selenium Webdriver Test automation
What would happen if you eliminated all of the processes that surround your software development teams today? Adam Carmi, CTO of Applitools, joins me to discuss how his methodology works, and why you need to start eliminating processes too. Join the Dev Interrupted discord community: https://discord.gg/tpkmwM6c3g
Angie Jones, Java Champion and Principal Developer Advocate at Applitools, walks Adi Polak through the concept of visual testing, how data & AI assist with test automation, and how Applitools partners with Microsoft. Follow @CH9 http://www.twitter.com/ch9 Follow @TechExceptions https://twitter.com/TechExceptions Follow @AdiPolak https://twitter.com/AdiPolak
Oren is the Founder and CEO of Testim.io. He has over 20 years of experience in tech, focusing on products for developers at IBM, Wix, Cadence, Applitools, and now Testim.io. In addition to being a busy entrepreneur, Oren is a community activist, the co-organizer of several meetups and conferences, mentored at the Google Launchpad Accelerator, and taught at Technion University.
From the olden days of DOS, Gil was, is, and always will be a software dev. He has co-founded WebCollage and designed cloudy projects at Wix. His current passion is figuring out how to test software as the Senior Architect at Applitools.
In today's episode, I interviewed Anand Bagmar from India. He has more than 20 years of experience in the software industry and today he's a Quality Evangelist and Solution Architect at Applitools. In this episode, we will learn more about visual testing and how it can enhance your test automation. Of course, we talked about Applitools, which is one of the best tools for this kind of testing. If you are involved with or interested in test automation, you will learn some very important things about it, including the importance of following a value-based and risk-based approach. Read the transcript of this episode here: https://abstracta.us/blog/podcast/visual-testing/ Get a free trial of Applitools: https://applitools.com/ Access courses to improve your test automation skills: https://testautomationu.applitools.com/ Follow Anand on Twitter: https://twitter.com/BagmarAnand
Welcome to another episode of Action & Ambition with your host, Andrew Medal. Today's guest is Gil Sever, a serial entrepreneur who is driven to solve technical problems with innovative solutions that didn't previously exist. Prior to Applitools, he founded Safend (acquired by Wave Systems in 2011) and Storwize (acquired by IBM in 2010), and was the COO at Ectel, a NASDAQ company. Prior to this, Gil served in an elite intelligence unit of the Israeli Defense Forces and held a variety of research, development, and management positions. Gil has a B.Sc. degree in Electrical Engineering from the Technion, Israel Institute of Technology, and an M.Sc. degree in Electrical Engineering from Tel Aviv University. You're going to love this episode. Let's get to it!
In this episode, we’re talking about testing code with Angie Jones, Senior Developer Advocate at Applitools, and former Senior Software Engineer in Test at Twitter. Angie talks about how she got into testing, some of the testing and problems she had to solve while working at Twitter, and why all developers should understand the basics of testing. Show Links Digital Ocean (sponsor) MongoDB (sponsor) Heroku (sponsor) TwilioQuest (sponsor) Applitools C++ Test automation Java Software widget Library Heuristic Pair programming Faker Application programming interface (API) Boolean expression Test Automation Frameworks Codebase Unit testing UI (User interface) Code review Test Automation University JavaScript Debugging React Ministry of Testing Conditional For loop Data structure Language-agnostic
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Looking for more ways to support diversity in tech? In this episode, Tracy Lee, founder of This Dot, and Moshe Milman, co-founder of Applitools, will discuss This Dot's new Open Source Apprentice program. Listen in to discover how this initiative invests in open source and enables energetic and passionate junior developers to achieve This Dot's and Applitools' vision of supporting tech diversity through open-source programs and beyond.
Homework: Go through Angie's Visual Testing Course: Automated Visual Testing: A Fast Path To Test Automation Success. Visual testing is like snapshot testing with images. So when your application is in the state that you want it to be in, you verify this as a human being, and then utilize tools to take a picture of your application in that state. Visual testing isn't a new concept, but the technology was previously flaky. But now, Applitools is using AI and machine learning to detect only the things that we care about as human beings. Visual testing catches issues that your scripts won't detect, and Applitools is especially powerful at it. The processing gets offloaded onto the Applitools servers, and snapshots of your app are tested on multiple platforms so you can be confident that no visual bugs get created anywhere! Transcript: "Angie Jones Chats With Kent About Automated Visual Testing" Resources: Test Automation University, Automated Visual Testing: A Fast Path To Test Automation Success, Angie Jones (Twitter, GitHub, Website, LinkedIn), Kent C. Dodds (Website, Twitter, GitHub, YouTube), Testing JavaScript
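For a concrete sense of what this looks like in code, here is a minimal sketch using the Applitools Eyes SDK for Selenium in Java. The URL, app name, and test name are placeholder assumptions, and it presumes an APPLITOOLS_API_KEY environment variable plus a local ChromeDriver setup.

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        // The API key identifies your Applitools account (assumed to be set in the environment).
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // Start a visual test; the app and test names group results in the dashboard.
            WebDriver eyesDriver = eyes.open(driver, "Demo App", "Login page renders correctly");

            // Drive the app into the state you want to verify (placeholder URL).
            eyesDriver.get("https://example.com/login");

            // Take a snapshot; Applitools compares it against the stored baseline.
            eyes.checkWindow("Login page");

            // End the test and fetch the result (throws if visual differences were found).
            eyes.close();
        } finally {
            driver.quit();
            eyes.abortIfNotClosed(); // Clean up if the test never reached close().
        }
    }
}
```

On the first run the snapshot simply becomes the baseline; on later runs the Visual AI comparison flags only the kinds of differences a human reviewer would care about, which is the behavior described above.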
Have you ever wondered what it takes to be an effective teacher in the tech industry? Well wonder no more! We had the pleasure of chatting with Angie Jones, Senior Developer Advocate at Applitools and Director at Test Automation University about her experience as a teacher. Angie talks to us about her teaching and learning styles and shares some advice for those looking to get into the world of teaching. In this episode, we discussed how to teach to multiple skill levels, common misconceptions about being a teacher, and much more! For the full show notes and links to the speakers, check out our website!
Sponsors Sentry use the code “devchat” for 2 months free on Sentry small plan CacheFly Host: Charles Max Wood Special Guest: Gil Tayar Episode Summary In this episode of My JavaScript Story, Charles Max Wood hosts Gil Tayar, a Senior Architect at Applitools from Israel. Listen to Gil on the podcast JavaScript Jabber Testing in JavaScript with Gil Tayar. Gil started his developing journey when he was 13 years old. He continued his training during his military service and became an instructor for the PC unit. During this time, he learned and taught C, C++ and Windows. He then started working for Wix before he went onto co-found his own startup. You can listen to Dan Shappir, another developer from Wix that has been a guest on the podcast JavaScript Jabber on this episode. During this experience Gil realized he loves the coding side of the business but not the management side. Gil also loves testing and he very much enjoys his work at Applitools. As a Senior Architect in Applitools R&D, he has designed and built Applitools' Rendering Service. Links JavaScript Jabber: Testing in JavaScript with Gil Tayar JavaScript Jabber: “Web Performance API” with Dan Shappir Start-up Nation: The Story of Israel's Economic Miracle by Dan Senor and Saul Singer WIX Gil’s LinkedIn Gil’s Twitter Gil’s Medium Applitools Kubernetes https://devchat.tv/my-javascript-story/ Picks Gil Tayar: The Polish German War The Great War 1919 Channel Peaky Blinders My Struggle (Knausgård novels) Charles Max Wood: The MFCEO Project Podcast - Andy Frisella The #AskGaryVee Show podcast! - Gary Vaynerchuk A Farewell to Arms by Ernest Hemingway
In this episode of DTV, Applitools CEO Gil Sever speaks with Avery Lyford, chief customer officer at Infostretch, about the importance of marrying the technological and human aspects of digital transformation. They also discuss the opportunity cost of not optimizing the entire digital user experience.
Jim Baum is currently a Venture Partner at OpenView and serves on multiple boards including Logz.io, Applitools, project44 and DataStax. Jim previously served as President and CEO of Netezza where he drove the nearly $2B acquisition of Netezza by IBM in 2010. Jim also led Endeca’s early rise from tiny startup to the leading provider of innovative information access and delivery software solutions. Prior to Endeca and Netezza, he served at PTC as an Executive Vice President and General Manager. On this episode, Jim discusses his experience as a venture partner and the difference between joining a board as an investor vs. an independent board member, why sitting on a board is a great opportunity for executives and advice for expansion stage CEOs looking for the right board members.
Angie speaks all over the world on test automation strategies, and she got Scott excited about Selenium again! She keynoted Selenium Conf 2018 and currently works at Applitools making automated visual testing tools. She has most recently launched a new "Test Automation University" that's free and community driven. http://testautomationu.com http://angiejones.tech
Panel: Aimee Knight AJ O’Neal Charles Max Wood Special Guest: Gil Tayar In this episode, the panel talks with Gil Tayar, a software engineer residing in Tel Aviv who is currently the Senior Architect at Applitools in Israel. The panel and the guest talk about the different types of tests and when and how to use each kind of test in a particular situation. They also mention Node, React, Selenium, Puppeteer, and much more! Show Topics: 0:00 – Advertisement: KENDO UI 0:35 – Chuck: Our panel is AJ, Aimee, myself – and our special guest is Gil Tayar. Tell us why you are famous! 1:13 – Gil talks about where he resides and his background. 2:27 – Chuck: What is the landscape like now with testing and testing tools? 2:39 – Guest: There is a huge renaissance in the JavaScript community. Testing has moved forward in the frontend and backend. Today we have lots of testing tools. We can do frontend testing that wasn’t possible 5 years ago. The major change was React. The guest talks about Node, React, tools, and more! 4:17 – Aimee: I advocate for tests and testing. There is a grey area though...how do you treat that? If you have to get something into production, but it’s not THE thing to get into production, does that fall into product or...what? 5:02 – Guest: We decided to test everything in the beginning. We actually came through and did that, and since then I don’t think I can write code the right way without testing. There are a lot of different situations, though, to consider. The guest gives hypothetical situations that people could face. 6:27 – Aimee. 6:32 – Guest: The horror of changing code without tests...I don’t know, I haven’t done that for a while. You write with fear in your heart. Your design is driven by fear, and not by what you think is right. In the beginning don’t write those tests, but... 7:22 – Aimee: I totally agree and I could go on and on and on. 7:42 – Panel: I want to do tests when I know they will create value. I don’t want to do it b/c it’s a mundane thing. Secondly, I find that sometimes I am in a situation where I cannot write the test b/c I would have to know the business logic is correct. I am in this discovery mode of what is the business logic? I am not just building your app. I guess I just need advice in this area. 8:55 – Guest gives advice on the panelist’s question. He mentions how there are two schools of thought. 10:20 – Guest: Don’t mock too much. 10:54 – Panel: Are unit tests the easiest? I just reach for unit testing b/c it helps me code faster. But 90% of my code is NOT that. 11:18 – Guest: Exactly! Most of our code is glue – gluing together a bunch of different stuff! Those are best tested as a medium-sized integration suite. 12:39 – Panel: That seems like a lot of work, though! I loathe the database stuff b/c they don’t map cleanly. I hate this database stuff. 13:06 – Guest: I agree, but don’t knock the database – knock the level above the database. 13:49 – Guest: Yes, it takes time! Building the script and the testing tools, but when you have it then adding to it is zero time. Once you are in the air it’s smooth sailing. 14:17 – Panel: I guess I can see that. I like to do the dumb way the first time. I am not clear on the transition. 14:47 – Guest: Write the code, and then write the tests. The guest gives a hypothetical situation on how/when to test in a certain situation. 16:25 – Panel: Can you talk about that more, please? 16:50 – Guest: Don’t put them in the same unit – keep browser and business logic stuff separated. 
The real business logic stuff needs to be above that level. The first principle is separation of concerns. 18:04 – Panel talks about dependency injection and asks a question. 18:27 – Guest: What I am talking about is very, very light dependency injection. 19:19 – Panel: You have a main function and you are doing requires in the main function. You are passing the pieces of that into the components that need it. 19:44 – Guest: I only do it when it’s necessary; it’s not a religion for me. I do it only for those layers that I know will need to be mocked, like database layers, etc. 20:09 – Panel. 20:19 – Guest: It’s taken me 80 years to figure out, but I have made plenty of mistakes along the way. A test run should take 2-5 minutes max per package. 20:53 – Panel: What if you have a really messy legacy system? How do you recommend going into that? Do you write tests for the things that you think need to be tested? 21:39 – Guest answers the question and mentions Selenium! 24:27 – Panel: I like that approach. 24:35 – Chuck: When you say integration test what do you mean? 24:44 – Guest: Integration tests aren’t usually talked about. For most people it’s tests that test the database level against the database. For me, integration tests take a set of classes as they are in the application and test them together without the...so they can run in millisecond time. 26:54 – Advertisement – Sentry.io 27:52 – Chuck: How much do the tools matter? 28:01 – Guest: The revolutions matter. Whether you use Jasmine or Mocha or whatever, I don’t think it matters. The tests matter, not the tools. 28:39 – Aimee: Yes and no. I think some tools are outdated. 28:50 – Guest: I got a lot of flak about my blog post where I talk about Cypress versus Selenium. I will never use Jasmine. In the end it’s the... 29:29 – Aimee: I am curious, would you be willing to expand on what the Selenium folks were saying that Puppeteer and others may not provide? 29:54 – Guest: Cypress was built for frontend developers. They don’t care about cross-browser, and they test in Chrome. Most browsers are typically the same. Selenium was built with the QA mindset – end-to-end tests that we need to run cross-browser. The guest continues with this topic. 30:54 – Aimee mentions Cypress. 31:08 – Guest: My guess is that their priority is not there. I kind of agree with them. 31:21 – Aimee: I think they are focusing on mobile more. 31:24 – Guest: I think cross-browser testing is less of an issue now. There is one area that is important: the visual area! It’s important to test visually across these different browsers. 32:32 – Guest: Selenium is a Swiss Army knife – it can do everything. 33:32 – Chuck: I am thinking about different topics to talk about. I haven’t used Puppeteer. What’s that about? 33:49 – Guest: Puppeteer is much more like Selenium. The reason why it’s great is b/c Puppeteer will always be Google Chrome. 35:42 – Chuck: When should you be running your tests? I like to use some unit tests when I am doing my development, but how do you break that down? 36:06 – Guest. 38:30 – Chuck: You run tests against production? 38:45 – Guest: Don’t run tests against production...let me clarify! 39:14 – Chuck. 39:21 – Guest: When I am talking about integration testing in the backend... 40:37 – Chuck asks a question. 40:47 – Guest: I am constantly running between frontend and backend. I didn’t know how to run tests for the frontend. I had to invent a new thing and I “invented” the package jsdom. It’s an implementation of the DOM in Node. 
I found out that I wasn’t the only one and that there were others out there, too. 43:14 – Chuck: Nice! You talked in the prep docs that you urged a new frontend developer to not run the app in the browser for 2 months? 43:25 – Guest: Yeah, I found out that she was running the application...she said she knew how to write tests. I wanted her to see it my way and it probably was a radical train-of-thought, and that was this... 44:40 – Guest: Frontend is so visual. 45:12 – Chuck: What are you working on now? 45:16 – Guest: I am working with Applitools and I was impressed with what they were doing. The guest goes into further detail. 46:08 – Guest: Those screenshots are never the same. 48:36 – Panel: It’s...comparing the output to the static site to the... 48:50 – Guest: Yes, that static site – if you have 30 pages in your app – most of those are the same. We have this trick where we don’t upload it again and again. Uploading the whole static site is usually very quick. The second thing is we don’t wait for the results. We don’t wait for the whole rendering and we continue with the tests. 50:28 – Guest: I am working mostly (right now) in backend. 50:40 – Chuck: Anything else? Picks! 50:57 – Advertisement: Get A Coder Job! END – Advertisement: CacheFly! Links: JavaScript React Elixir Node.js Puppeteer Cypress SeleniumHQ Article – Ideas.Ted.Com Book: Never Split the Difference Applitools Guest’s Blog Article about Cypress vs. Selenium Gil’s Twitter Gil’s Medium Gil’s LinkedIn Sponsors: Kendo UI Sentry CacheFly Picks: Aimee How Showing Vulnerability Helps Build a Stronger Team AJ Never Split the Difference Project - TeleBit Charles Monster Hunter International Metabase Gil Cat Zero The Origin of Consciousness in the Breakdown of the Bicameral Mind
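To make the episode's main technical thread a bit more concrete – keep business logic above the browser and database glue, inject only the layers you will need to mock (such as the database layer), and keep integration tests fast by wiring the real modules against an in-memory fake – here is a minimal Node.js sketch. It is purely illustrative: every module and function name below is hypothetical and none of it is taken from Gil's actual code.

```javascript
// Hypothetical sketch (not Gil's actual code): the business logic lives above
// the database layer, the database layer is the only thing injected (so it is
// the only thing that ever needs mocking), and an integration-style test wires
// the real service against an in-memory fake so it runs in milliseconds.
const assert = require('assert');

// Business logic: knows nothing about Express, the browser, or SQL.
function createOrderService(orderRepository) {
  return {
    async placeOrder(userId, items) {
      if (items.length === 0) {
        throw new Error('an order must contain at least one item');
      }
      const total = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
      return orderRepository.save({ userId, items, total });
    },
  };
}

// In-memory stand-in for the real database layer.
function createInMemoryOrderRepository() {
  const orders = [];
  return {
    async save(order) {
      const saved = { id: orders.length + 1, ...order };
      orders.push(saved);
      return saved;
    },
    all() {
      return orders;
    },
  };
}

// Medium-sized integration test: real service code, fake persistence.
async function testPlaceOrder() {
  const repository = createInMemoryOrderRepository();
  const service = createOrderService(repository);

  const order = await service.placeOrder('user-1', [
    { price: 10, quantity: 2 },
    { price: 5, quantity: 1 },
  ]);

  assert.strictEqual(order.total, 25);
  assert.strictEqual(repository.all().length, 1);
  console.log('placeOrder test passed');
}

testPlaceOrder().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

In production code the same createOrderService would be handed a repository backed by the real database from the application's entry point (the "main function doing requires and passing the pieces in" that the panel describes), while the test above simply swaps in the fake.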
Gil Sever and Applitools are pioneering a new segment in the AI market: Visual AI. Visual AI was originally used as part of their testing and monitoring solution, but Applitools believes it will become an entire segment within the AI space, and when it does, Applitools will be the leader in it. Great vision from the CEO. Right now Applitools is doing a great job with visual testing and monitoring. We check in with Gil and get the latest.
On this episode, Alison Wade and Jessie Shternshus chat with Angie Jones, a formidable automation engineer who is literally changing the face of technology. This year she recorded a commercial for John Frieda about changing the narrative around women of color in tech. Angie holds some 25 patented inventions and has carved a remarkable path for herself working at IBM, LexisNexis, Twitter and most recently as Senior Developer Advocate at Applitools. In this episode, you will hear about the decisions Angie made that led to her showing up in a way that is bold and fearless, for herself and for others. Listen in and find out what makes Angie Jones tick!