Podcasts about unit tests

Software testing method by which individual units of source code are validated

  • 102 PODCASTS
  • 168 EPISODES
  • 38m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Dec 24, 2024 LATEST

POPULARITY

[Popularity chart, 2017-2024]


Best podcasts about unit tests

Latest podcast episodes about unit tests

Engineering Kiosk
#175 Von Lustig bis Traurig: Wenn Open Source Geschichten schreibt

Engineering Kiosk

Play Episode Listen Later Dec 24, 2024 46:42


The transparency of open source writes stories that want to be told. Half of the term "open source" is the word "open." Fine, you don't need a degree for that insight. But open, i.e. transparent, refers not only to the source code itself; it usually covers everything about the project, including bug reports and pull requests that anyone can read. Mix that with worldwide collaboration across different people and cultures, and one thing is guaranteed: creativity, WTF moments, personal fates, and stories that want to be told. This episode tells some of those open source stories. We talk about how to get Douglas Crockford to argue about JavaScript code, when a pull request earns its own cake and why that then leads to a merge, and when and why unit tests fail when they are run in Australia. But it also covers the sad side and personal fates: a prison sentence for the maintainer of a project with 26 million downloads per week; a cancer diagnosis, the community's response, and how that maintainer is securing the project's future for the time when he is no longer around; and how the Maidan revolution and the war in Ukraine affect open source. You can find our current advertising partners at https://engineeringkiosk.dev/partners
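The "unit tests fail when run in Australia" story is a classic timezone bug: a test compares a UTC timestamp against the machine's local calendar date, so it passes in Europe and fails east of UTC. A minimal Python sketch of the pattern (the function and dates are invented for illustration):

```python
from datetime import datetime, timezone, timedelta

UTC = timezone.utc
SYDNEY = timezone(timedelta(hours=11))  # AEDT, as a fixed offset for illustration

def is_today(ts_utc: datetime, now: datetime) -> bool:
    # The bug: compares the UTC calendar date of the timestamp against
    # the *local* calendar date of "now" on whatever machine runs the test.
    return ts_utc.date() == now.date()

ts = datetime(2024, 1, 1, 23, 30, tzinfo=UTC)

# On a UTC build server the assertion holds...
print(is_today(ts, ts))                     # True
# ...but the same instant is already January 2nd in Sydney:
print(is_today(ts, ts.astimezone(SYDNEY)))  # False
```

The fix is to normalize both values to one timezone (or inject a clock) before comparing, so the test result no longer depends on where it executes.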

Walk Boldly With Jesus
Witness Wednesday #140 Encounter Ministries Testimonies

Walk Boldly With Jesus

Play Episode Listen Later Dec 11, 2024 12:44


Today's witnesses are from the Encounter Ministries website. Encounter Ministries holds healing services as well as several conferences throughout the year. These testimonies are all from people who attended either a healing session or their conference. I love that Jesus is still healing today. I don't think everyone knows this or believes this. This is why I think it is so important that we are constantly reminding people of that amazing fact. Nothing is too hard for Jesus to heal. It is also important that we remember it is not just physical healing but emotional and mental ones, too! If you have a loved one suffering right now, it is not too late, and it is not too severe. God is the God of the impossible. He can heal them, just ask.

Diana- In June 2017, I was diagnosed with Relapsing-Remitting Multiple Sclerosis. My symptoms started in December of 2016 when I got my first lesion in my neck. From that day until the healing session, I had not had a single moment without extreme discomfort from itching, burning, shocks, buzzing, twitching, weakness, etc. Since the healing, I have had none of those. I have been able to reduce my medications to the levels they were pre-MS. I am scheduled for MRIs and am looking forward to seeing if the lesions are still there or if they, like my symptoms, have been healed.
I will admit I was highly skeptical going into the session, having been anointed by a priest and several other healers praying over and for me throughout the years. Praise God, I agreed to try. He has worked a miracle.
I am not 100%. You try lying in bed all day, every day, for 6 years, and see how much stamina and strength you have. I had to relearn how to walk without my previous balance issues from MS. I still fall down occasionally, but I get back up with a laugh and a joke.

Joan- During the Healing Session with Father Mathias, he mentioned Autoimmune Disease. He also mentioned eyes. At that precise moment, my eyes were flooded with moisture. Previously, I had been treated for chronic eye dryness from Raynaud's Disease. My eyes were so dry that I instilled drops 6-8 times a day. I was not asking for that, but that is what happened to me. I'm so grateful. God is good and faithful. It was an unforgettable conference!

Karen- Before attending the North West Encounter Conference in 2019, I had a vision while praying of Jesus on his knees, washing my feet, and looking up at me with a look of completeness (meaning total, absolute, or fully carried out; thorough). Jesus then stood up and placed a white heavenly cloak/mantle around my shoulders. I had this vision several times during the weekend conference. While this was going on, during the sessions on Saturday, I was suffering greatly from a migraine. I have suffered migraines since I was a little girl (I am currently 58), sometimes lasting 2 to 5 days and up to 7 days at a time. It had been a miserable experience. However, it was especially challenging at the Conference as I was trying to stay focused and open to the Lord. By the time Saturday night came, I was physically sick and in tremendous pain from my migraine. While being prayed over earlier by the Encounter Team, I saw Jesus declaring over me that I no longer need to carry two crosses. One of the crosses was my own, and the other was that of my daughter, who is in a wheelchair. The vision allowed me to see two crosses lying side by side. Immediately, I felt lighter by recognizing this!
Also, I was taken back to a childhood memory of falling, and I thought, perhaps this is the root of my migraines. Having talked to one of the team members, I was encouraged to consider staying until Saturday night instead of going home because of the pain. I believed that Jesus had something more in store for me. I decided to stay and press through the pain. During a time of praise and worship, Fr. Mathias came through, placed hands on my head, and prayed for more of the Holy Spirit. I knew then I was to receive Grace upon Grace from Jesus. I began to experience chills while also experiencing an inner warmth. Even after I went home that night, I continued to experience the presence of God in my body for some time. And then I noticed it: my migraines disappeared and have not returned since that day in June 2019. I am writing this Testimony of Healing nearly a year to the day later to testify and give all the Glory to God for my personal healing, both physically and for the special love that he has for me and you! Though I do get headaches occasionally, they are less frequent and short-lived as I continue to pray and claim Jesus' healing over my life, and sure enough, they go away in his precious name. Praise God for the Grace of Jesus living in us! Many Blessings and Love to and for the Encounter Ministries. Jeremiah 17:14: Heal me, O Lord, and I shall be healed; save me, and I shall be saved, for you are my praise.

Alexis- I had an incredible encounter with God! I was physically healed, internally healed, and spiritually filled. Praise the Lord! I came to the Summer Intensive not really knowing what to expect. I was invited by some ladies at Church. I am a Pentecostal-to-Catholic convert, and I came from a very charismatic evangelical church. At this conference, I started raising my hands in worship again.
I had done this before, but not since I became Catholic seven years prior. On the third night, the physical healing night, the speaker asked if anyone was in physical, discernible pain. There was a storm that had passed through earlier that day, and my body ached. I was in a car accident in 2005, and I injured the cervical vertebra C5 in my neck from whiplash. I had been living with headaches and pain for years and have been going to the chiropractor more than once a week for the last fifteen years. The second major injury to my spine occurred in the summer of 2019 while I was strawberry picking with my son. I had bulged a disc in my lower back, L5. I had to go to physical therapy for six weeks after that to help strengthen my back. The physical therapist had told me to think, every time I bent down, that I had about one hundred thousand more bends left for the rest of my life and to use them carefully. My middle back was also aching, most likely from picking up my forty-pound son. I have also had lung issues for years and had trouble taking deep breaths. Each of these physical ailments was healed, one at a time. The women praying over me were amazing, and they began to pray fervently. My middle back was healed first. Then, it felt like something was removed or lifted off from my chest, and I could easily take a deep breath. Then, something was released in my lower back. I bent down, expecting the familiar twinge, but it was gone. I could move freely. Lastly, I felt something being removed from my neck, and I felt the release. Everything that I had asked to be healed was healed. I was no longer in physical, discernible pain. The most amazing part for me was that I could sense the presence of my deceased husband, so I knew he was praying for me.
This was confirmed when one of the women who was praying for me shared a vision of the heavens opening up and my husband's hand touching me in all the places where I was healed.

The fourth and last night was the internal healing night. I have suffered a lot of loss in my life. When I was seven years old, my little sister died, and my parents divorced. I became very good at building an internal wall to protect myself. My husband died in 2016, three days before our only son was born. But the most recent loss, which hit me like a ton of bricks, was the loss of one of my students. I am a high school science teacher, and one of my Physics students died in a car crash in March 2019, the day after she got the highest grade on the Unit Test in my class. I was so excited to tell her that all of her hard work had paid off. Then I got the news that she had died that morning from complications from the accident, and it hit me harder than I expected. When the prayer team came around to pray, I sank to the floor and cried. Then I received a vision of a dam blocking the flow of water; it was starting to crack, and it was preventing the source of the water from pouring out abundantly. They began praying for me, and I felt the dam burst out of me. The wall was gone, and the fountain of life and joy was free to pour out of me. I just kept saying AMEN in confirmation! I was really moved by the last speaker of the night, Sarah. Through praying with her, I had an image of myself as a child sitting on Jesus' lap, loved and safe. Jesus spoke to me in this place and said, "Come here often, and I will give you rest." I had found the source of healing, strength, joy, peace, comfort, and so much more. I came to the conference very defeated, empty, and angry as a Catholic. I did not want to be Catholic anymore.
I left the conference healed, refreshed, renewed, and filled with the Holy Spirit!

Thank you all so much for sharing your stories, and thank you to Encounter Ministries for displaying them on your website so that all of our faiths can be built up by them. Thank you also for hearing the call to Teach, Equip, and Activate the world with the gifts of the Holy Spirit. You are changing the world.

www.findingtruenorthcoaching.com
CLICK HERE TO DONATE
CLICK HERE to sign up for Mentoring
CLICK HERE to sign up for Daily "Word from the Lord" emails
CLICK HERE to sign up for my newsletter & receive a free audio training about inviting Jesus into your daily life
CLICK HERE to buy my book Total Trust in God's Safe Embrace

Devs on Tape
KI in der Softwareentwicklung: Innovationen und Herausforderungen mit Fabian Neureiter

Devs on Tape

Play Episode Listen Later Dec 4, 2024 62:12 Transcription Available


Podcast summary: Fabian Neureiter, our valued guest at DOAG 2024 in Nuremberg, gives deep insights into his work as an Apex developer at High End. As an expert in JavaScript and testing, and the developer of the low-code testing tool LCT, he shows how innovation and creativity in artificial intelligence can change software development. His talk on Oracle Document Understanding and its integration into Apex promises exciting perspectives for the future. With a wink, he also tells us about working with his boss Kai, which makes for some humorous moments. The latest developments around Visual Studio Code extensions, SQL Developer, and Data Modeler are another focus of this episode. We discuss how tools like Dependency Watcher can make developers' everyday work much easier by simplifying the updating of Node.js projects. The constant challenge of keeping up with technological progress is something that concerns many of us and sparks lively discussion. We also take a critical look at the use of AI-powered tools such as GitHub Copilot, weighing the efficiency gains from automating tedious tasks against the need to carefully review the generated code. Whether it is writing unit tests or generating test data, AI opens up new possibilities but also demands a rethink of how we work. Finally, we reflect on the balance between efficiency and convenience that comes with using AI, and look ahead with excitement to the technological developments of the future.

Smart Software with SmartLogic
Creating the Igniter Code Generation Framework with Zach Daniel

Smart Software with SmartLogic

Play Episode Listen Later Oct 17, 2024 52:55


To kick off Elixir Wizards Season 13, The Creator's Lab, we're joined by Zach Daniel, the creator of Igniter and the Ash framework. Zach joins hosts Owen Bickford and Charles Suggs to discuss the mechanics and aspirations of his latest brainchild, Igniter: a code generation and project patching framework designed to revolutionize the Elixir development experience. Igniter isn't just about generating code; it's about generating smarter code. By leveraging tools like Sourceror and Rewrite, Igniter allows developers to modify source code and batch updates by directly interacting with Elixir's AST instead of regex patching. This approach streamlines new project setup and package installations and enhances overall workflow. They also discuss the strategic implications of Igniter for the broader Elixir community. Zach hopes Igniter will foster a more interconnected and efficient ecosystem that attracts new developers to Elixir and caters to the evolving needs of seasoned Elixir engineers.

Topics discussed in this episode:
  • Advanced package installation and code generation improve the developer experience
  • Scripting and staging techniques streamline project updates
  • Innovative methods for smoother installation processes in Elixir packages
  • High-level tools apply direct patches to source code
  • Progressive feature additions simplify the mix phx.new experience
  • Chaining installers and composing tasks for more efficient project setup
  • Continuous improvement in developer experiences to boost Elixir adoption
  • Encouraging listeners to collaborate by sharing code generation patterns
  • Introduction of a new mix task aimed at removing the "unless" keyword in preparation for Elixir 1.18

You can learn more in the upcoming book "Building Web Applications with Ash Framework" by Zach and Rebecca.

Links mentioned:
https://smartlogic.io/
https://alembic.com.au/blog/igniter-rethinking-code-generation-with-project-patching
https://hexdocs.pm/igniter/readme.html
https://github.com/ash-project/igniter
https://www.zachdaniel.dev/p/serialization-is-the-secret
https://www.zachdaniel.dev/p/welcome-to-my-substack
https://ash-hq.org/
https://hexdocs.pm/sourceror/readme.html
https://smartlogic.io/podcast/elixir-wizards/s10-e09-hugo-lucas-future-of-elixir-community/
https://github.com/hrzndhrn/rewrite
https://github.com/zachdaniel
https://github.com/liveshowy/webauthn_components
https://hexdocs.pm/elixir/Regex.html
https://github.com/msaraiva/vscode-surface
https://github.com/swoosh/swoosh
https://github.com/erlef/oidcc
https://alembic.com.au/
https://www.zachdaniel.dev/

Special Guest: Zach Daniel.
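Igniter's central idea, patching source by manipulating the syntax tree rather than running regexes over text, can be illustrated with Python's standard-library `ast` module. This is a loose analogy, not Igniter's actual Elixir API:

```python
import ast

source = "def add(a, b):\n    return a + b\n"

# Parse to a syntax tree, rename the function node, and re-emit source.
# Unlike a regex find/replace, this cannot accidentally match the word
# "add" inside a string literal, a comment, or another identifier.
tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and node.name == "add":
        node.name = "sum_two"

patched = ast.unparse(tree)  # requires Python 3.9+
print(patched)
```

The same structure-aware approach is what lets a tool safely batch many edits across a project: each patch targets a node in the tree, not a fragile textual pattern.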

.NET in pillole
260 - Moq, la più utilizzata libreria di mock...per i nostri unit-test

.NET in pillole

Play Episode Listen Later Sep 30, 2024 11:32


To test code properly, we need to isolate it from its many dependencies, and that is exactly what Moq solves, giving us an easy way to create mocks of the interfaces and classes our code depends on.
https://github.com/devlooped/moq
https://learn.microsoft.com/it-it/shows/visual-studio-toolbox/unit-testing-moq-framework
#dotnetinpillole #unittest #moq #dotnet
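The role Moq plays for .NET interfaces, standing in for a dependency so the unit under test runs in isolation, is the same one `unittest.mock` plays in Python. A rough sketch with invented names:

```python
from unittest.mock import Mock

# A service whose logic we want to test without a real database.
class OrderService:
    def __init__(self, repo):
        self.repo = repo  # any object with an items(order_id) method

    def total(self, order_id):
        return sum(item["price"] for item in self.repo.items(order_id))

# The test replaces the repository with a mock, so no database is needed
# and we can also verify how the dependency was called.
repo = Mock()
repo.items.return_value = [{"price": 10}, {"price": 5}]

assert OrderService(repo).total(42) == 15
repo.items.assert_called_once_with(42)
```

The mock both supplies canned data (the setup step) and records calls (the verification step), which are the two halves of what a mocking library buys you.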

.NET in pillole
258 - Unit test di componenti Blazor con bUnit

.NET in pillole

Play Episode Listen Later Sep 16, 2024 13:17


It can be handy to unit-test the output (and the behavior) of Blazor components too, and that is exactly what bUnit is for. It is especially useful for anyone maintaining component libraries reused across N projects.
https://bunit.dev/
https://blazordev.it/articoli/testing-dei-componenti-blazor-con-bunit/
https://learn.microsoft.com/en-us/aspnet/core/blazor/test?view=aspnetcore-8.0
#bunit #unittest #blazor #dotnetinpillole
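bUnit's core idea, render a component and assert on its markup, can be shown in miniature with a plain function standing in for a Blazor component (the component and its CSS classes are invented for illustration):

```python
# A toy "component" that renders HTML from its inputs. Asserting on the
# returned markup is the same style of test bUnit enables for Blazor.
def alert_badge(count: int) -> str:
    css = "badge-danger" if count > 0 else "badge-muted"
    return f'<span class="{css}">{count}</span>'

assert alert_badge(3) == '<span class="badge-danger">3</span>'
assert "badge-muted" in alert_badge(0)
```

Because the rendering is a pure function of its inputs, the test covers both the output markup and the branch logic without spinning up a browser.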

.NET in pillole
Introduzione allo Unit Testing

.NET in pillole

Play Episode Listen Later Sep 9, 2024 17:16


Here is a much-discussed topic: many people know about tests, but few actually use them. With today's episode I want to kick off a journey into the world of testing, starting with an introduction to unit tests.
https://amzn.to/3TdJFJI
https://learn.microsoft.com/en-us/visualstudio/test/unit-test-basics?view=vs-2022
https://learn.microsoft.com/en-us/visualstudio/test/create-unit-tests-menu?view=vs-2022
#visualstudio #unittest #dotnetinpillole #podcast #xunit #nunit #mstest
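For listeners new to the topic: a unit test is a small, automated check of one piece of code in isolation, usually written in the arrange-act-assert shape. A minimal example (Python here; xUnit, NUnit, and MSTest follow the same pattern in .NET, and the function is invented for illustration):

```python
# The unit under test.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The unit test: arrange inputs, act, assert on the result.
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0   # typical case
    assert apply_discount(19.99, 0) == 19.99   # edge case: no discount

test_apply_discount()
```

A test runner (pytest, or Visual Studio's Test Explorer in the .NET world) discovers functions like `test_apply_discount` and reports each failure, which is what turns a pile of assertions into a safety net you can run on every change.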

PodRocket - A web development podcast from LogRocket
Production horror stories with Dan Neciu

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Aug 22, 2024 27:17


Dan Neciu, technical co-founder and tech lead of CareerOS, shares intriguing production horror stories, discusses the importance of rigorous testing, and provides valuable insights into preventing and managing software bugs in both backend and frontend development.

Links
https://neciudan.dev
https://www.youtube.com/@NeciuDan
https://www.linkedin.com/in/neciudan
https://x.com/neciudan

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surface the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today (https://logrocket.com/signup/?pdr).

Special Guest: Dan Neciu.

The Bike Shed
430: Test Suite Pain & Anti-Patterns

The Bike Shed

Play Episode Listen Later Jun 25, 2024 40:57


Stephanie and Joël discuss the recent announcement of the call for proposals for RubyConf in November. Joël is working on his proposals and encouraging his colleagues at thoughtbot to participate, while Stephanie is excited about the conference being held in her hometown of Chicago! The conversation shifts to Stephanie's recent work, including completing a significant client project and her upcoming two-week refactoring assignment. She shares her enthusiasm for refactoring code to improve its structure and stability, even when it's not her own. Joël and Stephanie also discuss the everyday challenges of maintaining a test suite, such as slowness, flakiness, and excessive database requests. They discuss strategies to balance the test pyramid and adequately test critical paths. Finally, Joël emphasizes the importance of separating side effects from business logic to enhance testability and reduce complexity, and Stephanie highlights the need to address testing pain points and ensure tests add real value to the codebase. 
RubyConf CFP (https://sessionize.com/rubyconf-2024/)
RubyConf CFP coaching (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform?pli=1)
Testing pyramid (https://thoughtbot.com/blog/rails-test-types-and-the-testing-pyramid)
Outside-in testing (https://thoughtbot.com/blog/testing-from-the-outsidein)
Writing fewer system specs with request specs (https://thoughtbot.com/blog/faster-tests-with-capybara-and-request-specs)
Unnecessary factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot)
Your Test Suite is Making Too Many Database Calls (https://www.youtube.com/watch?v=LOlG4kqfwcg)
Your flaky tests might be time dependent (https://thoughtbot.com/blog/your-flaky-tests-might-be-time-dependent)
The Secret Ingredient: How To Understand and Resolve Just About Any Flaky Test (https://www.youtube.com/watch?v=De3-v54jrQo)
Separating side effects to improve tests (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects)
Functional core, imperative shell (https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)
Thoughtbot testing articles (https://thoughtbot.com/blog/tags/testing)

Transcript:

STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.

JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.

STEPHANIE: So, Joël, what's new in your world?

JOËL: Something that's new in my world is that RubyConf just announced their call for proposals for RubyConf in November. They're open for...we're currently recording in June, and it's open through early July, and they're asking people everywhere to submit talk ideas. I have a few of my own that I'm working with. And then, I'm also trying to mobilize a lot of other colleagues at thoughtbot to get excited to submit.
STEPHANIE: Yes, I am personally very excited about this year's RubyConf in November because it's in Chicago, where I live, so I have very little of an excuse not to go [laughs]. I feel like so much of my conference experience is traveling to just kind of, like, other cities in the U.S. that I want to spend some time in and, you know, seeing all of my friends from...my long-distance friends. And it definitely does feel like just a bit of an immersive week, right? And so, I wonder how weird it will feel to be going to this conference and then going home at the end of the night. Yeah, that's just something that I'm a bit curious about. So, yeah, I mean, I am very excited. I hope everyone comes to Chicago. It's a great city.

JOËL: I think the pitch that I'm hearing is submit a proposal to the RubyConf CFP to get a chance to get a free ticket to go to RubyConf, where you get to meet Bike Shed co-host Stephanie Minn.

STEPHANIE: Yes. Ruby Central should hire me to market this conference [laughter] and that being the main value add of going [laughs], obviously. Jokes aside, I'm excited for you to be doing this initiative again because it was so successful for RailsConf kind of internally at thoughtbot. I think a lot of people submitted proposals for the first time with some of the programming you put on. Are you thinking about doing things any differently from last time, or any new thoughts about this conference cycle?

JOËL: I think I'm iterating on what we did last time but trying to keep more or less the same formula. Among other things, people don't always have ideas immediately of what they want to speak about. And so, I have a brainstorming session where we're just going to get together and brainstorm a bunch of topics that are free for anyone to take. And then, either someone can grab one of those topics and pitch a talk on it, or it can be, like, inspiration where they see that it jogs their mind, and they have an idea that then they go off and write a proposal.
And so, that allows, I think, a lot of colleagues as well, who are maybe not interested in speaking but might have a lot of great ideas, to participate and sort of really get a lot of that energy going. And then, from there, people who are excited to speak about something can go on to maybe draft a proposal. And then, I've got a couple of other events where we support people in drafting a proposal and reviewing and submitting, things like that.

STEPHANIE: Yes, I really love how you're just involving people with, you know, just different skills and interests to be able to support each other, even if, you know, there's someone on our team who's, like, not interested in speaking at all, but they're, like, an ideas person, right? And they would love to see their idea come to life in a talk from someone else. Like, I think that's really cool, and I certainly appreciate it as a not ideas person [laughs].

JOËL: Also, I want to shout out that Ruby Central is doing CFP coaching sessions on June 24th, June 25th, and June 26th, and those are open to anyone. You can sign up. We'll put a link to the signup form in the show notes. If you've never submitted something before and you'd like some tips on what makes for a good CFP, how can you up your chances of getting accepted, or maybe you've submitted before, you just want to get better at it; I recommend joining one of those slots. So, Stephanie, what's new in your world?

STEPHANIE: So, I just successfully delivered a big project on my client work last week. So, I'm kind of riding that wave and getting into the next bit of work that I have been assigned for this team, and I'm really excited to do this. But I also, I don't know, I've been just, like, thinking about it quite a bit.
Basically, I'm getting to spend two dedicated weeks to just refactoring [laughs] some really, I guess, complicated code that has led to some bugs recently and just needing some love, especially because there's some whiffs of potentially, like, really investing in this area of the product, and people wanting to make sure that the foundation does feel very stable to build on top of for extending and changing that code. And I think I, like, surprised myself by how excited I was to do this because it's not even code I wrote. You know, sometimes when you are the one who wrote code, you're like, oh, like, I would love time to just go back and clean up all these things that I kind of missed the first time around or just couldn't attend to for whatever reason. But yeah, I think I was just a little bit in the peripheries of that code, and I was like, oh, like, just seeing some weird stuff. And now to kind of have the time to be like, oh, this is all I'm going to be doing for two weeks, to, like, really dive into it and get my hands dirty [laughs], I'm very excited.

JOËL: I think that refactoring is a thing that can be really fun. And also, when you have a larger chunk of time, like two days, it's easy to sort of get lost in sort of grand visions or projects. How do you kind of balance the, I want to do a lot of refactoring; I want to take on some bigger things while maybe trying to keep some focus or have some prioritization?

STEPHANIE: Yeah, that's a great question. I was actually the one who said, like, "I want two weeks on this." And it also helped that, like, there was already some thoughts about, like, where they wanted to go with this area of the codebase and maybe what future features they were thinking about. And there are also a few bugs that I am fixing kind of related to this domain. So, I think that is actually what I started with.
And that was really helpful in just kind of orienting myself in, like, the higher impact areas and the places that the pain is felt and exploring there first to, like, get a sense of what is going on here. Because I think that information gathering is really important to be able to kind of start changing the code towards what it wants to be and what other devs want it to be. I actually also started a thread in Slack for my team. I was, like, asking for input on what's the most confusing or, like, hard to reason about files or areas in this particular domain or feature set and got a lot of really good engagement. I was pleasantly surprised [laughs], you know, because sometimes you, like, ask for feedback and just crickets. But I think, for me, it was very affirming that I was, like, exploring something that a lot of people are like, oh, we would love for someone to, you know, have just time to get into this. And they all were really excited for me, too. So, that was pretty cool.

JOËL: Interesting. So, it sounds like you sort of budgeted some refactoring time and then, from there, broke it down into a series of a couple of debugging projects and then a couple of, like, more bounded refactoring projects, where, like, specifically, I want to restructure the way this object works or something like that.

STEPHANIE: Yeah. I think there was that feeling of wanting to clean up this area of the codebase, but you kind of caught on to that bit of, you know, it can go so many different ways. And, like, how do you balance your grand visions [laughs] of things with, I guess, a little bit of pragmatism? So, it was very much like, here's all these bugs that are causing our customers problems that are kind of, like, hard for the devs to troubleshoot. You know, that kind of prompts the question, like, why?
And so, if there can be, you know, the fixing of the bugs, and then the learning of, like, how that part of the system works, and then, hopefully, some improvements along the way, yeah, that just felt like a dream [laughs] for me. And two weeks felt about the right amount of time. I don't know if anyone kind of hears that and feels like it's too long or too little. I would be really curious. But I feel like it is complex enough that, like, context switching would, I think, make this work harder, and you kind of do have to just sit with it for a little bit to get your bearings.

JOËL: A scenario that we encounter on a pretty regular basis is a customer coming to us and telling us that they're feeling a lot of test pain and asking what are the ways that we can help them to make things better, and that test pain can come under a lot of forms. It might be a test suite that's really slow and that's hurting the team in terms of their velocity. It might be a test suite that is really flaky. It might be one that is really difficult to add to, or maybe one that has very low coverage, or one that is just really brittle. Anytime you try to make a change to it, a bunch of things break, and it takes forever to ship anything. So, there's a lot of different aspects of challenging test suites that clients come to us with. I'm curious, Stephanie, what are some of the ones that you've encountered most frequently?

STEPHANIE: I definitely think that a slow test suite and a flaky test suite end up going hand in hand a lot, or just a brittle one, right? That is slowing down development and, like you said, causing a lot of pain. I think even if that's not something that a client is coming to us directly about, it maybe gets, like, surfaced a little bit, you know, sometime into the engagement as something that I like to keep an eye on as a consultant.
And I actually think, yeah, that's one of kind of the coolest things, I think, about our consulting work is just getting to see so many different test suites [laughs]. I don't know. I'm a testing nerd, so I love that kind of stuff. And then, I think you were also kind of touching on this idea of, like, maintaining a test suite and, yeah, making testing just a better experience. I have a theory [laughs], and I'd be curious to get your thoughts on it. But one thing that I really struggle with in the industry is when people talk about writing tests as if it's, like, the morally superior thing to do. And I struggle with this because I don't think that it is a very good strategy for helping people feel better or more confident and, like, upskill at writing tests. I think it kind of shames people a little bit who maybe either just haven't gotten that experience or, you know, just like, yeah, like, for whatever reason, are still learning how to do this thing. And then, I think that mindset leads to bad tests [laughs] or tests that aren't really serving the purpose that you hope they would because people are doing it more out of obligation rather than because they truly, like, feel like it adds something to their work. Okay, I kind of just dropped that on you [laughs]. Do you have any reactions?

JOËL: Yeah, I guess the idea that you're just checking a box with your test rather than writing code that adds value to the codebase. They're two very different perspectives that, in the end, will generate more lines of code if you're just doing a checkbox but may or may not add a whole lot of value. So, maybe before even looking at actual, like, test practices, it's worth stepping back and asking more of a mindset question: Why does your team test? What is the value that your team feels they get out of testing?

STEPHANIE: Yeah. Yeah. I like that because I was about to say they go hand in hand.
But I do think that maybe there is some, you know, question asking [laughs] to be done because I do think people like to kind of talk about the testing practices before they've really considered that. And I am, like, pretty certain from just kind of, at least what I've seen, and what I've heard, and what I've experienced on embedding into client teams, that if your team can't answer that question of, like, "What value does testing bring?" then they probably aren't following good testing practices [laughs]. Because I do think you kind of need to approach it from a perspective of like, okay, like, I want to do this because it helps me, and it helps my team, rather than, like you said, getting the check mark. JOËL: So, once we've sort of established maybe a bit of a mindset or we've had a conversation with the team around what value they think they're getting out of tests, or maybe even you might need to sell the team a little bit on like, "Hey, here's, like, all these different ways that testing can bring value into your life, make your life as developers easier," but once you've done that sort of pre-work and you can start looking at what's actually the problem with a test suite, a common complaint from developers is that the test suite is too slow. How do you like to approach a slow test suite? STEPHANIE: That's a good question. I actually...I think there's a lot of ways to answer that. But to kind of stay on the theme of stepping back a little bit, I wonder if assessing how well your test suite aligns with the testing pyramid would be a good place to start; at least, that could be where I start if I'm coming into a client team for the first time, right, and being asked to start assessing or just poking around. 
Because I think the slowness a lot of the time comes from a lot of quote, unquote, "integration tests" or, like, unit tests masquerading as integration tests, where you end up having, like, a lot of duplication of things that are being tested in ways that are integrating with some slow parts of the system like the database. And yeah, I think even before getting into some of the more discrete reasons why you might be writing slow tests, just looking at the structure of your test suite and what kinds of things you're testing, and, again, even going back to your team and asking, like, "What kinds of things do you test?" Or like, "Do you try to test or wish to be testing more of, less of?" Like looking at the structure, I have found to be a good place to start. JOËL: And for those who are not familiar, you used the term testing pyramid. This is a concept which says that you probably want to have a lot of small, fast unit tests, a medium amount of integration tests that test a few different components together, and then a few end-to-end tests. Because as you go up that pyramid, tests become more expensive. They take a lot longer to run, whereas the little unit tests are super cheap. You can create thousands of them, and they will barely impact your run time. Adding a dozen end-to-end tests is going to be noticeable. So, you want to balance sort of the coverage that you get from end to end with the sort of cheapness and ubiquity of the little unit tests, and then split the difference for tests that are in between. STEPHANIE: And I think that is challenging, even, you know, you're talking about how you want the peak of your pyramid to be end-to-end tests. So, you don't want a lot of them, but you do want some of them to really ensure that things are totally plumbed and working correctly. But that does require, I think, really looking at your application and kind of identifying what features are the most critical to it. 
And I think that doesn't get paid enough attention, at least from a lot of my client experiences. Like, sometimes teams just end up with a lot of feature bloat and can't say like, you know, they say, "Everything's important [chuckles]," but everything can't be equally important, you know? JOËL: Right. I often like to develop using a sort of outside-in approach, where you start by writing an end-to-end test that describes the behavior that your new feature ticket is asking for and use that to drive the work that I'm doing. And that might lead to some lower-level unit tests as I'm building out different components, but the sort of high-level behavior that we're adding is driven by adding an end-to-end spec. Do you feel that having one new end-to-end spec for every new feature ticket that you work on is a reasonable thing to do, or do you kind of pick and choose? Do you write some, but maybe start, like, coalescing or culling them, or something like that? How do you manage that idea that maybe you would or would not want one end-to-end spec for each feature ticket? STEPHANIE: Yeah, it's a good question. Actually, as you were saying that, I was about to ask you, do you delete some afterwards [laughs]? Because I think that might be what I do sometimes, especially if I'm testing, you know, edge cases or writing, like, the end-to-end test for error states. Sometimes, not all of them make it into my, like, final, you know, commit. But they, you know, had their value, right? And at least it prompted me to make sure I thought about them and make sure that they were good error states, right? Like things that had visible UI to the user about what was going on in case of an error. So, I would say I will go back and kind of coalesce some of them, but they at least give me a place to start. Does that match your experience? 
JOËL: Yeah, I tend to mostly write end-to-end tests for happy paths and then write kind of mid-level things to cover some of my edge cases, maybe a couple of end-to-end tests for particularly critical paths. But, at some point, there's just too many paths through the app that you can't have end-to-end coverage for every single branch on every single path that can happen. STEPHANIE: Yeah, I like that because if you find yourself having a lot of different conditions that you need to test in an end-to-end situation, maybe there's room for that to, like, be better encapsulated in that, like, more, like, middle layer or, I don't know, even starting to ask questions about, like, does this make sense with the product? Like, having all of these different things going on, does that line up with kind of the vision of what this feature is trying to be or should be? Because I do think the complexity can start at that high of a level. JOËL: How do you feel about the idea that adding more end-to-end tests, at some point, has diminishing returns? STEPHANIE: I'm not quite sure I'm following [laughs]. JOËL: So, let's say you have an end-to-end test for the happy path for every core feature of the app. And you decide, you know what, I want to add maybe some, like, side features in, or maybe I want to have more error states. And you start, like, filling in more end-to-end tests for those. Is it fair to say that adding some of those is a bit of a diminishing return? Like, you're not getting as much value as you would from the original specs. And maybe as you keep finding more and more rare edge cases, you get less and less value for your test. STEPHANIE: Oh, yeah, I see. And there's more of a cost, too, right? The cost of the time to run, maintain, whatever. JOËL: Right. Let's say they're roughly all equally expensive in terms of cost to run. 
But as you stray further and further off of that happy path, you're getting less and less value from that integration test or that end-to-end test. STEPHANIE: I'm actually a little conflicted about this because that sounds right in theory, but then in practice, I feel like I've seen error states not get enough love [laughs] that it's...I don't even want to say, like, you make any kind of claim [laughs] about it. But, you know, if you're going to start somewhere, if you have, like, a limited amount of time and you're like, okay, I'm only going to write a handful of end-to-end tests, yeah, like, write tests for your happy paths [laughs]. JOËL: I guess it's probably fair to say that error states just don't get as much love as they should throughout the entire testing stack: at the unit level, at the integration level, all the way up to end to end. STEPHANIE: I'm curious if you were trying to get at some kind of conclusion, though, with the idea of diminishing returns. JOËL: I guess I'm wondering if, from there, we can talk about maybe a breakdown of a particular testing pyramid for a particular test suite is being top heavy, and whether there's value in maybe pushing some of these tests, some of these edge cases, some of these maybe less important features down from that, like, top end-to-end layer into maybe more of an integration layer. So, in a Rails context, that might be moving system specs down to something like a request spec. STEPHANIE: Yeah, I think that is what I tend to do. I'm trying to think of how I get there, and I'm not quite sure that I can explain it quite yet. Yeah, I don't know. Do you think you can help me out here? Like, how do you know it's time to start writing more tests for your unhappy paths lower on the pyramid? JOËL: Ideally, I think a lot of your code should be unit-tested. And when you are unit testing it, those pieces all need coverage of the happy and unhappy paths. 
I think the way it may often happen naturally is if you're pushing logic out of your controllers because it's a little bit challenging sometimes to test Rails controllers. And so, if you're moving things into domain objects, even service objects, depending on how you implement them, just doing that and then making sure you unit test them can give you a lot more coverage of all the different edge cases that can happen. Where things sometimes fall apart is getting out of that business layer into the web layer and saying, "Hey, if something raises an error or if the save fails or something like that, does the user get a good experience, or do we just crash and give them a 500 page?" STEPHANIE: Yeah, that matches with a lot of what I've seen, where if you then spend too much time in that business layer and only handling errors there, you don't really think too much about how it bubbles up. And, you know, then you are digging through, like, your error monitoring [laughs] service, trying to find out what happened so that you can tell, you know, your customer support team [laughs] to help them resolve, like, a bug report they got. But I actually think...and you were talking about outside in, but, in some ways, in my experience, I also get feedback from the bottom up sometimes that then ends up helping me adjust some of those integration or end-to-end tests about kind of what errors are possible, like, down in the depths of the code [laughs], and then finding ways to, you know, abstract that or, like, kind of be like, "Oh, like, here are all these possible, like, exceptions that might be raised." Like, what HTTP status code do I want to be returned to capture all of these things? And what do I want to say to the user? So, yeah, I'm [laughs] kind of a little lost myself, but this idea that going both, you know, outside in and then maybe even going back up a little bit has served me well before. 
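The kind of extraction Joël describes here can be sketched in plain Ruby. This is a hypothetical example (the class name and pricing rules are invented, not from the episode): once the branching logic lives in a plain object rather than a controller, both the happy and unhappy paths become cheap, in-memory unit tests.

```ruby
# Hypothetical domain object extracted out of a controller action.
# Because it is plain Ruby, both paths can be exercised in memory,
# with no database, factories, or HTTP layer involved.
class DiscountCalculator
  def initialize(subtotal:, coupon:)
    @subtotal = subtotal
    @coupon = coupon
  end

  # Returns the discounted total, or raises on an expired coupon so
  # the web layer can decide how to present the error to the user.
  def call
    raise ArgumentError, "expired coupon" if @coupon[:expired]

    @subtotal - (@subtotal * @coupon[:percent] / 100.0)
  end
end

DiscountCalculator.new(subtotal: 100, coupon: { percent: 10, expired: false }).call
# => 90.0
```

With the logic in an object like this, the web layer's tests only need to cover what the controller does when `call` raises, rather than re-testing every pricing rule through the browser.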
JOËL: I think there can be a lot of value in sort of dropping down a level in the pyramid, and maybe instead of doing sort of end-to-end tests where you, like, trigger a scenario where something fails, you can just write a request spec against the controller and say, "Hey, if I go to this controller and something raises an error, expect that you get redirected to this other location." And that's really cheap to run compared to an end-to-end test. And so, I think that, for me, is often the right compromise is handling error states at sort of the next lowest level and also in slightly more atomic pieces. So, more like, if you hit this endpoint and things go wrong, here's how things happen. And I use endpoint not so much in an API sense, although it could be, but just your, you know, maybe you've got a flow that's multiple steps where, you know, you can do a bunch of things. But I might have a test just for one controller action to say, "Hey, if things go wrong, it redirects you here, or it shows you this error page." Whereas the end-to-end test might say, "Oh, you're going to go through the entire flow that hits multiple different controllers, and the happy path is this nice chain." But each of the exit points off at where things fail would be covered by a more scoped request spec on that controller. STEPHANIE: Yeah. Yeah. That makes sense. I like that. JOËL: So, that's kind of how I've attempted to balance my pyramid in a way that balances complexity and time with coverage. You mentioned that another area that test suites get slow is making too many requests to the database. There's a lot of ways that that happens. Oftentimes, I think a classic is using a factory where you really don't need to, persisting data to the database when all you needed was some object in memory. So, there are different strategies for avoiding that. It's also easy to be creating too much data. 
So, maybe you do need to persist some things in the database, but you're persisting a hundred objects into memory or into the database when you really meant to persist two, so that's an easy accident. A couple of years ago, I gave a talk at RailsConf titled "Your Test Suite is Making Too Many Database Requests" that went over a bunch of different ways that you can be doing a lot of expensive database requests you didn't plan on making and how that slows down your test suite. So, that is also another hot spot that I like to look at when dealing with a slow test suite. STEPHANIE: Yeah, I mentioned earlier the idea of unit tests really masquerading as integration tests [laughs]. And I think that happens especially if you're starting with a class that may already be a little bit too big than it should be or have more responsibilities than it should be. And then, you are, like, either just, like, starting with using the create build, like, strategy with factories, or you find yourself, like, not being able to fully run the code path you're trying to test without having stuff persisted. Those are all, I think, like, test smells that, you know, are signaling a little bit of a testing anti-pattern that, yeah, like, is there a way to write, like, true unit tests for this stuff where you are only using objects in memory? And does that require breaking out some responsibilities? That is a lot of what I am kind of going through right now, actually, with my little refactoring project [laughs] is backfilling some tests, finding that I have to create a lot of records. And you know what? Like, the first step will probably be to write those tests and commit them, and just have them live there for a little while while I figure out, you know, the right places to start breaking things up, and that's okay. 
But yeah, I did want to, like, just mention that if you are having to create a lot of records and then also noticing, like, your test is running kind of slow [laughs], that could be a good indicator to just give a good, hard look at what kind of style of test you think you're writing [laughs]. JOËL: Yeah, your tests speak to you, and when you're feeling pain, oftentimes, it can be a sign that you should consider refactoring your implementation. And I think that's doubly true if you're writing tests after the fact rather than test driving. Because sometimes you sort of...you came up with an implementation that you thought would be good, and then you're writing tests for it, and it's really painful. And that might be telling you something about the underlying implementation that you have that maybe it's...you thought it's well scoped, but maybe it actually has more responsibilities than you initially realized, or maybe it's just really tightly coupled in a way that you didn't realize. And so, learning to listen to your tests and not just sort of accepting the world for being the way it is, but being like, "No, I can make it better." STEPHANIE: Yeah, I've been really curious why people have a hard time, like, recognizing that pain sometimes, or maybe believing that this is the way it is and that there's not a whole lot that you can do about it. But it's not true, like, testing really does not have to be painful. And I feel like, again, this is one of those things that's like, it's hard to believe until you really experience it, at least, that was the case for me. But if you're having a hard time with tests, it's not because you're not smart enough. Like, that, I think, is a thing that I really want to debunk right now [laughs] for anyone who has ever had that thought cross their mind. Yeah, things are just complicated and complex somehow, or software entropy happens. That's, like, not how it should be, and we don't have to accept that [laughs]. 
So, I really like what you said about, oh, you can change it. And, you know, that is a bit of a callback to the whole mindset of testing that we mentioned earlier at the beginning. JOËL: Speaking of test suites, one thing we have not covered yet is parallelizing them. That could probably be its own Bike Shed episode entirely, on parallelizing a test suite. We've done entire engagements where our job was to come in and help parallelize a test suite, make it faster. And there's a lot of, like, pros and cons. So, I think maybe we can save that for a different episode. And, instead, I'd like to quickly jump in a little bit to some other common pain points of test suites, and I would say probably top of that list is test flakiness. How do you tend to approach flakiness in a client project? STEPHANIE: I am, like, laughing to myself a little bit because I know that I was dealing with test flakiness on my last client engagement, and that was, like, such a huge part of my day-to-day is, like, hitting that retry button. And now that I am on a project with, like, relatively low flakiness, I just haven't thought about it at all [laughs], which is such a privilege, I think [laughs]. But one of the first things to do is just start, like, capturing metrics around it. If you, you know, are hearing about flakiness or seeing that, like, start to plague your test suite or just, you know, cropping up in different ways, I have found it really useful to start, like, I don't know, just, like, maybe putting some of that information in a dashboard and seeing how, just to, like, even make sure that you are making improvements, that things are changing, and seeing if there's any, like, patterns around what's causing the flakiness because there are so many different causes of it. And I think it is pretty important to figure out, like, what kind of code you're writing or just trying to wrangle. That's, you know, maybe more likely to crop up as flakiness for your particular domain or application. 
Yeah, I'm going to stop there and see, like, because I know you have a lot of thoughts about flakiness [laughs]. JOËL: I mean, you mentioned that there's a lot of different causes for flakiness. And I think, in my experience, they often sort of group into, let's say, like, three different buckets. Anytime you're testing code that's doing things that are non-deterministic, that's easy for tests to be flaky. And so, you might think, oh, well, you know, you have something that makes a call to random, and then you're going to assert on a particular outcome of that. Well, clearly, that's going to not be the same every time, and that might be flaky. But there are, like, more subtle versions of that, so maybe you're relying on the system clock in some way. And, you know, depending on the time you run that test, it might give you a different value than you expect, and that might cause it to fail. And it doesn't have to be you're asserting on, like, oh, specifically a given millisecond. You might be doing math around, like, number of days, and when you get near to, let's say, the daylight savings boundary, all of a sudden, no, you're off by an hour, and your number of days...calculation breaks because relying on the clock is something that is inherently non-deterministic. Non-determinism is a bucket. Leaky tests is another bucket of failures that I see, things where one test might impact another that gets run after the fact, oftentimes by mutating some sort of global state. So, maybe you're both relying on some sort of, like, external file that you're both writing to or maybe a cache that one is writing to that the other one is reading from, something like that. It could even just be writing records into the database in a way that's not wrapped in a transaction, such that there's more data in the database when the next test runs than it expects. 
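The "leaky test" bucket Joël just described can be sketched in a few lines of plain Ruby (the cache and its names are invented for illustration): two checks share a global cache, so the second one's result depends on whether the first ran before it and cleaned up after itself.

```ruby
# Hypothetical shared state: a process-wide memoized config cache.
CACHE = {}

def config
  CACHE[:config] ||= { retries: 3 }
end

# "Test" A mutates the shared cache and never resets it...
config[:retries] = 99

# ..."Test" B now observes the leaked value. Run B in isolation (or
# after clearing the cache) and it would see the default of 3.
config[:retries] # => 99
CACHE.clear
config[:retries] # => 3
```

The usual fixes map directly onto this sketch: reset shared state between examples, or avoid the global entirely by injecting the cache as a dependency.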
And then, finally, if you are doing any form of parallelization, that can improve your test suite speed, but it also potentially leads to race conditions, where if your resources aren't entirely isolated between parallel test runners, maybe you're sharing a database, maybe you're sharing Redis instance or whatever, then you can run into situations where you're both kind of fighting over the same resources or overriding each other's data, or things like that, in a way that can cause tests to fail intermittently. And I think having a framework like that of categorization can then help you think about potential solutions because debugging approaches and then solutions tend to be a little bit different for each of these buckets. STEPHANIE: Yeah, the buckets of different causes of flaky tests you were talking about, I think, also reminded me that, you know, some flakiness is caused by, like, your testing environment and your infrastructure. And other kinds of flakiness are maybe caused more from just the way that you've decided how your code should work, especially that, like, non-deterministic bucket. So, yeah, I don't know, that was just, like, something that I noticed as you were going through the different categories. And yeah, like, certainly, the solutions for approaching each kind are very different. JOËL: I would like to pitch a talk from RubyConf last year called "The Secret Ingredient: How To Resolve And Understand Just About Any Flaky Test" by Alan Ridlehoover. Just really excellent walkthrough of these different buckets and common debugging and solving approaches to each of them. And I think having that framework in mind is just a great way to approach different types of flaky tests. STEPHANIE: Yes, I'll plus one that talk, lots of great pictures of delicious croissants as well. JOËL: Very flaky pastry. STEPHANIE: [laughs] Joël, do you have any last testing anti-pattern guidances for our audience who might be feeling some test pain out there? 
JOËL: A quick list, I'm going to say tight coupling that has then led to having a lot of stubbing in your tests often leads to tests that are very brittle, so tests that maybe don't fail when they should when you've actually broken things, or maybe, alternatively, tests that are constantly failing for the wrong reasons. And so, that is a thing that you can fix by making your code less coupled. Tests that also require stubbing a lot of things because you do a lot of side effects. If you are making a lot of HTTP calls or things like that, that can both make a test more complex because it has to be aware of that. But also, it can make it more non-deterministic, more flaky, and it can just make it harder to change. And so, I have found that separating side effects from sort of business logic is often a great way to make your test suite much easier to work with. I have a blog post on that that I'll link in the show notes. And I think this maybe also approaches the idea of a functional core and an imperative shell, which I believe was an idea pitched by Gary Bernhardt, like, over ten years ago. There's a famous video on that that we'll also link in the show notes. But that architecture for building an app can lead to a much nicer test to write. I guess the general idea being that testing code that does side effects is complicated and painful. Testing code that is more functional tends to be much more pleasant. And so, by not intermingling the two, you tend to get nicer tests that are easier to maintain. STEPHANIE: That's really interesting. I've not heard that guidance before, but now I am intrigued. That reminded me of another thing that I had a conversation with someone about. Because after the RailsConf talk I gave, which was about testing pain, there was some stubbing involved in the examples that I was showing because I just see a lot of that stuff. 
And, you know, this audience member kind of had that question of, like, "How do you know that things are working correctly if you have to stub all this stuff out?" And, you know, sometimes you just have to for the time being [chuckles]. And I wanted to just kind of call back to that idea of having those end-to-end tests testing your critical paths to at least make sure that those things work together in the happy way. Because I have seen, especially with apps that have a lot of service objects, for some reason, those being kind of the highest-level test sometimes. But oftentimes, they end up not being composed well, being quite coupled with other service objects. So, you end up with a lot of stubbing of those in your test for them. And I think that's kind of where you can see things start to break down. JOËL: Yep. And when the RailsConf videos come out, I recommend seeing Stephanie's talk, some great gems in there for building a more maintainable test suite. Stephanie and I and, you know, most of us here at thoughtbot, we're testing nerds. We think about this a lot. We've also written a lot about this. There are a lot of resources in the show notes for this episode. Check them out. Also, just generally, check out the testing tag on the thoughtbot blog. There is a ton of content there that is worth looking into if you want to dig further into this topic. STEPHANIE: Yeah, and if you are wanting some, like, dedicated, customized testing training, thoughtbot offers an RSpec workshop that's tailored to your team. And if you kind of are interested in the things we're sharing, we can definitely bring that to your company as well. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. 
It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Maintainable
Robin Heinze - React Native and the Art of Flexibility

Maintainable

Play Episode Listen Later Jun 18, 2024 40:35


In this episode, Robby welcomes Robin Heinze, Director of Engineering at Infinite Red, to discuss the intricacies of building and maintaining robust software systems. Key topics covered include:

Characteristics of Maintainable Software: Robin shares insights from her team on what makes software maintainable, emphasizing the need for clear documentation, robust setup scripts, and ongoing code refinement.
Technical Debt: They delve into managing technical debt, particularly in a consultancy setting, and how to balance client expectations with software quality.
React Native: Robin explains the advantages of using React Native for cross-platform development, highlighting its efficiency and accessibility to a broader range of developers.
Consultancy Challenges: The conversation also covers the unique aspects of working in a consultancy, including how to embed standards while respecting client processes.

Major Takeaways:

Effective communication and a proactive approach to maintenance are crucial in software development.
Visual elements like graphics can significantly enhance the accessibility and appeal of open source projects.

Book Recommendation: A Thursday Murder Club Mystery Series by Richard Osman

Helpful Links:

React Native
Infinite Red
Chain React Conference
React Native Radio Podcast

For more insights, make sure to follow Robin on LinkedIn and Twitter.

Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and soon, other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Check them out!

Subscribe to Maintainable on Apple Podcasts or Spotify, or search "Maintainable" wherever you stream your podcasts.
Keep up to date with the Maintainable Podcast by joining the newsletter.

Maintainable
Scott Hanselman - The Fear Factor in Maintainable Software

Maintainable

Play Episode Listen Later Jun 11, 2024 36:35


In this episode of Maintainable, Robby welcomes Scott Hanselman, VP of Developer Community at Microsoft and host of the Hanselminutes Podcast, to discuss the emotional side of maintainable software. Scott shares his thoughts on fear as a common thread in poorly maintained software, the importance of building a team culture of trust, and how finding a good work-life balance helps create better software.

The Role of Fear in Technical Debt

Scott believes that if you fear the software you work on, it's a tell-tale sign that it has maintainability issues.
Technical debt is rooted in fear: either fear of making a change that will break something, or fear of being unable to change something when needed.
He encourages teams to talk openly about their fears and anxieties regarding the software and to consider what things give them confidence in the codebase.

Building a Team Culture of Confidence

Scott emphasizes the importance of empathy in overcoming technical debt and making software more maintainable.
Senior engineers and team leads have a responsibility to make junior developers feel safe enough to speak up and ask questions.
He advocates for providing new hires with small, achievable tasks to build their confidence and trust in the software.
Scott encourages teams to use "inner loop" and "outer loop" thinking.
Inner loop: the cycle of making a change, hitting F5, and seeing changes immediately.
Outer loop: things like deploying the codebase, getting it tested, ensuring production stability.
Both experienced and junior engineers have their own inner and outer loops as individuals, and continuous improvement at all levels is key.

Overcoming Fear, Embracing Maintainability, and Finding Balance

Scott shares stories about Microsoft's journey with open-source software and how that process has shaped the company's culture around maintainable code.
He talks about the importance of striking a balance between source-opened and open-source software and finding the sweet spot for a project or organization.
Scott warns against the trap of striving for unattainable perfection. Aiming for good, solid, repeatable work over perfection ultimately yields better results.
He uses his own projects, like the Hanselminutes podcast, as examples of focusing on consistent outputs and utilizing a simple workflow.
Scott advocates for using AI tools to transcribe coding sessions, freeing up developers from extensive note-taking.

Book Recommendation: The Daily Stoic by Ryan Holiday

Helpful Links:

Scott Hanselman - Personal Website
Hanselminutes Podcast
.NET Open Source History: .NET Core | Microsoft Learn

Thanks to Our Sponsor!
Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and soon, other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Check them out!

Subscribe to Maintainable on Apple Podcasts or Spotify, or search "Maintainable" wherever you stream your podcasts.
Keep up to date with the Maintainable Podcast by joining the newsletter.

Maintainable
Stig Brautaset: Understanding Alien Artifacts in Legacy Code

Maintainable

Play Episode Listen Later Jun 4, 2024 46:04


In this episode of Maintainable, Robby chats with Stig Brautaset, Staff Software Engineer at CircleCI. Stig shares his insights on maintaining well-documented but complex legacy code, the impact of team dynamics on software maintenance, and his experiences with the SBJSON library.

Stig discusses the characteristics of well-maintained software, emphasizing the importance of team experience, domain knowledge, and risk appetite. He reflects on his own career journey, highlighting the transition from overconfidence to a balanced approach to risk-taking.

A significant portion of the conversation delves into Stig's concept of "Alien Artifacts," which describes highly resistant legacy code written by highly skilled engineers. He explains the challenges of modifying such code and shares examples from his own experiences.

Stig also talks about his work on the SBJSON library, addressing the complexities of handling multiple versions and dependency conflicts. He advocates for developers maintaining the software they ship and discusses the balance between shipping features quickly and maintaining long-term code quality.

Key Takeaways:
The influence of team dynamics on software maintenance
Understanding the concept of "Alien Artifacts" in legacy code
Strategies for handling multiple versions of a software library
The importance of developers being on call for the software they ship
Managing different types of technical debt

Book Recommendation: The Scout Mindset by Julia Galef

Helpful Links:
Stig Brautaset on LinkedIn
Alien Artifacts Blog Post
SBJSON Library
CircleCI
The Confident Commit Podcast

Want to share your thoughts on this episode? Reach out to Robby at robby@maintainable.fm.

Engineering Kiosk
#126 Killing the Mutant: Teststrategien mit Sebastian Bergmann

Engineering Kiosk

Play Episode Listen Later Jun 4, 2024 79:39


Testing isn't just testing: a deep dive with Sebastian Bergmann
Many software developers know about unit tests. Some write unit tests during development. Few truly practice test-driven development. But unit testing is only where the whole topic of testing begins. What about static testing, non-functional testing, white-box testing, end-to-end testing, dynamic testing, or integration testing? And have you ever heard of mutation testing?
Quite a lot of buzzwords. And we haven't even answered the questions of what good tests actually are, how many tests are enough tests, how AI can help us write better tests, or whether testing is a modern invention or has played a role since the dawn of programming.
This episode is a full tour of the testing landscape with Sebastian Bergmann.
Bonus: the Amiga scene lives on.
Quick feedback on the episode:
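The mutation testing the episode teases can be illustrated with a toy Python sketch (a hypothetical example, not Sebastian Bergmann's PHP tooling): a "mutant" is a copy of the code under test with one operator flipped, and a test suite is only trustworthy if it fails against, i.e. kills, that mutant.

```python
def price_with_discount(price: float, discount: float) -> float:
    """Code under test: apply a fractional discount to a price."""
    return price * (1 - discount)

def price_with_discount_mutant(price: float, discount: float) -> float:
    """Mutant: a mutation tool has flipped the '-' operator to '+'."""
    return price * (1 + discount)

def suite_kills(impl) -> bool:
    """Run the same assertions against an implementation.
    Returns True if the suite fails (mutant killed), False if it passes."""
    try:
        assert impl(100.0, 0.2) == 80.0
        assert impl(50.0, 0.0) == 50.0
        return False  # suite passed: against a mutant, this reveals a gap
    except AssertionError:
        return True   # suite failed: the mutant was detected and killed
```

Here `suite_kills(price_with_discount)` is False (the real code passes) while `suite_kills(price_with_discount_mutant)` is True: the suite kills the mutant, which is exactly what a mutation testing tool scores.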

The Real Python Podcast
Building Python Unit Tests & Exploring a Data Visualization Gallery

The Real Python Podcast

Play Episode Listen Later May 31, 2024 42:43


How do you start adding unit tests to your Python code? Can the built-in unittest framework cover most or all of your needs? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
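As a taste of the episode's topic, a minimal sketch of Python's built-in unittest framework; the slugify function is a made-up example, not from the show.

```python
import unittest

def slugify(title: str) -> str:
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Real Python Podcast"), "real-python-podcast")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")
```

Saved as test_slugify.py, this would run with `python -m unittest test_slugify`, no third-party packages required.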

Maintainable
Brit Myers - Decoding Product vs. Technical Risk

Maintainable

Play Episode Listen Later May 28, 2024 42:20


Join Robby as he welcomes Brit Myers to the podcast. Brit, currently thriving as the VP of Engineering at System Initiative, discusses the intricacies of maintaining software. She emphasizes the importance of navigable software, where the ease of tracing the code and understanding its structure is paramount. Brit highlights the significance of clear naming conventions and inline documentation, as they help in maintaining a cohesive narrative within the software. The conversation touches on the challenges posed by discrepancies in vocabulary between product management and engineering, and how glossaries can bridge these communication gaps. Brit advocates for the use of glossaries more as a reflective tool rather than a proactive one, given the dynamic nature of software development.

She also delves into strategies for managing legacy code and technical debt, proposing a pragmatic approach where wrapping and modularizing legacy components can mitigate risks. She discusses the balance between immediate feature delivery and long-term code health, stressing the importance of aligning technical risks with business objectives. The episode explores the impact of company culture on development practices, the benefits of synchronous work environments, and the evolving landscape of DevOps. Tune in to tap into Brit's valuable wisdom.

Book Recommendation: Crucial Conversations: Tools for Talking When Stakes Are High by Kerry Patterson, Stephen R. Covey, Joseph Grenny, Ron McMillan, and Al Switzler

Helpful Links:
System Initiative
Brit on LinkedIn
SPACE Framework
DORA metrics

Maintainable
Andrea Guarino - Leveraging Static Analysis for Better Code

Maintainable

Play Episode Listen Later May 21, 2024 36:18


In this episode, Robby interviews Andrea Guarino, a Software Engineer at Sonar, about the importance of leveraging static analysis tools for maintaining clean and adaptable code. Andrea emphasizes that well-maintained software should be easy to change, consistent, intentional, and responsible. He explains that static analysis tools play a crucial role in identifying potential issues, ensuring code quality, and preventing security leaks. Andrea also highlights the importance of educating developers on these best practices and integrating such tools into the development workflow to uphold a high standard of code quality.

He discusses the challenges of maintaining consistency in code, especially when dealing with legacy code written in different periods and by different teams. Andrea also touches on the concept of technical debt, suggesting a pragmatic approach to address it by balancing new code quality with gradual improvements to legacy code. Stay tuned for that and more!

Book Recommendation: The Brothers Karamazov by Fyodor Dostoevsky

Helpful Links:
Andrea on LinkedIn
Sonar
Personal Website

Programming Throwdown
173: Mocking and Unit Tests

Programming Throwdown

Play Episode Listen Later Apr 29, 2024 95:22


173: Mocking and Unit Tests

Intro topic: Headphones

News/Links:
Texas A&M University Physics Festival: https://physicsfestival.tamu.edu/
Rust vs C++ at Google - Lars Bergstrom (Google Director of Engineering): Rust teams at Google are as productive as the ones using Go and 2x those using C++: https://youtu.be/6mZRWFQRvmw?t=27012
Is Cosine Similarity Really About Similarity: https://arxiv.org/abs/2403.05440
xz utils supply chain attack - Andres Freund at Microsoft: https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/

Book of the Show:
Patrick: 80/20 Running by Matt Fitzgerald: https://amzn.to/3xyEKLo
Jason: A Movie Making Nerd: https://amzn.to/49ycDJj

Patreon Plug: https://www.patreon.com/programmingthrowdown?ty=h

Tool of the Show:
Patrick: Shapez - Android: https://play.google.com/store/apps/details?id=com.playdigious.shapez&hl=en_US&gl=US - iOS: https://apps.apple.com/us/app/shapez-factory-game/id6450830779
Jason: Dwarf Fortress: https://store.steampowered.com/app/975370/Dwarf_Fortress/

Topic: Mocking and Unit Tests
What are unit tests: balance between utility, maintenance, and coverage
Unit test: testing small functions
Regression test: testing larger functions
System test: end-to-end testing of programs
What are mocks & fakes; when to use a mock vs. a fake
Mocking libraries in various languages:
Python: https://docs.python.org/3/library/unittest.mock.html
Java: https://github.com/mockito/mockito
C++: https://github.com/google/googletest

★ Support this podcast on Patreon ★
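The mock-versus-fake distinction from the show notes can be sketched with Python's unittest.mock, linked above; the payment-gateway dependency here is a hypothetical example.

```python
from unittest import mock

def charge_order(gateway, amount_cents: int) -> str:
    """Unit under test: charge an order via a payment gateway dependency."""
    if gateway.charge(amount_cents):
        return "paid"
    return "declined"

class FakeGateway:
    """Fake: a lightweight but genuinely working stand-in with toy behavior."""
    def __init__(self, balance_cents: int):
        self.balance_cents = balance_cents

    def charge(self, amount_cents: int) -> bool:
        if amount_cents <= self.balance_cents:
            self.balance_cents -= amount_cents
            return True
        return False

# Mock: records calls and returns canned values, so a test can assert *how*
# the dependency was used, not just the end result.
mock_gateway = mock.Mock()
mock_gateway.charge.return_value = True
```

A test reaches for the fake when it cares about behavior (charging 300 against a FakeGateway(500) yields "paid" and drains the balance) and for the mock when it cares about the interaction itself, e.g. `mock_gateway.charge.assert_called_once_with(100)` after one call.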

Maintainable
Martin Emde - Ruby Central and the Art of Being Tolerant to Change

Maintainable

Play Episode Listen Later Apr 23, 2024 52:47


In this episode of Maintainable, our host Robby Russell sits down with Martin Emde, a sage in the Ruby community and the current Director of Open Source at Ruby Central. Together, they weave through the intricacies of maintainable software, legacy code, and the unwavering power of the Ruby ecosystem. Martin, with his wealth of experience, shares tales from the trenches of open-source software development, focusing on RubyGems and Bundler, and how they've evolved to face the challenges of modern software needs.

Martin addresses the elephant in the room: complexity in software. He muses on the natural progression of software projects from simplicity to complexity, drawing parallels to the growth of living organisms. It's not about fighting complexity, but embracing it with open arms, ensuring the software remains adaptable and maintainable. This conversation sheds light on the importance of testing, documentation, and community support in navigating the seas of complex software development.

Diving deeper, they discuss the essence of technical debt, not as a villain in our stories but as a necessary step in the rapid evolution of technology. Martin's perspective on technical debt as a tool for progress rather than an obstacle is refreshing, encouraging developers to approach their work with more kindness and understanding.

The discussion also highlights Ruby Central's pivotal role in nurturing the Ruby community, emphasizing the importance of contributions, whether code, conversation, or financial support. Martin's call to action for developers to engage with open-source projects, to adopt gems in need, and to provide support where possible, is a heartwarming reminder of the collective effort required to sustain the vibrant Ruby ecosystem.

For those curious minds eager to dive into the world of Ruby, contribute to its growth, or simply enjoy a captivating discussion on software development, this episode is a delightful journey through the challenges and joys of maintaining open-source software. Don't miss out on the gems of wisdom shared in this episode, and be sure to check out the useful links below for more information on how you can contribute to the Ruby community.

Book Recommendation: Project Hail Mary by Andy Weir

Helpful Links:
Bundler
Ruby Central
Adopt a Gem
Martin on Github
Martin's website

Segfault.fm
0x28 xz - You own Freund a beer!

Segfault.fm

Play Episode Listen Later Apr 6, 2024 112:08


Description (summary generated by AI): In this episode of Segfault.fm, the hosts discuss the discovery of a potentially severe backdoor in the xz compression software. Microsoft engineer Andres Freund identified the backdoor through unusual CPU usage by OpenSSH. The episode stresses that such security problems in the open-source world must be addressed quickly to prevent catastrophes. Technical details of the backdoor, such as its activation and its camouflage as unit tests, are covered in depth. The discussion closes with thoughts on possible approaches in the open-source community to minimize security risks that hinge on individual maintainers.

Shownotes:
Andres Freund initial e-mail
Timeline of the xz open source attack
The xz attack shell script
ArchLinux: The xz package has been backdoored
Some thoughts from Brian Krebs on xz
xz/liblzma: Bash-stage Obfuscation Explained
xz outbreak (jpg)
xz utils backdoor
FAQ on the xz-utils backdoor (CVE-2024-3094)
Small Interview of Andres Freund
xzbot
Reflections on distrusting xz
XZ Utils Backdoor - critical SSH vulnerability (CVE-2024-3094)
The Mystery of 'Jia Tan,' the XZ Backdoor Mastermind

Maintainable
Irina Nazarova - Investing in Innovation: The Consultancy's Guide to Growth

Maintainable

Play Episode Listen Later Mar 12, 2024 45:48


In the latest episode of Maintainable, Robby Russell has a fascinating conversation with Irina Nazarova, the CEO of Evil Martians, a name that resonates with innovation and bold strides in the software development world. They dive deep into what it takes to maintain not just code, but also the delicate balance between rapid development and long-term sustainability in the ever-evolving startup landscape.

Irina shares her unique perspective on the common traits of well-maintained software, stressing the importance of adaptability and the role of technical debt at different stages of a company's growth. With a background rich in pushing the boundaries of what's possible in software consultancy, she offers a fresh take on commercializing open-source projects, nurturing innovation within the team, and the significance of building genuine relationships with clients.

Listeners will get a glimpse into the challenges and triumphs of running a software consultancy that dares to dream big. From the intricacies of investing in internal projects to the philosophy behind fostering a culture of innovation and respect, this episode is a goldmine of insights for anyone curious about the intersection of consultancy work and product development.

Don't miss out on this engaging discussion that reveals the byproducts of passion, dedication, and a relentless pursuit of excellence in the software industry. Check out the episode and let us know your thoughts!

Book Recommendations: The Challenger Sale

Helpful Links:
Evil Martians
Irina on LinkedIn
Irina on Twitter
AnyCable
Layered Design for Ruby on Rails Applications by Vladimir Dementyev

Maintainable
Kyle Daigle - Scaling Up with AI: A New Era of Code Maintenance

Maintainable

Play Episode Listen Later Mar 5, 2024 47:19


Robby has a chat with Kyle Daigle, the Chief Operating Officer at GitHub. They dive into the evolution of software development from the perspective of maintaining and scaling software within large organizations like GitHub. Kyle talks about the importance of simplicity and readability in code over complexity, advocating for well-named variables and straightforward codebases to enhance maintainability.

He reflects on his journey from a young developer to understanding the value of well-maintained software, noting the balance between creativity in naming and the necessity for clarity as projects and teams grow. The conversation also covers the approach to technical debt, highlighting that not all old code is debt, but rather it depends on whether it hinders progress. Additionally, they explore the impact of AI tools like GitHub Copilot on software development, suggesting that these tools can aid in quicker code reviews and foster higher-level problem-solving discussions among developers. Stay tuned to learn more.

Book Recommendations: Turn the Ship Around! by David Marquet

Helpful Links:
GitHub
kdaigle @ GitHub

Maintainable
Jon Moniaci - Can We Draw A Boundary?

Maintainable

Play Episode Listen Later Feb 27, 2024 53:48


Robby speaks to the Senior Software Engineer at Perchwell, Jon Moniaci. They discuss the delicate balance between innovation and stability in software development. Jon emphasizes the importance of fostering an environment where engineers can experiment without fear, advocating for a culture of defensive programming to mitigate the fear of breaking things in production. He shares insights from his experiences, including the challenges of working with legacy code and the importance of testing and QA processes. He also talks about the value of considering software pieces as potential microservices to encourage maintainability and flexibility, even if a full microservice architecture isn't implemented. This approach, Jon suggests, allows for more sustainable development practices, ultimately leading to more resilient and adaptable software systems. Tune in for that and so much more!

Book Recommendations:
Sapiens by Yuval Noah Harari
The End of Everything (Astrophysically Speaking) by Katie Mack

Helpful Links:
Website
Jon on LinkedIn
Perchwell

.NET in pillole
Siamo certi di avere API Rest sicure? Testiamole con RESTler

.NET in pillole

Play Episode Listen Later Feb 26, 2024 11:58


We often test the logic of our REST APIs by writing unit tests, or we try them out with tools that verify they respond correctly. Today I talk about a tool, RESTler, that can also verify that our REST APIs are secure.
"RESTler is the first stateful REST API fuzzing tool for automatically testing cloud services through their REST APIs and finding security and reliability bugs in these services."
https://github.com/microsoft/restler-fuzzer
Fuzzing to improve the security and reliability of cloud services with RESTler: https://youtu.be/FYmiPoRwEbE?si=bWMOP73atXnZ2Bfv
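RESTler itself is a spec-driven, stateful fuzzer; as a much smaller illustration of the underlying idea (a hypothetical sketch, not RESTler's algorithm), a naive fuzzer can mutate a known-good request body and replay the variants against an API, flagging any 5xx response.

```python
import random

def mutate_payload(valid: dict, seed: int = 0) -> list[dict]:
    """Generate naive fuzzing variants of a known-good request body:
    drop each field, blank it out, or replace it with an oversized value."""
    rng = random.Random(seed)
    variants = []
    for key in valid:
        dropped = {k: v for k, v in valid.items() if k != key}
        variants.append(dropped)                       # missing field
        variants.append({**valid, key: ""})            # empty value
        variants.append({**valid, key: "A" * 10_000})  # oversized value
    rng.shuffle(variants)
    return variants
```

Each variant would then be POSTed to the endpoint under test; a well-behaved API rejects them with 4xx errors rather than crashing with a 500.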

Maintainable
Chad Fowler - How Small Can We Make This Problem

Maintainable

Play Episode Listen Later Feb 20, 2024 58:34


Robby has a candid chat with Chad Fowler, the General Partner & CTO at BlueYard Capital. They delve into the nuances of software maintenance, the evolution and challenges of managing software projects, and insights from Chad's tenure as CTO of Wunderlist. They discuss the importance of building software in small, manageable pieces to facilitate easy updates or replacements, the counterintuitive perspective on unit testing's impact on maintainability, and strategies for keeping software up-to-date by redeploying to new platforms.

Additionally, Chad shares his thoughts on the current industry layoff trends, emphasizing the value of adaptability and resilience. The conversation also touches on the relevance of mentoring in the tech industry and the potential implications of AI and large language models on software engineering careers. Chad's philosophy on software development, emphasizing pragmatism, adaptability, and the continuous reevaluation of problems to make them smaller and more manageable, permeates the discussion.

Book Recommendations:
The E-Myth Revisited by Michael E. Gerber
Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig

Helpful Links:
Wunderlist
The Passionate Programmer by Chad Fowler
Chad on X/Twitter
Chad on LinkedIn
The Privacy Podcast
BlueYard Capital

Spring Office Hours
Spring Office Hours: S3E6 - Spring Boot Testing with Phillip Riecks

Spring Office Hours

Play Episode Listen Later Feb 13, 2024 57:34


Join Dan Vega and special guest Phillip Riecks as they dive into testing your Spring Boot Applications. This episode will highlight the utilities that Spring Boot provides, what you should be testing, and when to write different types of tests. Tune into our live stream to have your questions answered or catch the replay on your preferred podcast platform.

Beam Radio
Episode 62: Jenny Bramble Talks Testing

Beam Radio

Play Episode Listen Later Oct 27, 2023 52:17


The panel has an enlightening discussion with Jenny Bramble about testers and developers and how to make code better! Find Jenny Bramble at @jennydoesthings on Twitter See Jenny's ElixirConf 2023 talk: Black Box Techniques for Unit Tests (https://youtu.be/Njd8Kj_yGig?si=Ox5RXzqe0Vr_g1zI) We want to connect with you! Twitter: @BeamRadio1 Send us your questions via Twitter @BeamRadio1 #ProcessMailbox Keep up to date with our hosts on Twitter @akoutmos @lawik @meryldakin @RedRapids @smdebenedetto @StevenNunez and on Mastodon @akoutmos@fosstodon.org @lawik@fosstodon.org @redrapids@genserver.social @steven@genserver.social Sponsored by Groxio (https://grox.io) and Underjord (https://underjord.io)
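One classic black-box technique of the kind Jenny's talk covers is boundary-value analysis: test inputs are derived from the edges of each input range, not from the implementation. A small sketch of the idea; the shipping_cost function is a made-up example, not from the talk.

```python
def shipping_cost(weight_kg: float) -> float:
    """Hypothetical function under test: tiered shipping prices.
    0 < weight <= 1 kg costs 5; up to 10 kg costs 9; above that, 20."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 10:
        return 9.0
    return 20.0

# Black-box test points: just below, on, and just above each boundary,
# chosen purely from the specification of the tiers.
BOUNDARY_CASES = [
    (0.001, 5.0), (1.0, 5.0),    # edges of the lowest tier
    (1.001, 9.0), (10.0, 9.0),   # edges of the middle tier
    (10.001, 20.0),              # just into the top tier
]
```

Off-by-one mistakes (a `<` written instead of `<=`) cluster at exactly these edges, which is why boundary cases catch bugs that mid-range inputs miss.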

The Laravel Podcast
Volt Breeze, Testing, Traits, & Inheritance

The Laravel Podcast

Play Episode Listen Later Oct 17, 2023 42:02


Join us in this episode as we discuss the new Livewire + Volt Functional API stack for Breeze and its capabilities. We also demystify essential testing best practices to keep your code scandal-free and away from front-page mishaps. Uncover the art of crafting meaningful tests, evaluate the pros and cons of Pest vs. PHPUnit, venture into the realm of traits and inheritance, and determine the optimal number of tests for your project. Tune in for a jam-packed episode brimming with insights and strategies to elevate your testing game.• Taylor Otwell's Twitter - https://twitter.com/taylorotwell• Matt Stauffer's Twitter - https://twitter.com/stauffermatt• Laravel Twitter - https://twitter.com/laravelphp• Laravel Website - https://laravel.com/• Livewire Volt - https://livewire.laravel.com/docs/volt• Laravel Folio - https://laravel.com/docs/10.x/folio• PEST - https://pestphp.com/• Podcast: Pest, With Nuno Maduro: https://laravelpodcast.com/episodes/pest-with-nuno-maduro• PHPUnit - https://phpunit.de/• Inertia - https://inertiajs.com/• Tighten.co - https://tighten.com/• Docker - https://www.docker.com/company/• Vala's Pumpkin Patch - https://www.valaspumpkinpatch.com/----- Editing and transcription sponsored by Tighten.

Maintainable
Dave Bryant Copeland - Quantifying the Carrying Cost

Maintainable

Play Episode Listen Later Oct 3, 2023 42:33


Robby has a chat with Dave Bryant Copeland (he/him/his), author of Sustainable Web Development with Ruby on Rails. Dave is a Senior Software Engineer and speaker. Reflecting on his experience, Dave believes that well-maintained software is software where people understand what it does and how it works, and where it can be changed. He starts off by highlighting the challenges that developers face when trying to retrofit software with more testing.

He also shares his expert insights on how software engineers can navigate design decisions while ensuring that they speak up if a proposed feature is difficult to build, test, and maintain. When it comes to software engineers getting advice from experienced practitioners, Dave says that the engineers should make sure they understand their own context and biases. He introduces us to his book and shares a very interesting story about the disappointment he experienced after building and releasing a frontend in Angular. Stay tuned for more!

Book Recommendations: The Culture Map by Erin Meyer

Helpful Links:
Dave's book - Sustainable Web Development with Ruby on Rails
Website - https://naildrivin5.com/

The Artifact
Testing architecture with ArchUnit (with Roland Weisleder)

The Artifact

Play Episode Listen Later Aug 27, 2023 61:18


Roland's LinkedIn profile: https://www.linkedin.com/in/roland-weisleder/ Devoxx talk: https://www.youtube.com/watch?v=ef0lUToWxI8

I happened to see a Devoxx talk a month ago titled "Unit Test your Java Architecture" by Roland Weisleder. I found it super interesting because unit testing is something I've always attributed to code, something more objective, while architecture has always been a little more subjective in my mind. Roland was kind enough to join me and share his experiences with ArchUnit and how it can help with architecture for Java codebases.

--- Support this podcast: https://podcasters.spotify.com/pod/show/javabrains/support
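ArchUnit expresses architecture rules as plain Java unit tests. A rough Python analogue of the same idea (hypothetical, not part of ArchUnit) can assert a layering rule by inspecting a module's imports:

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect the top-level names of all modules imported by some source code."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def violates_layering(source: str, forbidden: set[str]) -> set[str]:
    """An architecture rule as a testable check, e.g. 'the domain layer
    must not import from the web layer'. Returns the violating imports."""
    return imported_modules(source) & forbidden
```

A test suite would run this over every file in the domain layer and assert the result is empty, turning the architecture diagram into a failing build instead of a stale wiki page, which is exactly the trick ArchUnit performs for Java.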

Maintainable
Naomi Ceder - People-Centric Community Building

Maintainable

Play Episode Listen Later Jul 4, 2023 48:20


Robby has a chat with Independent Python Instructor and Consultant, Naomi Ceder (she/her/hers). Naomi values clear organization, separation of concerns and encapsulation, visibility instrumentation, and tests when it comes to creating a legacy piece of code that will be continuously useful. She talks about the importance of weighing the costs of using 3rd-party tools vs. rolling your own solution, working in small teams throughout a career, and what to consider when weighing a rewrite vs. refactoring.

They discuss her involvement in the Python Software Foundation and what a foundation typically offers to a community on the global and local levels. Naomi tells us about her book, The Quick Python Book, 3rd edition, and gives us an overview of who the ideal audience is for it. For those of you who want to become technical writers, she shares considerations for how you can get more involved in open-source communities.

Book Recommendations:
Debt: The First 5000 Years by David Graeber
Paula by Isabel Allende

Helpful Links:
https://www.naomiceder.tech/
https://www.manning.com/books/the-quick-python-book-third-edition
https://www.linkedin.com/in/naomiceder/
https://mastodon.art/@naomiceder

My life as a programmer
Is it normal for new devs to write unit tests for the seniors?

My life as a programmer

Play Episode Listen Later Apr 25, 2023 9:21


Is it normal for new devs to write unit tests for the seniors?

My life as a programmer
Why isn't software charged per unit test?

My life as a programmer

Play Episode Listen Later Apr 12, 2023 8:29


Why isn't software charged per unit test?

My life as a programmer
How likely is a developer who refuse to write unit test to get work?

My life as a programmer

Play Episode Listen Later Apr 10, 2023 9:21


How likely is a developer who refuse to write unit test to get work?

My life as a programmer
When to not write unit test?

My life as a programmer

Play Episode Listen Later Feb 27, 2023 7:08


When to not write unit test?

My life as a programmer
What should be covered in unit test?

My life as a programmer

Play Episode Listen Later Feb 5, 2023 11:55


What should be covered in unit test?

airhacks.fm podcast with adam bien
What does it mean to be a professional programmer?

airhacks.fm podcast with adam bien

Play Episode Listen Later Jan 8, 2023 60:24


An airhacks.fm conversation with Ken Fogel (@omniprof) about: previously Ken on airhacks.fm "#205 Mr. Omni", JavaOne in Las Vegas, What does it mean to be a professional programmer, the engineering principles, building a Las Vegas Conference Management System, the Use Case UML diagram, how to capture requirements, the developer and the client have to have a good idea about the system, the “Transitioning to Java” book, "#103 Unit Testing Considered Harmful", System Tests over Unit Tests, misusing system tests to identify dead code, Unit Test coverage is the false indicator of quality, cypress over selenium, programmer vs. developer, Christopher Alexander and software patterns, GRASP patterns or how to build a maintainable system, write simple code, KISS and YAGNI Ken Fogel on twitter: @omniprof

Engineering Kiosk
#41 SQL Injections - Ein unterschätztes Risiko

Engineering Kiosk

Play Episode Listen Later Oct 18, 2022 68:37


SQL injections: one of the most widespread security vulnerabilities on the web, even in 2022
The majority of applications interact with a database in some way, so most developers will have heard of the "SQL injection" vulnerability. For 24 years it has been one of the most widespread security holes on the internet, and there is no end in sight. What exactly is a SQL injection? What different kinds are there? Why has this entry point been with us for so long? Where does it come from, and who discovered it? How can you protect yourself and test your application adequately? All that and much more in this episode.
Bonus: the contrast between Duisburg and Berlin, and how SQL injection was discovered as a by-product.

Feedback (also welcome as a voice message)
Email: stehtisch@engineeringkiosk.dev
Twitter: https://twitter.com/EngKiosk
WhatsApp: +49 15678 136776
We are also happy to cover your audio feedback in one of the next episodes; simply send an audio file via email or WhatsApp voice message to +49 15678 136776

Links
Phrack Magazine Volume 8, Issue 54, Dec 25th, 1998, article 08 of 12: http://www.phrack.org/archives/issues/54/8.txt
OWASP Top Ten 2021: https://owasp.org/www-project-top-ten/
CVE Details - Security Vulnerabilities Published In 2022 (SQL Injection): https://www.cvedetails.com/vulnerability-list/year-2022/opsqli-1/sql-injection.html
Analyzing Prepared Statement Performance: https://orangematter.solarwinds.com/2014/11/19/analyzing-prepared-statement-performance/
SQL Injection Prevention Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html
OWASP Top 10 (2021) - A03:2021 - Injection: https://owasp.org/Top10/A03_2021-Injection/
CVE Details - Heartbleed (CVE-2014-0160): https://www.cvedetails.com/cve/CVE-2014-0160/
CVE Details - Log4Shell (CVE-2021-44228): https://www.cvedetails.com/cve/CVE-2021-44228/
xkcd "Exploits of a Mom": https://xkcd.com/327/
trivago's HackerOne program: https://hackerone.com/trivago
Owncloud: https://owncloud.com/
TYPO3: https://typo3.org/
Wordpress: https://wordpress.com/de/
SQL-Proxy: https://github.com/sysown/proxysql
GitHub CodeQL: https://codeql.github.com/
sqlmap: https://sqlmap.org/
SQLi-Fuzzer: A SQL Injection Vulnerability Discovery Framework Based on Machine Learning: https://ieeexplore.ieee.org/document/9657925
OWASP Zed Attack Proxy (ZAP): https://www.zaproxy.org/
PlanetScale: https://planetscale.com/
Awesome static analysis: https://github.com/analysis-tools-dev/static-analysis

Chapter markers
(00:00:00) Intro
(00:00:42) SQL injections from the 90s and the diversity of Berlin
(00:02:49) Today's topic: a web security deep dive into SQL injections
(00:05:07) What are SQL injections?
(00:08:48) Are SQL injections still a problem in 2022?
(00:10:56) When was the first SQL injection? Where does this vulnerability come from?
(00:13:22) Why are SQL injections still such a big problem?
(00:19:37) Different kinds of SQL injections: output-based, error-based, blind SQL injections, time-based SQL injections, out-of-band SQL injections
(00:27:42) Bug bounty: a two-channel SQL injection attack combined with cross-site scripting (XSS) at trivago
(00:29:42) Multi-stage attacks and chaining several vulnerabilities
(00:33:16) Possible damage from a SQL injection: modifying data, executing commands on the server, reading and writing local files, executing SQL functions, denial of service (DoS)
(00:39:09) Countermeasures to prevent SQL injections: prepared statements, updating database components, limited privileges for database users, web application firewalls (WAF)
(00:56:42) Ways to test your application automatically: unit tests, static analysis, dynamic analysis with sqlmap, and fuzzing
(01:02:51) How database-as-a-service providers ensure security
(01:06:51) Outro

Hosts
Wolfgang Gassler (https://twitter.com/schafele)
Andy Grunwald (https://twitter.com/andygrunwald)
Gassler (https://twitter.com/schafele)Andy Grunwald (https://twitter.com/andygrunwald)Feedback (gerne auch als Voice Message)Email: stehtisch@engineeringkiosk.devTwitter: https://twitter.com/EngKioskWhatsApp +49 15678 136776
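One of the countermeasures the episode covers is prepared statements (parameterized queries). As a minimal, hedged sketch of the idea - my own illustration using Python's built-in sqlite3, not code from the episode - the user input is passed as a bound parameter, so an injection attempt arrives at the database as a harmless literal value:

```python
import sqlite3

# In-memory demo database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name):
    # The `?` placeholder sends `name` as data, never as SQL text,
    # so the structure of the query cannot be altered by the input.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))        # the legitimate lookup returns the row
print(find_user("' OR '1'='1"))  # the classic injection payload matches nothing
```

Had the query been built by string concatenation ("... WHERE name = '" + name + "'") instead, the second call would have returned every row in the table.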

WordPress | Post Status Draft Podcast
Till Krüss on Object Cache Pro, WordPress, plugins, testing, and performance

WordPress | Post Status Draft Podcast

Play Episode Listen Later Oct 14, 2022 55:50


Back in August, I had a long conversation with Till Krüss (edited down to

Programmers Quickie
Apache Spark Unit Tests

Programmers Quickie

Play Episode Listen Later Sep 27, 2022 6:08


Apache Spark unit tests.
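A pattern that often comes up around Spark unit testing (a hedged sketch of the general idea, not code from the episode; `parse_line` is a hypothetical transformation) is to keep the per-record logic in a pure function, so it can be tested without starting a SparkSession and only handed to `rdd.map` in the real job:

```python
def parse_line(line):
    # Pure per-record transformation: in the actual Spark job this would
    # be applied via rdd.map(parse_line); here it has no Spark dependency
    # at all, so it unit-tests instantly.
    user, _, amount = line.partition(",")
    return (user.strip(), float(amount))

# Testable without any Spark infrastructure:
assert parse_line("alice, 9.50") == ("alice", 9.5)
```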

yegor256 podcast
M199: Unit tests are the Safety Net that you can't afford to not use

yegor256 podcast

Play Episode Listen Later Jun 24, 2022 5:38


Coding without unit tests is similar to building a house without a safety net: you can do it, but your productivity will be extremely low. You will mostly be driven by fear. Can you afford this? Video is here: https://youtu.be/Y0Zx_sdVG48
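To make the safety-net metaphor concrete (my own minimal sketch, not code from the episode): once a test pins down a function's behavior, any refactoring that silently changes that behavior fails immediately instead of in production.

```python
import unittest

def slugify(title):
    # Turn "Hello World!" into "hello-world" for use in a URL.
    cleaned = "".join(c for c in title if c.isalnum() or c == " ")
    return "-".join(cleaned.lower().split())

class SlugifyTest(unittest.TestCase):
    # The safety net: refactor slugify() as aggressively as you like;
    # these assertions catch any change in observable behavior.
    def test_basic(self):
        self.assertEqual(slugify("Hello World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Unit   Tests  "), "unit-tests")
```

Run with `python -m unittest` against the file containing the test case.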

Salesforce Way
94. Universal Mock | Suraj Pillai

Salesforce Way

Play Episode Listen Later Apr 7, 2022


Suraj Pillai, who joins to talk about his open-source library Universal Mock, is a Salesforce Architect and Senior Developer at Vertex Computer Systems. Main points: what mocking in tests is; the Salesforce Stub API; Suraj's Universal Mock open-source library; Universal Mock as a lightweight library; the readability of the Universal Mock library; Suraj's Salesforce dev configuration in Vim. Links: Suraj's LinkedIn; Suraj's Twitter; Suraj's Universal Mock open-source library; Suraj's dotfiles; fzf fuzzy search; the vim-fugitive Git plugin; the Salesforce dev Discord channel. The post 94. Universal Mock | Suraj Pillai appeared first on SalesforceWay.
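Universal Mock itself is written in Apex, but the pattern it implements - replacing a real dependency with a stand-in that returns canned values and records how it was called - looks much the same in any language. A hedged Python sketch of the concept (my illustration, not the library's API):

```python
from unittest.mock import Mock

def charge(gateway, amount):
    # Production code only sees the gateway interface, so a test can
    # substitute any object that responds to .pay().
    receipt = gateway.pay(amount)
    return receipt["status"]

gateway = Mock()
gateway.pay.return_value = {"status": "ok"}  # canned response

assert charge(gateway, 42) == "ok"
gateway.pay.assert_called_once_with(42)      # the mock recorded the call
```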

Iggeret HaLevana ~ the Message of the Moon
Introducing Iggeret HaLevana ... preparing for the Final Exam (High Holidays 5783) with the Unit Test (the Hebrew months of 5782!)

Iggeret HaLevana ~ the Message of the Moon

Play Episode Listen Later Sep 13, 2021 3:38


Hi! Welcome to Iggeret HaLevana, a year-long ~spiritual~ journey with me, Shira. I'm working to tap into the mystical magical energies of the Hebrew months in the hopes that when next year's Rosh Hashanah comes around, I will be set up to ascend into an inspired 5783 (aka next year). The Jewish year is one based upon the cycle of the moon, and we mark the beginning of a new month by the sight of the sliver of a crescent of the new moon in the sky. Growing up in a relatively rural area, I always loved gazing at the expansive, fabulously glittering moon and stars. Now I live in New York, and I realize I took that night sky for granted. I may not be able to see the stars now, but thank G-d I can still see the moon! Here we go! I find the High Holidays, Rosh Hashanah and Yom Kippur, to be overwhelming. I dread their arrival every year - particularly Rosh HaShanah, the head of the Hebrew New Year. The Jewish year is a circle, and every Aleph Tishrei, or the first of the Hebrew month of Tishrei, is our January 1st. The idea of Rosh Hashanah is to prepare ourselves for the Yom HaDin, the Day of Judgement - Yom Kippur. On Rosh Hashanah and on Yom Kippur, we chant Unetaneh Tokef, “On Rosh Hashanah it is inscribed and on Yom Kippur it is sealed” - “בְּרֹאשׁ הַשָּׁנָה יִכָּתֵבוּן, וּבְיוֹם צוֹם כִּפּוּר יֵחָתֵמוּן” And what is inscribed / sealed are the events of the coming year - who will live, who will die, who will accrue wealth, who will lose it, who shall rest, and who shall wander, who will be peaceful and who will be tormented. So, Rosh Hashanah and Yom Kippur are big days. And in the same way we'd prepare ourselves for actually standing in a secular court, we prepare ourselves for the High Holidays. But the high holidays stress me out exactly because of this - they feel like a rapidly approaching exam that I can never be prepared enough for. I've never been one for cumulative final exams, but I'm always here for a unit test. They're so much more manageable.
I bring up the unit test because, while they are definitely not as important or life-altering as the final test, they still do contribute to your comprehension of the total topic and definitely count for something. Rosh Hashanah is at the tail end of the 12 months and is what I am equating to the Final Exam. Each Hebrew month, then, is the Unit Test — an opportunity to work on a specific unit within the Final Exam (RH 5783). My goal for myself, and for those of you on this journey with me, is that by working hard for the Unit Test every month, we'll be SUPER prepared for the Final Exam, in Rosh Hashanah 5783 (next year)! Enter my new podcast idea - Iggeret HaLevana. Don't be stressed about the Final Exam, when you can focus on the Unit Test for now. Each new month, each new moon has the energy of renewal. Month (חודש / Chodesh) comes from the word for “new” (חדש / Chadesh), and just like we have Rosh HaShanah, the head of the new year, we also have Rosh Chodesh, the head of the new month. An opportunity for renewal every 30 or so days. Join me on this Moon-centric journey! Check the podcast page for episode 1 - the Hebrew month of Tishrei. Email me at shirajkaplan@gmail.com or join my email list here. --- Send in a voice message: https://anchor.fm/iggerethalevana/message

Iggeret HaLevana ~ the Message of the Moon
Ep. 1 // the Hebrew Month of Tishrei and the Mazal / Zodiac Libra

Iggeret HaLevana ~ the Message of the Moon

Play Episode Listen Later Sep 13, 2021 14:54


I find the High Holidays, Rosh Hashanah and Yom Kippur, to be overwhelming. I dread their arrival every year - particularly Rosh HaShanah, the head of the Hebrew New Year. The Jewish year is a circle, and every Aleph Tishrei, or the first of the Hebrew month of Tishrei, is our January 1st. The idea of Rosh Hashanah is to prepare ourselves for the Yom HaDin, the Day of Judgement - Yom Kippur. On Rosh Hashanah and on Yom Kippur, we chant Unetaneh Tokef, “On Rosh Hashanah it is inscribed and on Yom Kippur it is sealed” - “בְּרֹאשׁ הַשָּׁנָה יִכָּתֵבוּן, וּבְיוֹם צוֹם כִּפּוּר יֵחָתֵמוּן” And what is inscribed / sealed are the events of the coming year - who will live, who will die, who will accrue wealth, who will lose it, who shall rest, and who shall wander, who will be peaceful and who will be tormented. So, Rosh Hashanah and Yom Kippur are big days. And in the same way we'd prepare ourselves for actually standing in a secular court, we prepare ourselves for the High Holidays. But the high holidays stress me out exactly because of this — they feel like a rapidly approaching exam that I can never be prepared enough for. I've never been one for cumulative final exams, but I'm always here for a unit test. They're so much more manageable. I bring up the unit test because, while they are definitely not as important or life-altering as the final test, they still do contribute to your comprehension of the total topic and definitely count for something. Rosh Hashanah is at the tail end of the 12 months and is what I am equating to the Final Exam. Each Hebrew month, then, is the Unit Test — an opportunity to work on a specific unit within the Final Exam (RH 5783). My goal for myself, and for those of you on this journey with me, is that by working hard for the Unit Test every month, we'll be SUPER prepared for the Final Exam, in Rosh Hashanah 5783 (next year)! Enter my new podcast idea - Iggeret HaLevana. 
Don't be stressed about the Final Exam when you can focus on the Unit Test for now. Each new month, each new moon has the energy of renewal. Month (חודש / Chodesh) comes from the word for “new” (חדש / Chadesh), and just like we have Rosh HaShanah, the head of the new year, we also have Rosh Chodesh, the head of the new month. Join me on this Moon-centric journey! First we will begin with a refresher on the significance of the New Moon - Rosh Chodesh was the first mitzvah, commandment, we were given as a people, as Bnei Yisrael (the Children of Israel) officially - right after our Yetziat Mitzrayim (Exodus from Egypt). Our sages give us many interpretations as to why this is the first and why it's important, but one I will share is that without Rosh Chodesh, we have no structure. Without Rosh Chodesh, we have no calendar. Time has felt so indistinguishable during this difficult past year, but every month, every Rosh Chodesh we have a new energy to tap into, something to break the monotony. There is a magical, mystical piece of Kabbalistic writing called Sefer Yetzirah, the book of creation / formation. Kabbalah, as a refresher, is the ancient Jewish tradition of mystical interpretation of the Torah. Kabbalah is also 100% over my head, beyond me by 100000000 miles, but I like the idea of tapping into ~mystical magical energy~ so I will use the interpretations of people EXPONENTIALLY more learned than me, to understand the meaning of Sefer Yetzirah. Sefer Yetzirah, as well as its author, are equally mysterious. There have been many attempts to understand or at least categorize its contents. Cont'd… For full text, email me at shirajkaplan@gmail.com or join my email list here. --- Send in a voice message: https://anchor.fm/iggerethalevana/message

Ask an Engineering Manager
010: Engineering: Do you think unit tests are important?

Ask an Engineering Manager

Play Episode Listen Later Aug 22, 2021 18:52


The short answer here is of course "yes". The episode elaborates why, when, and for whom unit tests have the greatest value, how to overcome reasons not to write tests, and lists arguments for why unit tests are important. Note: I misspoke in the episode - the full term for TDD is, of course, correctly "Test-driven development", not "test-driven design" - although, since one of the main benefits of TDD is its influence on the technical design, that would work, too. Want to have your questions answered? Send them to askanengineeringmanager@gmail.com. Already listening? Please take 5 minutes to let us know what you think: https://www.surveymonkey.de/r/28SSB95

Last Week in .NET
Remembering the women of École Polytechnique

Last Week in .NET

Play Episode Listen Later Dec 7, 2020 12:38


Normally I'd start this out with some of the funnier things that happened; but before I dive into what happened last week, I want to talk about this week. Warning: death and violence follow. Yesterday was the 31st anniversary of the École Polytechnique massacre. If you're not familiar with this atrocity, let me quote Deb Chachra's chilling telling of the event: On December 6, 1989, in late afternoon a man had walked into the École Polytechnique, the engineering school of the University of Montreal, carrying a hunting rifle, ammunition, and a knife. He entered a mechanical engineering class of about sixty students, separated out the nine women, and told them, "I am fighting feminism." One of the women, Nathalie Provost, responded, "Look, we are just women studying engineering, not necessarily feminists ready to march on the streets to shout we are against men, just students intent on leading a normal life." She reports that his response was, "You're women, you're going to be engineers. You're all a bunch of feminists. I hate feminists." He then opened fire on the women, killing six of them. Then he went from floor to floor in the building, targeting and shooting women. Fourteen women were killed that day, twelve of them engineering students, one a nursing student, and one a university employee. Here are their names: Anne St-Arneault, Geneviève Bergeron, Hélène Colgan, Nathalie Crotea, Barbara Daigneault, Anne-Marie Edward, Maud Haviernick, Barbara Klueznick, Maryse Laganière, Maryse Leclair, Anne-Marie Lemay, Sonia Pelletier, Michèle Richard, and Annie Turcotte. (Me: You can hear more about these women here.) An additional thirteen people were injured. Nathalie Provost was shot four times, but survived. In the weeks, months, and years that followed, among other responses, Canada implemented stricter gun-control regulations, and began to observe December 6th as a National Day of Remembrance and Action on Violence Against Women.
The event remains the worst mass murder in Canadian history. Our industry has problems with sexism, whether latent or outright. While we hope never to have another atrocity like this one, we should strive for equality and justice in our industry. As a white dude in tech, I'll do everything I can; and I ask you to do the same. If you've never had to fear for your life just because you wanted to be an engineer, then you too need to stand up and help stop the sexism in our industry. Now, on to what happened last week in the world of .NET.

The Polyglot Developer Podcast
TPDP037: Writing Tests in a Development Project

The Polyglot Developer Podcast

Play Episode Listen Later Jun 4, 2020 36:44


In this episode I'm joined by repeat guest, and awesome developer, Corbin Crutchley from Unicorn Utterances. The topic of this episode is testing, and I'm not talking about taking exams - I'm talking about writing tests in your development projects to produce much better final products. If you've ever been curious about the differences between unit tests, integration tests, end-to-end tests, and all of the other possible types of tests that might exist, this episode is for you. It doesn't matter if you're a new developer, a casual developer, or an expert developer: knowing how to write tests will greatly benefit your career. A brief writeup of this episode can be found via https://www.thepolyglotdeveloper.com/2020/06/tpdp-e37-writing-tests-development-project/