Podcasts about Code coverage

A measure of how much of a program's source code is exercised by its tests

  • 38 podcasts
  • 53 episodes
  • 48m average episode duration
  • Infrequent episodes
  • Latest episode: Apr 1, 2025

POPULARITY

(Popularity chart, 2017–2024)


Best podcasts about Code coverage

Latest podcast episodes about Code coverage

Engineering Kiosk
#189 Fuzzing: When Randomness Is Your Best Tester, with Prof. Andreas Zeller

Engineering Kiosk

Play Episode Listen Later Apr 1, 2025 79:38


Fuzzing: software stability through randomly generated input data. Testing, and especially automated testing of your own software, is considered a best practice in software development, whether we are talking about unit testing, integration testing, functional testing, or acceptance testing. The idea is to keep the number of defects in the software low. But even if your tests reach 100% code coverage, that does not mean your program has no bugs. All of these kinds of tests share one problem: the input parameters are usually constructed according to some structure, and that by no means guarantees that they cover all possible cases.

This is exactly where fuzzing, or fuzz testing, comes in: testing your software with randomly generated input parameters. That may sound wild at first, but it can uncover entirely new problems in your software. And that is the topic of this episode.

Our guest is Prof. Dr. Andreas Zeller, a researcher in software testing and author of the Fuzzing Book. With him we clarify what fuzzing actually is, where it comes from, and how it compares to other testing strategies such as unit testing. He gives us an insight into the differences between search-based fuzzing, grammar fuzzing, symbolic fuzzing, and specification-based fuzzers, how complex systems can be improved with metamorphic testing, what the oracle problem is, how databases, for example, can be fuzzed, and also how all of this can be applied in practice and how you can easily get started with fuzzing. Bonus: what an oracle has to do with testing.

You can find our current advertising partners at https://engineeringkiosk.dev/partners
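The core idea described above (feed the program inputs it was never designed for and watch what breaks) can be sketched in a few lines of Python. This is only an illustrative toy, not code from the episode or from the Fuzzing Book; `parse_age` and `fuzz_string` are invented names for the demonstration.

```python
import random
import string

def fuzz_string(max_len=100):
    """Generate a random string of printable characters."""
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def parse_age(record):
    """Toy function under test: expects 'name:age' and returns the age."""
    name, age = record.split(":")   # blows up on records without exactly one ':'
    return int(age)                 # blows up on a non-numeric age

def fuzz(target, runs=1000):
    """Throw random inputs at `target` and collect the ones that crash it."""
    crashes = []
    for _ in range(runs):
        data = fuzz_string()
        try:
            target(data)
        except Exception as err:    # structured unit tests would never feed most of these inputs
            crashes.append((data, err))
    return crashes

if __name__ == "__main__":
    failures = fuzz(parse_age)
    print(f"{len(failures)} of 1000 random inputs raised an exception")
```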

DotNet & More
DotNet&More #138: Metrics from the Inside, and More

DotNet & More

Play Episode Listen Later Nov 15, 2024 110:15


We have discussed what code quality metrics mean, but how do they work on the inside? For that we have a special guest... ;) Thanks to everyone who listens to us. We look forward to your comments.

Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z

Shownotes:
00:00:00 Introduction
00:04:10 Is Code Coverage the best metric?
00:16:00 Code Coverage from the inside
00:29:20 Useless metrics
00:36:00 Number of commits as a metric
00:39:20 Cyclomatic complexity
00:51:00 Code Duplication
00:58:00 Metrics for managers
01:13:00 How developers relate to metrics
01:22:00 How secret (password) scanning works
01:25:00 How to introduce metrics
01:31:00 On SLA and GDC

Links:
- https://en.wikipedia.org/wiki/Cyclomatic_complexity : Cyclomatic complexity
- https://www.sonarsource.com/docs/CognitiveComplexity.pdf : Cognitive Complexity from Sonar

Video: https://youtube.com/live/nKnJmiH5Ri8
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Twitch: https://www.twitch.tv/dotnetmore
Discuss:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
– Twitter: https://twitter.com/dotnetmore
– Telegram channel: https://t.me/dotnetmore
Copyright: https://creativecommons.org/licenses/by-sa/4.0/

DotNet & More
DotNet&More #136: Cyclomatic Complexity, Code Coverage, and Other Metrics

DotNet & More

Play Episode Listen Later Nov 1, 2024 62:15


Last time we went through Microsoft's single combined metric; today we will look at other popular ways to squeeze code quality into one number. Thanks to everyone who listens to us. We look forward to your comments.

Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z

Shownotes:
00:00:00 Introduction
00:09:00 Why cyclomatic complexity?
00:27:00 Is a method that is called only once a code smell?
00:31:55 Code Smells as a metric
00:43:00 Security metrics
00:46:00 Code duplication

Links:
- https://blog.jetbrains.com/qodana/2023/10/top-6-code-quality-metrics-to-empower-your-team/ : Metrics from JetBrains
- https://blog.codacy.com/code-quality-metrics : Metrics from Codacy
- https://docs.sonarsource.com/sonarqube/latest/user-guide/code-metrics/metrics-definition/ : Metrics from Sonar

Video: https://youtube.com/live/mqFOa9X-rcs
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Twitch: https://www.twitch.tv/dotnetmore
Discuss:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
– Twitter: https://twitter.com/dotnetmore
– Telegram channel: https://t.me/dotnetmore
Copyright: https://creativecommons.org/licenses/by-sa/4.0/

My life as a programmer
Is code coverage a good metric?

My life as a programmer

Play Episode Listen Later Aug 5, 2024 9:00


Is code coverage a good metric?

DotNet & More
DotNet&More #113: Testing Your Tests with Code Coverage, Mutation Testing, and More

DotNet & More

Play Episode Listen Later Mar 22, 2024 90:11


We have written the tests, but what comes next? How do we check that we have covered all the possible cases? And maybe some of the tests are useless altogether? There are several tools for that. Thanks to everyone who listens to us. We look forward to your comments.

Free open course "Rust for DotNet developers": https://www.youtube.com/playlist?list=PLbxr_aGL4q3S2iE00WFPNTzKAARURZW1Z

Shownotes:
00:00:00 Introduction
00:02:20 Criteria for test quality
00:13:00 Checking that tests actually verify functionality
00:21:00 Line Code Coverage
00:31:10 Branch Code Coverage
00:47:30 ExcludeFromCodeCoverageAttribute and how to use it correctly
00:55:40 Mutation Testing: does it "work" in .NET?

Links:
- https://github.com/coverlet-coverage/coverlet : Coverlet
- https://github.com/danielpalme/ReportGenerator : Report Generator
- https://stryker-mutator.io/ : Stryker Mutator

Video: https://youtube.com/live/6EAzwRJMIg8
Listen to all episodes: https://dotnetmore.mave.digital
YouTube: https://www.youtube.com/playlist?list=PLbxr_aGL4q3R6kfpa7Q8biS11T56cNMf5
Discuss:
- Telegram: https://t.me/dotnetmore_chat
Follow the news:
– Twitter: https://twitter.com/dotnetmore
– Telegram channel: https://t.me/dotnetmore
Background music: http://freemusicarchive.org/music/Six_Umbrellas/Ad_Astra
Copyright: https://creativecommons.org/licenses/by-sa/4.0/
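The episode's jump from line and branch coverage to mutation testing is easy to demonstrate with a toy example. The sketch below is illustrative only (the tools named in the show notes, Coverlet and Stryker Mutator, automate this for .NET); `is_adult` and the deliberately weak test suite are invented for the demonstration.

```python
ORIGINAL_SOURCE = """
def is_adult(age):
    return age >= 18
"""

def load(source):
    """Compile a snippet of source and return its is_adult function."""
    namespace = {}
    exec(source, namespace)
    return namespace["is_adult"]

def mutate(source):
    """Produce one mutant by weakening the comparison operator."""
    return source.replace(">=", ">")

def run_tests(is_adult):
    """A weak test suite: 100% line coverage, yet it never probes the boundary."""
    assert is_adult(30) is True
    assert is_adult(5) is False

if __name__ == "__main__":
    run_tests(load(ORIGINAL_SOURCE))      # the real implementation passes
    mutant = load(mutate(ORIGINAL_SOURCE))
    try:
        run_tests(mutant)                 # if this also passes, the mutant "survived"
        print("Mutant survived: add a test for is_adult(18)")
    except AssertionError:
        print("Mutant killed")
```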

Listen To This Bull LIVE!
What to know about Building Code Coverage & Roof Claims - Part 1

Listen To This Bull LIVE!

Play Episode Listen Later Dec 21, 2023 5:47


Join me and Mike Goldenstein of Roofle as we delve into an in-depth study of Building Codes and Claims.
LISTEN TO THIS BULL. "Exposing Bull in the Insurance Industry"
https://listentothisbull.com/
LISTEN TO THIS BULL is available on your favorite streaming platform!
▻Apple Podcasts: shorturl.at/bkuP7
▻Spotify: shorturl.at/bpEMU
▻Google Podcasts: shorturl.at/yAJY5
▻Amazon Music: shorturl.at/rxBX9
Check out our social media content!
▻Facebook: / listentothisbull
▻Instagram: / listentothisbull
▻Linkedin: / listento..
▻TikTok: / listentothisbull
***Listen to this BULL is intended to educate Contractors, Property Owners, Public Adjusters and Attorneys on the "BULL" they might encounter in an insurance claim. Often people feel like they have no options when going up against the colossus insurance companies but we are here to prove that you DO HAVE OPTIONS! Listen to this Bull is not intended to give legal advice. Please seek the advice of an attorney before acting on any information provided by this video or any opinions offered by Mathew Mulholland, Remington Huggins, or any of Listen to this Bull's guests. All rights reserved.
Transcript
Support the show

Listen To This Bull LIVE!
What to know about Building Code Coverage & Roof Claims - Part 2

Listen To This Bull LIVE!

Play Episode Listen Later Dec 21, 2023 53:18


Join me and Mike Goldenstein of Roofle as we delve into an in-depth study of Building Codes and Claims. Part 2, after an idiot accidentally killed the feed... MAT!
LISTEN TO THIS BULL. "Exposing Bull in the Insurance Industry"
https://listentothisbull.com/
LISTEN TO THIS BULL is available on your favorite streaming platform!
▻Apple Podcasts: shorturl.at/bkuP7
▻Spotify: shorturl.at/bpEMU
▻Google Podcasts: shorturl.at/yAJY5
▻Amazon Music: shorturl.at/rxBX9
Check out our social media content!
▻Facebook: / listentothisbull
▻Instagram: / listentothisbull
▻Linkedin: / listento..
▻TikTok: / listentothisbull
***Listen to this BULL is intended to educate Contractors, Property Owners, Public Adjusters and Attorneys on the "BULL" they might encounter in an insurance claim. Often people feel like they have no options when going up against the colossus insurance companies but we are here to prove that you DO HAVE OPTIONS! Listen to this Bull is not intended to give legal advice. Please seek the advice of an attorney before acting on any information provided by this video or any opinions offered by Mathew Mulholland, Remington Huggins, or any of Listen to this Bull's guests. All rights reserved.
Transcript
Support the show

Software Engineering Radio - The Podcast for Professional Software Developers
SE Radio 575: Nir Valtman on Pipelineless Security

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Aug 1, 2023 56:49


Nir Valtman, co-Founder and CEO at Arnica, discusses pipelineless security with SE Radio host Priyanka Raghavan. They start by defining pipelines and then consider how to add security. Nir lays out the key challenges in getting good code coverage with the pipeline-based approach, and then describes how to implement a pipelineless approach and the advantages it offers. Priyanka quizzes him on the concept of "zero new hardcoded secrets," as well as some ways to protect GitHub repositories, and Nir shares examples of how a pipelineless approach could help in these scenarios. They then discuss false positives and handling developer fatigue in dealing with alerts. The show ends with some discussion around the product that Arnica offers and how it implements the pipelineless methodology.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Top 5 Python libraries to interpret machine learning models; Infusing 3D worlds into LLMs; Friendly AI chatbots and bioweapons for criminals; ChatGPT on Android!; AI predicts code coverage faster and cheaper

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

Play Episode Listen Later Jul 31, 2023 12:23


- Top 5 Python libraries to interpret machine learning models
- Google DeepMind's new system empowers robots with novel tasks
- The debate over crippling AI chip exports to China continues
- Stability AI introduces 2 LLMs close to ChatGPT
- ChatGPT is coming to Android!
- Meta collabs with Qualcomm to enable on-device AI apps using Llama 2
- Worldcoin by OpenAI's CEO will confirm your humanity
- AI predicts code coverage faster and cheaper
- Introducing 3D-LLMs: Infusing 3D worlds into LLMs
- Friendly AI chatbots will be designing bioweapons for criminals 'within years'

This podcast is generated using the Wondercraft AI platform (https://www.wondercraft.ai/?via=etienne), a tool that makes it super easy to start your own podcast, by enabling you to use hyper-realistic AI voices as your host. Like mine! Get a 50% discount the first month with the code AIUNRAVELED50.

Are you eager to expand your understanding of artificial intelligence? Look no further than the essential book "AI Unraveled: Demystifying Frequently Asked Questions on Artificial Intelligence," by Etienne Noumen, now available at Shopify, Apple, Google, or Amazon today!

Tech Lead Journal
#139 - A Developer's Guide to Effective Software Testing - Mauricio Aniche

Tech Lead Journal

Play Episode Listen Later Jul 3, 2023 55:01


“An effective developer is an effective software tester. As a developer, it's your responsibility to make sure what you do works. And automated testing is such an easy and cheap way of doing it." Mauricio Aniche is the author of “Effective Software Testing”. In this episode, Mauricio explained how to become a more effective software developer by using effective and systematic software testing approaches. We discussed several such testing techniques, such as testing pyramid, specification-based testing, boundary testing, structural testing, mutation testing, and property testing. Mauricio also shared his interesting view about test-driven development (TDD) and suggested the one area we can do to improve our test maintainability.   Listen out for: Career Journey - [00:03:43] Winning Teacher of the Year - [00:06:07] An Effective Developer is an Effective Tester - [00:09:33] Reasons for Writing Automated Tests - [00:10:43] Systematic Tester - [00:13:45] Testing Pyramid - [00:17:50] Unit vs Integration Test - [00:20:25] Specification-Based Testing - [00:22:55] Behavior-Driven Design - [00:25:34] Boundary Testing - [00:27:01] Structural Testing & Code Coverage - [00:30:16] Mutation Testing - [00:35:31] Property Testing - [00:38:45] Test-Driven Development - [00:42:00] Test Maintainability - [00:46:03] Growing Object-Oriented Software, Guided by Tests - [00:48:07] 3 Tech Lead Wisdom - [00:49:24] _____ Mauricio Aniche's BioDr. Maurício Aniche's life mission is to help software engineers to become better and more productive. Maurício is a Tech Lead at Adyen, where he heads the Tech Academy team and leads different engineering enablement initiatives. Maurício is also an assistant professor of software engineering at Delft University of Technology in the Netherlands. His teaching efforts in software testing gave him the Computer Science Teacher of the Year 2021 award and the TU Delft Education Fellowship, a prestigious fellowship given to innovative lecturers. He is the author of the “Effective Software Testing: A Developer's Guide”, published by Manning in 2022. He's currently working on a new book entitled “Simple Object-Oriented Design” which should be on the market soon. Follow Mauricio: LinkedIn – linkedin.com/in/mauricioaniche Twitter – @mauricioaniche Website – effective-software-testing.com Newsletter – effectivesoftwaretesting.substack.com _____ Our Sponsors Are you looking for a new cool swag? Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available. Check out all the cool swags available by visiting techleadjournal.dev/shop. And don't forget to brag yourself once you receive any of those swags. Like this episode? Show notes & transcript: techleadjournal.dev/episodes/139 Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.

TestGuild News Show
Automation Code Coverage, Audio Video Testing and More TGNS71

TestGuild News Show

Play Episode Listen Later Jan 16, 2023 9:59


How does Code Coverage for Automation work for Game Test Automation? What are some of the top Continuous Automation Testing Platforms? Have you seen this excellent method to help troubleshoot Java Applications? Find out in this episode of the Test Guild DevSecOps News Show for the week of January 15th. So grab your favorite cup of coffee or tea, and let's do this.

0:19 Applitools FREE Account Offer https://applitools.info/joe
0:53 Code Coverage for Automation https://testguild.me/721vh1
2:06 Ensure Seamless Audio-Visual Quality for iOS Apps https://testguild.me/9q5b3v
3:15 LambdaTest announces Test Analytics to improve decision making https://testguild.me/qg2rpw
3:50 My Q4 2022 Forrester Wave evaluates the top 15 CAT platform players https://testguild.me/45tqvr
5:45 Cognizant Intelligent Test Scripter https://testguild.me/urnhjj
6:43 Microsoft's 49% stake worth $10 billion in OpenAI https://testguild.me/3m74bz
7:36 Komodor goes Freemium! https://testguild.me/o8p4qa
8:18 Troubleshooting Java Applications using Dynamic Instrumentation https://testguild.me/devvw2

Azure DevOps Podcast
Marco Rossignoli: Automated Code Coverage Measurement - Episode 227

Azure DevOps Podcast

Play Episode Listen Later Jan 9, 2023 32:53


Marco Rossignoli is a Dev at Microsoft on the .NET Test Platform and Code coverage team. He's also the co-maintainer of the Coverlet Collector NuGet package, which has over 100M downloads.   Topics of Discussion: [1:15] Jeffrey talks about the architect forums he's hosting and facilitating in 2023. You can register here. [2:53] Marco talks about how he got into code coverage. [6:44] Why is code coverage even useful to measure? [12:40] How does Coverlet work and how is it different from the old ones? How do you run it? [20:30] Is there any difference in how it works between Azure Pipelines or GitHub Actions or TeamCity? [21:40] With multiple test suites running, how does Coverlet support pulling all the results together so that you get the one number of code coverage? [23:40] Report generator merges all of the reports. [25:16] What exactly is Cobertura? [26:02] Marco shares why he is excited about Coverlet and the many opportunities it gives us in the future.   Mentioned in this Episode: Architect Tips — New video podcast! Azure DevOps Clear Measure (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's YouTube Jeffrey Palermo's Twitter — Follow to stay informed about future events! Programming with Palermo programming@palermo.network NuGet Gallery GitHub Coverlet Coverage Marco Rossignoli .Net Coverage Code   Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.

TestGuild News Show
6 Billion Dollar Testing, Code Coverage Danger and More TGNS56

TestGuild News Show

Play Episode Listen Later Aug 29, 2022 9:57


What test tool company was just acquired for 6 billion dollars? Is setting code coverage goals dangerous? And 97% of software testing pros are using automation. Are you? Find out the answers to these and other full-pipeline DevOps, software testing, automation testing, performance testing, and security testing questions in 10 minutes or less in this episode of the Test Guild News Show for the week of August 28th. So grab your favorite cup of coffee or tea, and let's do this.

0:25 Create a FREE Applitools Account https://rcl.ink/xroZw
0:49 Curiosity SDLC https://testguild.me/w690y0
0:55 WebdriverIO https://testguild.me/ik9ctd
2:29 Don't set Code Coverage goals. It's DANGEROUS. https://testguild.me/ed9nl0
3:23 Micro Focus Point of View - Model-Based Testing https://testguild.me/p6a69l
3:59 OpenText to Acquire Micro Focus International plc https://testguild.me/day3b2
5:10 Report: 97% of software testing pros are using automation https://testguild.me/a9q3m1
6:21 How to perform JSON Schema Validation using Rest-Assured? https://testguild.me/gzc5wn
7:16 How to use Resilience Hub's Fault Injection Experiments https://testguild.me/j91ksm
8:19 Hunting Down System Interrupts https://testguild.me/avymps
8:52 ThreatX https://testguild.me/00hxo4

todo:cast - Entwickler Podcast
Episode 42: Code Coverage

todo:cast - Entwickler Podcast

Play Episode Listen Later Jul 25, 2022 20:08


Good test coverage of your own code is important, but how do you keep track of it and make it a fixed part of your DevOps process? Today we talk about code coverage: how to measure it, what level of coverage is worth aiming for, and where best to make that information visible. Is 100% code coverage really the goal?

Links:
Introduction to Code Coverage: https://www.atlassian.com/continuous-delivery/software-testing/code-coverage
Malte on Twitter: https://twitter.com/MalteLantin
Robin-Manuel on Twitter: https://twitter.com/robinmanuelt
Feedback and suggestions: todopodcast@outlook.com

Lambda3 Podcast
Lambda3 Podcast 302 – Anti-Patterns in Tests

Lambda3 Podcast

Play Episode Listen Later Jun 3, 2022 92:21


In this episode of the podcast, Lambda3's Fernando Okuma, Pedro Fernandes, and Victor Cavalcante, together with guest Lucas Teles, talk about anti-patterns in tests, their experiences, preferences, and more. Join our Telegram group and share your comments with us: https://lb3.io/telegram

Podcast feed: www.lambda3.com.br/feed/podcast
Feed with technical episodes only: www.lambda3.com.br/feed/podcast-tecnico
Feed with non-technical episodes only: www.lambda3.com.br/feed/podcast-nao-tecnico

Lambda3 · #302 - Anti-Patterns in Tests

Agenda:
- What a test anti-pattern is
- Tests that don't test anything (assert true)
- Test names that don't help you find what is breaking in the code
- Variable names that don't help you understand the test (e.g. var a, var ok, var temp, ...)
- Testing third-party code (framework, library, ...)
- Test fragility
- Validating several scenarios in a single test
- Code Coverage as a measure of code quality
- Unit tests that don't use mocks to control their dependencies
- Integration tests that don't use mocks for third-party services
- Testing mocks
- DRY vs DAMP
- Not mocking services we don't control in e2e tests
- Integration tests of data repositories against an in-memory database
- Are a static compiler and static analysis an aid equivalent to tests?
- Does unit testing an API controller make sense?

Links:
Stryker Mutator
The test pyramid
Respawn
Should I test private methods
Type Driven Development - let the types guide you
Tests in .NET - Link 1
Tests in .NET - Link 2
Tests in .NET - Link 3
Tests in .NET - Link 4
Podcast Lambda3 #18 - Tests
Podcast Lambda3 #186 - Let's talk about tests?
Podcast Lambda3 #210 - Usability Testing
Effective software testing

Participants:
Lucas Teles - @lteles
Fernando Okuma - @feokuma
Pedro Fernandes - @pedrofernandesfilho
Victor Cavalcante - @vcavalcante

Editing: Compasso Coolab

Credits for the music used in this program: Music by Kevin MacLeod (incompetech.com) licensed under Creative Commons: By Attribution 3.0 - creativecommons.org/licenses/by/3.0

Tech Lead Journal
#90 - Clean Craftsmanship - Robert C. Martin (Uncle Bob)

Tech Lead Journal

Play Episode Listen Later May 30, 2022 61:03


“The simplest way to describe craftsmanship is pride of workmanship. It is the mindset that you are working on something important and you are going to do it well."

Robert C. Martin (aka Uncle Bob) is the co-founder of cleancoders.com, an acclaimed speaker at conferences worldwide, and prolific author of multiple best-selling books. In this episode, Uncle Bob shared some insights from his latest book, “Clean Craftsmanship”. He started by sharing the current major challenge of the software development industry: as a young discipline, it suffers from a state of perpetual inexperience amid exponentially accelerating demand for programmers, which drove Uncle Bob to write the book to help define disciplines, standards, and ethics for software craftsmanship. He then touched on the five key disciplines of clean craftsmanship, specifically focusing on test-driven development and refactoring. Towards the latter half, Uncle Bob described a few essential standards and ethics of clean craftsmanship, such as never ship s**t, always be ready, do no harm, and estimate honestly.

Listen out for:
Career Journey - [00:07:29]
Clean Craftsmanship - [00:10:54]
Programmer as a Profession - [00:15:31]
Craftsmanship - [00:18:46]
Disciplines - [00:22:45]
Disciplines: Test-Driven Development - [00:28:50]
Disciplines: Refactoring - [00:34:32]
Code Coverage - [00:39:02]
Standard: Never Ship S**t - [00:42:35]
Standard: Always Be Ready - [00:47:16]
Ethics: Do No Harm - [00:50:01]
Ethics: Estimate Honestly - [00:53:56]
2 Tech Lead Wisdom - [00:57:50]

Robert C. Martin's Bio
Robert Martin (Uncle Bob) has been a programmer since 1970. He is the co-founder of the online video training company cleancoders.com and founder of Uncle Bob Consulting LLC. He served as Master Craftsman at 8th Light Inc. and is an acclaimed speaker at conferences worldwide. He is a prolific writer and has published hundreds of articles, papers, blogs, and best-selling books including: “The Clean Coder”, “Clean Code”, “Agile Software Development: Principles, Patterns, and Practices”, and “Clean Architecture”. He also served as the Editor-in-chief of the C++ Report and as the first chairman of the Agile Alliance.

Follow Uncle Bob:
Twitter – @unclebobmartin
Clean Coder – http://cleancoder.com
Clean Coders – https://cleancoders.com
GitHub – https://github.com/unclebob

Our Sponsor
Today's episode is proudly sponsored by Skills Matter, the global community and events platform for software professionals. Skills Matter is an easier way for technologists to grow their careers by connecting you and your peers with the best-in-class tech industry experts and communities. You get on-demand access to their latest content and thought leadership insights, as well as the exciting schedule of tech events running across all time zones. Head on over to skillsmatter.com to become part of the tech community that matters most to you - it's free to join and easy to keep up with the latest tech trends.

Like this episode? Subscribe on your favorite podcast app and submit your feedback. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Pledge your support by becoming a patron. For more info about the episode (including quotes and transcript), visit techleadjournal.dev/episodes/90.

Tech Lead Journal
#58 - Principles for Writing Valuable Unit Tests - Vladimir Khorikov

Tech Lead Journal

Play Episode Listen Later Oct 4, 2021 53:22


“The main goal of unit testing is to enable sustainable growth of your software project that enables you to move faster with a more quality code base." Vladimir Khorikov is the author of “Unit Testing: Principles, Practices, and Patterns” and the founder of Enterprise Craftsmanship blog. In this episode, we discussed in-depth about unit testing. Vladimir broke down the four pillars of unit testing and the anatomy of a good unit test, as well as mentioned a couple of common unit testing anti-patterns. We also discussed topics such as test-driven development, code coverage and other unit testing metrics, test mocks and how to use it properly, and how to be pragmatic when writing unit tests. Listen out for: Career Journey - [00:05:32] Unit Testing - [00:08:20] The Goal of Unit Testing - [00:11:34] Test-Driven Development - [00:12:55] Code Coverage & Other Successful Metrics - [00:17:35] Pragmatic Unit Tests - [00:21:04] 4 Pillars of Unit Testing - [00:23:40] Anatomy of a Good Unit Test - [00:34:01] Test Mocks - [00:38:16] Unit Testing Anti-Patterns - [00:47:05] Tech Lead Wisdom - [00:49:56] _____ Vladimir Khorikov's Bio Vladimir Khorikov is the author of the book “Unit Testing: Principles, Practices, and Patterns”. He has been professionally involved in software development for over 15 years, including mentoring teams on the ins and outs of unit testing. He's also the founder of the Enterprise Craftsmanship blog, where he reaches 500 thousand software developers yearly. Follow Vladimir: LinkedIn – https://www.linkedin.com/in/vladimir-khorikov-bb482653 Twitter – https://twitter.com/vkhorikov Enterprise Craftsmanship – https://enterprisecraftsmanship.com/ Pluralsight – https://app.pluralsight.com/profile/author/vladimir-khorikov Our Sponsor Are you looking for a new cool swag? Tech Lead Journal now offers you some swags that you can purchase online. These swags are printed on-demand based on your preference, and will be delivered safely to you all over the world where shipping is available. Check out all the cool swags by visiting https://techleadjournal.dev/shop. Like this episode? Subscribe on your favorite podcast app and submit your feedback. Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Pledge your support by becoming a patron. For more info about the episode (including quotes and transcript), visit techleadjournal.dev/episodes/58.

Rustacean Station
Rust Code Coverage with Daniel McKenna

Rustacean Station

Play Episode Listen Later Sep 18, 2021 55:49


Allen Wyma talks with Daniel McKenna, a software engineer, about his code coverage tool for Rust projects, Tarpaulin.

Contributing to Rustacean Station
Rustacean Station is a community project; get in touch with us if you'd like to suggest an idea for an episode or offer your services as a host or audio editor!
Twitter: @rustaceanfm
Discord: Rustacean Station
Github: @rustacean-station
Email: hello@rustacean-station.org

Timestamps
[@01:35] - LLVM
[@05:50] - Vectorcast
[@07:00] - Cargo-kcov
[@07:38] - Gdb
[@07:47] - ptrace.2
[@14:40] - Arduino
[@15:47] - Probe-rs
[@22:42] - Tarpaulin Crater (tater)
[@23:34] - Tarpaulin-viewer
[@27:51] - ImGui
[@31:00] - Ndarray
[@32:09] - Is Rust a competitor of Julia and Python in terms of machine learning?
[@36:10] - When did Daniel get into programming?
[@49:20] - Tips for beginners
[@53:53] - FiraCode

Other Resources
Writing a Debugger
Writing a Linux Debugger Setup
Awesome Rust Mentors

Credits
Intro Theme: Aerocity
Audio Editing: Plangora
Hosting Infrastructure: Jon Gjengset
Show Notes: Plangora
Hosts: Allen Wyma

FINOS Open Source in Fintech Podcast
The Code Coverage Paradox - Diffblue - Enrico Trentin & Matthew Richards

FINOS Open Source in Fintech Podcast

Play Episode Listen Later Feb 24, 2021 34:18


Season 3, 2nd edition of 2021 of the FINOS Open Source in Fintech Podcast In this podcast, our Director of Community, James McLeod has a chat with Enrico Trentin and Matthew Richards of Diffblue, primarily about the code coverage paradox, the theme of one of our recent Open Source in Fintech Meetups. Enrico is the Diffblue Developer Relations Lead, and Matthew Richards (or Matt since there are two Matthews, and the other happens to be the CEO) is the Diffblue Head of Product. Diffblue is a new silver member of FINOS, and works on automating tedious and repetitive tasks for developers within unit testing, using AI or Artificial intelligence. James is our Director of Community for FINOS, and host for many of our Open Source in Fintech Podcasts and Meetups. During this interview Enrico and Matthew discuss unit testing and test driven development, the tradeoffs in software delivery for both the developer and business sides of the house, code coverage within the software development lifecycle, test quality, identifying which parts of your code introduce risk, and then AI or Artificial Intelligence in software development, driverless cars, Skynet, and the Terminator… hmmm… So please enjoy this interview, check out our previous episodes, and subscribe to the podcast for some great upcoming discussions on open source, financial services, fintech, and how they all fit together. ►►Diffblue - https://www.diffblue.com/ ►►Primer on the Code Coverage Paradox - https://www.finos.org/blog/primer-the-code-coverage-paradox-diffblue ►►The Code Coverage Paradox Meetup Video - https://www.finos.org/blog/enrico-trentin-the-code-coverage-paradox -=-=-=-=- About the Open Source in Fintech Podcast The FINOS Open Source in Fintech Podcast celebrates open source projects and interesting topics at the cross section of financial services and open source. So far, our industry experts have discussed practical applications of and their real-world experiences with a range of open source projects including desktop interoperability, low code platforms, synthetic data, and data modeling. They’ve also discussed best practices for inner source, common myths about open source and why commercial companies choose to introduce open source offerings. Tune in and subscribe to hear what comes next. ►►Visit here for more FINOS Events (https://www.finos.org/hosted-events) ►►Visit the FINOS website (https://www.finos.org/) - and Get Involved (https://www.finos.org/get-involved) ►►Join us for the FINOS & Linux Foundation Open Source Strategy Forum (OSSF - https://events.linuxfoundation.org/open-source-strategy-forum/)

Tool and Library Qualification
Episode 37: Modified Condition/Decision Coverage — MCDC

Tool and Library Qualification

Play Episode Listen Later Oct 19, 2020 24:33


MCDC is a ubiquitous technique used in software testing, but the tools that test it must also be appropriately qualified. In this episode Dr. Oscar Slotosch gives an overview of this code coverage criterion, explains its history and development, and shares his experiences with its advantages and disadvantages for tool qualification. You can find an in-depth description of MCDC in the article Oscar mentions in this episode, and you can learn more about the topic of code coverage in one of our previous episodes, Episode 26: Code Coverage in Qualification. We can be reached through podcast@validas.de and all information about Validas can be found on our website, validas.de.

Python Bytes
#191 Live from the Manning Python Conference

Python Bytes

Play Episode Listen Later Jul 22, 2020 52:33


Special guest: Ines Montani Michael #1: VS Code Device Simulator Want to experiment with MicroPython? Teaching a course with little IoT devices? Circuit Playground Express BBC micro:bit Adafruit CLUE with a screen Get a free VS code extension that adds a high fidelity simulator Easily create the starter code (main.py) Interact with all the sensors (buttons, motion sensors, acceleration detection, device shake detection, etc.) Deploy and debug on a real device when ready Had the team over on Talk Python. Brian #2: pytest 6.0.0rc1 New features You can put configuration in pyproject.toml Inline type annotations. Most user facing API and internal code. New flags - --no-header - --no-summary - --strict-config : error on unknown config key - --code-highlight : turn on/off code highlighting in terminal Recursive comparison for dataclass and attrs Tons of fixes Improved documentation There’s a list of breaking changes and deprications. But really, nothing in the list seems like a big deal to me. Plugin authors, including myself, should go test this. Already found one problem. pytest-check: stop on fail works fine, but failing tests marked with xfail show up as xpass. Gonna have to look into that. And might have to recruit Anthony to help out again. To try it: pip install pytest==6.0.0rc1 I’m currently running through the pytest book to make sure it all still works with pytest 6. So far, so good. The one hiccup I’ve found so far, TinyDB had a breaking change with 4.0, so you need to pip install tinydb==3.15.2 to get the tasks project to run right. I should have pinned that in the original setup.py. However, all of the pytest stuff is still valid. Guido just tweeted: “Yay type annotations in pytest!” Ines #3: TextAttack Python framework for adversarial attacks and data augmentation for natural language processing What are adversarial attacks? You might have seen examples like these: image classifier predicting a cat even if the image is complete noise people at protests wearing shirts and masks with certain patterns to trick facial recognition Google Translate hallucinating bible texts if you feed it nonsense or repetitive syllables What does it mean to "understand" a model? How does it behave in different situations, with unexpected data? We can't just inspect the weights – that's not how neural networks work To understand a model, we need to run it and find behaviours we don't like TextAttack lets you run various different “attacks” from the current academic literature It also lets you create more robust training data using data augmentation, for example, replacing words with synonyms, swapping characters, etc. Michael #4: What is the core of the Python programming language? By Brett Cannon, core developer Brett and I discussed Python implementation for WebAssembly before Get Python into the browser, but with the fact that both iOS and Android support running JavaScript as part of an app it would also get Python on to mobile. We have lived with CPython for so long that I suspect most of us simply think that "Python == CPython". PyPy tries to be so compatible that they will implement implementation details of CPython. Basically most implementations of Python strive to pass CPython's test suite and to be as compatible with CPython as possible. Python’s dynamic nature makes it hard to do outside of an interpreter That has led Brett to contemplate the question of what exactly is Python? 
How much would one have to implement to compile Python directly to WebAssembly and still be considered a Python implementation? Does Python need a REPL? Could you live without locals()? How much compatibility is necessary to be useful? The answer dictates how hard it is to implement Python and how compatible it would be with preexisting software. [Brett] has no answers It might make sense to develop a compiler that translates Python code directly to WebAssembly and sacrifice some compatibility for performance. It might make sense to develop an interpreter that targets WebAssembly's design but maintains a lot of compatibility with preexisting code. It might make sense to simply support RustPython in their WebAssembly endeavours. Maybe Pyodide will get us there. Michael’s thoughts: How about a Python standard language spec? A standard-library “standard???!?” spec. It’s possible - .NET did it. What would be build if we could build it with web assembly? Interesting options open up, say with NodeJS like capabilities, front-end frameworks This could be MUCH bigger if we got browser makes to support alternative runtimes through WebAssembly Brian #5: Getting started with Pathlib Chris May Blog post: Stop working so hard on paths. Get started with pathlib! PDF “field guide”: Getting started with Pathlib Really great introduction to Pathlib Some of the info This file as a path object: Path(__file__) Parent directory: Path(__file__).parent Absolute path: Path(__file__).parent.resolve() Two levels up: Path(__file__).resolve(strict=True).parents[1] See pdf for explanation. Current working dir: Path.cwd() Path building with / Working with files and folders Using glob Finding parts of paths and file names. Any time spent learning Pathlib is worth it. If I can do it in Pathlib, I do. It makes my code more readable. Ines #6: Data Version Control (DVC) We're currently working on v3.0 of spaCy and one of the big features is going to be a completely new way to train your custom models, manage end-to-end training workflows and make your experiments reproducible It will also integrate with a tool called DVC (short for Data Version Control), which we've started using internally DVC is an open-source tool for version control, specifically for machine learning and data Machine learning = code + data. You can check your code into a Git repo, but you can't really check in your datasets and model weights. So it's very difficult to keep track of changes. You can think of DVC as “Git for data” and the command line usage is actually pretty similar – for example, you run dvc init to initialize a repo and dvc add to start tracking assets DVC lets you track any assets by adding meta files to your repository. So everything, including your data, is versioned, and you can always go back to the commit with the best accuracy It also builds a dependency graph based on the inputs and outputs of each step, so you only have to re-run a step if things changed for example, you might have a preprocessing step that converts your data and then a step that trains your model. If the data hasn't changed, you don't have to re-run the preprocessing step. They recently released a new tool called CML (short for Continuous Machine Learning), which we haven't tried yet. CI for Machine Learning Previews look pretty cool: you can submit a PR with some changes and a GitHub action will run your experiment and auto-comment on the PR with the results, changes in accuracy and some graphs (similar to tools like Code Coverage etc.) 
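The Brian #5 pathlib segment above lists its examples inline; here they are consolidated into one small runnable sketch (the "data/settings.toml" path is just a placeholder for illustration, not something from the episode).

```python
from pathlib import Path

here = Path(__file__)                           # this file as a Path object
parent = here.parent                            # containing directory
absolute = here.parent.resolve()                # absolute path
two_up = here.resolve(strict=True).parents[1]   # two levels up

config = parent / "data" / "settings.toml"      # path building with /
py_files = sorted(parent.glob("*.py"))          # glob within a folder

print(here.name, here.stem, here.suffix)        # parts of the file name
print(Path.cwd())                               # current working directory
print(config.exists(), len(py_files))
```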
Extra Michael: Podcast Python Search API package, by Anton Zhiyanov Mid-string f-string upgrades coming to PyCharm. And Flynt! via Colin Martin Ines: Built-in generic types in 3.9 (PEP 585): you can now write list[str] ! Brian: https://testandcode.com/120: FastAPI & Typer - Sebastián Ramírez Jokes Fast API Job Experience Sebastián Ramírez - @tiangolo I saw a job post the other day. It required 4+ years of experience in FastAPI. I couldn't apply as I only have 1.5+ years of experience since I created that thing. Maybe it's time to re-evaluate that "years of experience = skill level". Defragged Zebra

Test & Code - Python Testing & Development
118: Code Coverage and 100% Coverage

Test & Code - Python Testing & Development

Play Episode Listen Later Jun 26, 2020 42:48


Code coverage or test coverage is a way to measure which lines of code and which branches in your code are exercised during testing. Coverage tools are an important part of software engineering, but there are also lots of different opinions about using them.
- Should you try for 100% coverage?
- What code can and should you exclude?
- What about targets?
I've been asked many times what I think about code coverage or test coverage. This episode is a train-of-thought brain dump on what I think about code coverage. We'll talk about:
- how I use code coverage to help me write source code
- line coverage and branch coverage
- behavior coverage
- using tests to ask and answer questions about the system under test
- how to target coverage just to the code you care about
- excluding code: good reasons and bad reasons to exclude code
And also the Pareto Principle or 80/20 rule, and the law of diminishing returns and how that applies (or doesn't) to test coverage.
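As a concrete companion to the line-versus-branch distinction above, here is a minimal sketch using coverage.py's Python API (most projects drive this through the coverage CLI or pytest-cov instead; `grade` is an invented function for illustration, not something from the episode).

```python
import coverage

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"          # this line is never exercised below

cov = coverage.Coverage(branch=True)   # branch=False would measure plain line coverage
cov.start()
grade(80)                              # only the True side of the `if` runs
cov.stop()
cov.save()
cov.report(show_missing=True)          # the report flags the missed line and partial branch
```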

Creative Engineering
Flutter Testing and AppStore Rejection

Creative Engineering

Play Episode Listen Later Apr 27, 2020 51:20


Follow Up
- Rody's experiences with Apple approval
- Rejections
- Recourse
- Options
- 2 million people using Flutter

Testing
- State management and testing trade-offs
- UI logic and replaying capabilities
- Logging
- Mocking
- Smoke Tests
- MVVM
- Firebase
- Filesystem
- Folder Structure / Layers
- Packages that can be tested
- Code Coverage: "flutter test --coverage"

https://codemagic.io/start/
https://sentry.io/welcome/
https://pub.dev/packages/mock_cloud_firestore

To visually run widget tests: flutter run test/widget_test.dart
Suggested finders: Just tap anywhere when running a widget test using flutter run
https://pub.dev/packages/device_preview

Logging
- Sentry
- Crashlytics

Flutter Testing
- Best Practices
- Flutter Driver
- Unit Tests
- Flutter Octopus
- Flutter Interact
- Flutter VR Testing
- Xcode testing, Android testing
- Flutter i18n Localization

Norbert Kozsir - @norbertkozsir
https://twitter.com/norbertkozsir
https://github.com/norbert515

Rody Davis - @rodydavis
https://twitter.com/rodydavis
https://github.com/rodydavis
https://youtube.com/rodydavis
https://rodydavis.com

Our podcast player: https://rodydavis.github.io/creative_engineering/
Follow on Twitter: https://twitter.com/CreativeEngShow

Tool and Library Qualification
Episode 26: Code Coverage in Qualification

Tool and Library Qualification

Play Episode Listen Later Apr 20, 2020 25:11


In this episode Dr. Oscar Slotosch discusses the role of code coverage in the qualification of tools and libraries. Tune in to learn about how code coverage is applied in a safety analysis, the importance of the modified condition/decision coverage (MCDC) criterion in software testing, and why code coverage analysis is strongly recommended, but almost never required by modern safety standards. To learn more about the applications of code coverage in qualification, listen to our Episode 27: Compiler Qualification, where we showcase an in-depth analysis of the methods, procedures, and pitfalls in the qualification of compilers.

null++: بالعربي
Episode[14]: Unit Testing 101

null++: بالعربي

Play Episode Listen Later Apr 11, 2020 60:48


Because of the high complexity and depth of this episode, we will share with you the episode outline and the topics discussed here as a reference.

Episode Outline:

What is unit testing?
- A test for the smallest possible pieces of your program.
- "Every piece of it works fine, but only on its own." (Sultan El Sokary)

Why Unit Testing
- Helps the developer understand the logic he/she is implementing more deeply.
- Helps the developer write more modular, loosely coupled code.
- Makes it faster to develop and debug. (You can fake all the possible scenarios and see how the test responds to each.)
- Finds bugs early.
- Helps with documenting the code you are writing.
- Helps when it comes to refactoring.
- Helps automate the development process and decrease deployment-related friction.

Unit Testing Best Practices

1. Identifying Units:
When it comes to the unit you are testing, you need to answer three questions.
a. What is the output of this unit?
b. What helper functions does this unit use to achieve that output?
c. Are there any side effects resulting from this unit (does it modify state variables outside its scope)?

2. Naming & Description
Why? It makes the test easier to read and hence easier to figure out what went wrong.
Describe & It. (This thing / should or does something.) A pytest-flavored sketch of this naming style follows after these notes.
- The search function should return an array of strings matching the search keyword.
- The search function should return an empty array when a keyword is not matching any.
- Throw an exception if the keyword is an empty string.

3. Mocking & Stubs
- The unit test is 50% mocking and 50% clean code.
Check Martin Fowler's article for more depth on mocking; see the resources section.

References:
Martin Fowler's "Mocks Aren't Stubs".
8 Benefits of Unit Testing.
The testing introduction I wish I had.
Rethinking Unit Test Assertions.
Unit Tests: More Readable Describe/It Statements in Protractor/Jasmine.

Episode Picks:
Alfy: Dokkan Tech.
Luay: Rich Dad, Poor Dad Book.
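The "Describe & It" naming examples in the outline above translate naturally into other test frameworks. Below is a pytest-flavored sketch (the episode's examples are Jasmine/Protractor style; the `search` function here is invented purely for illustration).

```python
import pytest

def search(items, keyword):
    """Illustrative unit under test."""
    if keyword == "":
        raise ValueError("keyword must not be empty")
    return [item for item in items if keyword in item]

class TestSearchFunction:                                     # plays the role of "describe: the search function"
    def test_returns_the_strings_matching_the_keyword(self):  # "it should return matching strings"
        assert search(["apple", "banana"], "app") == ["apple"]

    def test_returns_an_empty_list_when_nothing_matches(self):
        assert search(["apple", "banana"], "zzz") == []

    def test_raises_when_the_keyword_is_an_empty_string(self):
        with pytest.raises(ValueError):
            search(["apple"], "")
```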

Tool and Library Qualification
Episode 20: SuperTest with Marcel Beemster (Solid Sands)

Tool and Library Qualification

Play Episode Listen Later Jan 15, 2020 24:52


In this episode, Dr. Oscar Slotosch sat down for a conversation with Marcel Beemster, the CTO of Solid Sands— the one-stop-shop for C and C++ compiler and library testing, validation, and safety services. They discuss all the things that can go wrong with a compiler, where the biggest challenges in compiler use come from, and how Marcel and his team use the Solid Sands’ SuperTest validation suite. Tune in to learn about compiler development and testing from Oscar and Marcel. More information on Solid Sands and SuperTest can be found on https://solidsands.com/supertest. For an in-depth look at the qualification strategies for compilers and the use of code coverage in qualification argumentation, listen to two of our newer episodes: Episode 26: Code Coverage in Qualification and Episode 27: Compiler Qualification.

Tool and Library Qualification
Episode 12: Qualification Test Strategies

Tool and Library Qualification

Play Episode Listen Later Jul 29, 2019 19:50


This week’s episode is an introduction to qualification test strategies — join Dr. Oscar Slotosch to learn about qualification testing and how it differs from quality testing, how a test strategy is designed, and what principles should be followed when deciding how many test cases to run. In this episode we will also help you determine the exact number of test cases required for an efficient qualification test. Learn about the basics of qualification testing in Episode 04: Qualification Processes and Episode 26: Code Coverage in Qualification, immerse yourself into a detailed discussion of test strategies for compiler qualification in our Episode 27: Compiler Qualification, or learn about Validas’s own test case generator in Episode 28: ForeC++ — Test Case Generator.

Talent Hub Talk
What really makes a CTA with Steven Herod

Talent Hub Talk

Play Episode Listen Later Jul 17, 2019 43:12


Today we talk to Certified Technical Architect Steven Herod, co-host of the successful Salesforce developer podcast Code Coverage, as he shares his deep insights into the Salesforce world, his own journey into the Salesforce ecosystem, and what the ecosystem looked like back in 2010. We look at how the roles have changed on Salesforce projects and in Salesforce teams from then until now. Being a CTA himself, Steven explains the Architecture path and Salesforce certifications, and how these relate to the end goal of becoming a Salesforce Certified Technical Architect. We discuss the difference between a Salesforce Developer and an Architect, how people can and should interview Salesforce Developers, and the things to look out for.

airhacks.fm podcast with adam bien
80% Code Coverage is Not Enough

airhacks.fm podcast with adam bien

Play Episode Listen Later May 5, 2019 61:15


An airhacks.fm conversation with James Wilson (@jgwilson42) about: the result of pressing the break button on a BBC computer, and ZX Spectrum, Space Invaders with Basic, extending Minecraft with Java, an accidental tester career, the best interviewees got programmer jobs, hackers and testers, developers like the happy path, unit test coverage is useless without good asserts, is 80% code coverage a valuable target?, code coverage was used as a motivation for writing tests, reflection utilities to increase code coverage, getters / setters never break, the Code as a Crime Scene book, methods longer than a screen are problematic, the ratio between trivial and good asserts, a good javadoc and unit tests follow similar principles, system tests are the most important ones, unit testing is good for checking error scenarios, the more tests you have, the easier it is to locate errors, the Law of Triviality requires standard names for test categories, integration testing and system testing, reusing system tests as clients and stress tests, a UK retailer goes down, take the max load and double it, jbmc is a bytecode verification tool, diffblue cover generates unit tests, generating unit tests quickly for legacy backends, playground, the What is the AI in "AI for Code"? blogpost, the diffblue blog, @diffbluehq. James Wilson on Twitter: @jgwilson42. Check out: javaeetesting.com - the online test about Unit-, Integration-, and Stress Testing - and see you at airhacks.com.

Code Coverage - Salesforce Developer Podcast
Episode 44 — Did We Miss Something?

Code Coverage - Salesforce Developer Podcast

Play Episode Listen Later Mar 7, 2019 90:08


Against all odds, it's an episode of Code Coverage. Topics discussed include:
* Audio problems, not knowing we were actually having some (doh)
* A brief tour of some of the features from Spring '19
* Salesforce and devops, or not
* The end of an era: Aura
* Lightning Web Components and their likely impact
* New flow builder
* The latest release of OS/2

Agile Thoughts
021 Code Coverage Scandal ​

Agile Thoughts

Play Episode Listen Later Jan 21, 2019


Note: this IT radio drama starts with episode 14, "Why Devs Don't TDD". Start listening there.

Connect
Visit Agile Thoughts and register to receive free development, analysis, or leadership and management materials and learn to excel at developing software. I'll also send information on my low-cost email courses you can take via the internet.

021 …

Good Day, Sir! Show
Probably Spam

Good Day, Sir! Show

Play Episode Listen Later Nov 14, 2018 100:05


In this episode, we discuss ISV development, certifications, code coverage and quality code, Amazon moving away from Oracle, Zendesk Sunshine, and AI replacing people.

Getting to Know Vue.js: Learn to Build Single Page Applications in Vue from Scratch
Now Available: Getting to Know Vue.js - WIPDeveloper.com
Caffè corretto - Wikipedia
Choose Technology Suppliers Carefully – Perspectives
'A.I. is not gonna replace people,' says Salesforce executive
SAP buying Qualtrics to face off against Salesforce
Break free with Zendesk Sunshine | Zendesk Blog
Google signs massive SF office lease at former Salesforce headquarters - SFChronicle.com
95% Code Coverage : salesforce
Best practices for migrating an Oracle database to Amazon RDS PostgreSQL or Amazon Aurora PostgreSQL: Migration process and infrastructure considerations | AWS Database Blog

Die Code Coroner - Tech-Podcast für Softwarequalität

Today Thomas talks with Richard about how to measure software in a meaningful way. What tools are there for visualizing code in order to make complexity visible? Which metrics are actually meaningful?

Good Day, Sir! Show
Fat Packages

Good Day, Sir! Show

Play Episode Listen Later Aug 9, 2017 105:14


In this episode, we discuss podcasting, managed packages, lightning component development, flows, and agile projects with fellow podcaster and co-host of the Code Coverage podcast, Steven Herod.

Code Coverage Podcast
Twitter: Steven Herod

soundbite.fm: a podcast network
Merge Conflict 52: Bug Fixes and Improvements

soundbite.fm: a podcast network

Play Episode Listen Later Jul 3, 2017 47:35


Will Frank ever update his apps in the app store? When, how, and what should go into that super important app update that your users are expecting? We investigate what has changed for us and the app stores in the last year. How about open source projects and libraries? Do these fall into the same criteria of an "app release"? We also take a look at how Apple, Google, and awesome CI services have helped us be more successful. Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Music : Amethyst Seer - Citrine by Adventureface Proudly recorded on Zencastr

Merge Conflict
Merge Conflict 52: Bug Fixes and Improvements

Merge Conflict

Play Episode Listen Later Jul 3, 2017 47:35


Will Frank ever update his apps in the app store? When, how, and what should go into that super important app update that your users are expecting? We investigate what has changed for us and the app stores in the last year. How about open source projects and libraries? Do these fall into the same criteria of an "app release"? We also take a look at how Apple, Google, and awesome CI services have helped us be more successful. Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Music : Amethyst Seer - Citrine by Adventureface Proudly recorded on Zencastr

soundbite.fm: a podcast network
Merge Conflict 42: Code Coverage == Quality

soundbite.fm: a podcast network

Play Episode Listen Later Apr 24, 2017 44:44


It's everyone's favorite topic... TESTING! That's right we tackle the world of unit testing, code coverage, user interface testing, acceptance testing, and so much more. Even though Frank and James are solo developers and there may not always be time to write a full suite of unit tests, that doesn't mean they don't dream of 100% code coverage. What does that even mean though? Does that mean your app will be flawless? Where do you get started and what should be your goals? We discuss on this week's show. Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Music : Amethyst Seer - Citrine by Adventureface Proudly recorded on Zencastr

Merge Conflict
Merge Conflict 42: Code Coverage == Quality

Merge Conflict

Play Episode Listen Later Apr 24, 2017 44:44


It's everyone's favorite topic... TESTING! That's right we tackle the world of unit testing, code coverage, user interface testing, acceptance testing, and so much more. Even though Frank and James are solo developers and there may not always be time to write a full suite of unit tests, that doesn't mean they don't dream of 100% code coverage. What does that even mean though? Does that mean your app will be flawless? Where do you get started and what should be your goals? We discuss on this week's show. Follow Us Frank: Twitter, Blog, GitHub James: Twitter, Blog, GitHub Music : Amethyst Seer - Citrine by Adventureface Proudly recorded on Zencastr

Podlodka Podcast
Podlodka #4 - Mutation Testing

Podlodka Podcast

Play Episode Listen Later Mar 27, 2017 109:19


If you have ever asked yourself "who tests my tests?", this episode is definitely for you. Together we grill our guest, Alexey Denisov, about what mutation testing is and try to figure out how to fit it into the software development process. And of course we discuss Mull, a tool for creating and shooting down mutants for LLVM.

Contents:
- 00:00 - Greetings
- 00:58 - Meeting this episode's guest, Alexey Denisov
- 06:10 - About Code Coverage
- 21:35 - What mutation testing is
- 39:10 - How long mutation testing takes
- 43:00 - Optimizing mutation testing
- 47:30 - LLVM
- 49:38 - Using Mull in iOS development
- 59:18 - Continuous Mutation Testing
- 01:05:14 - Plans for the development of Mull
- 1:14:00 - Latest news: Android O, Apple Clips
- 1:24:55 - Answers to listener questions from our chat: the software the hosts use, and the career ladder once again

Useful links:
- Alexey Denisov's blog https://lowlevelbits.org
- Talk on mutation testing at FOSDEM https://www.youtube.com/watch?v=YEgiyiICkpQ
- The Mull repository https://github.com/mull-project/mull
- LLVM-based Mutation Testing System. Request For Comments http://lowlevelbits.org/llvm-based-mutation-testing-system/
- The new Android O version https://tproger.ru/news/android-o-developer-preview/
- The Apple Clips app http://www.apple.com/clips/

Fatal Error
9. Getting Started with Testing

Fatal Error

Play Episode Listen Later Nov 21, 2016 35:31


Today on Fatal Error: a crash course on a bunch of useful concepts for testing iOS apps in Swift. Automated Tests as Documentation; Code Coverage in Xcode; Danger CI & Fastlane; View Models: see episodes 2 and 3; Dependency Injection; Mocking Classes You Don't Own; Protocol-Oriented Programming in Swift (WWDC 2015); Don't mock what you don't own; Screenshot testing: Facebook's SnapshotTestCase, objc.io article; Working Effectively with Legacy Code by Michael Feathers; Testing, for people who hate testing; OHHTTPStubs; OCMock. Other links Chris likes, which we didn't discuss in this episode: 5 Questions Every Unit Test Must Answer; Mocks Aren't Stubs; When is it safe to introduce test doubles?; Test Isolation is about Avoiding Mocks; Chris's Pinboard on Testing
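The dependency injection and mocking ideas in that list fit in a few lines of code. A minimal sketch, in TypeScript rather than Swift and with invented names (`UserClient`, `GreetingViewModel`): inject an abstract dependency so a test can substitute a hand-rolled fake instead of the real network client.

```typescript
// The view model depends on an interface, not on a concrete HTTP client,
// so tests can inject a fake, echoing the "don't mock what you don't own" idea.

interface UserClient {
  fetchName(id: number): Promise<string>;
}

class GreetingViewModel {
  constructor(private client: UserClient) {}

  async greeting(id: number): Promise<string> {
    const name = await this.client.fetchName(id);
    return `Hello, ${name}!`;
  }
}

// Test double standing in for the real network-backed client.
const fakeClient: UserClient = {
  fetchName: async () => "Ada",
};

// A quick check without any networking or mocking framework.
new GreetingViewModel(fakeClient)
  .greeting(1)
  .then((g) => console.assert(g === "Hello, Ada!", "unexpected greeting"));
```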

Code Coverage - Salesforce Developer Podcast
Episode 33 — Scott Wells author of Illuminated Cloud

Code Coverage - Salesforce Developer Podcast

Play Episode Listen Later Oct 1, 2016 54:37


In this episode of Code Coverage we talk to Scott Wells, the author of the Illuminated Cloud IDE for Salesforce. Illuminated Cloud is an alternative IDE built on IntelliJ, and it sports many features that make life far easier for developers on the platform. Topics we discuss include: how Scott got started building Illuminated Cloud; the feature set of Illuminated Cloud; how he implemented the debugger; weirdness with APIs on the platform; and Dreamforce! Illuminated Cloud sessions at Dreamforce: Tuesday Morning - Chris Fellows - Illuminated Cloud & Lightning Components; Tuesday Afternoon, 5:00pm - Scott Wells - Live Demo of Illuminated Cloud

3 Minutes with Kent
How JavaScript Code Coverage Works

3 Minutes with Kent

Play Episode Listen Later Apr 6, 2016 2:53


Just a quick explanation of how tools instrument your JavaScript code to record and report code coverage. Common tools: istanbul (http://npm.im/istanbul) or nyc (http://npm.im/nyc) with ava (http://npm.im/ava) are great for coverage of node-running tests (use jsdom (http://npm.im/jsdom) to mock a DOM like I do (http://kcd.im/react-ava-repo)). babel-plugin-__coverage__ (http://npm.im/babel-plugin-__coverage__) and karma-coverage (http://npm.im/karma-coverage) with karma (http://npm.im/karma) are great for coverage of code when you want to run your tests in a browser. isparta (http://npm.im/isparta) is no longer maintained; try the babel plugin instead. You may also be interested in this video (https://youtu.be/P-1ZZkpEmQA) and this repo (https://github.com/kentcdodds/random-user-coverage) where I go through setting up code coverage with ES6, Webpack, and Karma. Just replace where I use isparta with babel-plugin-__coverage__ (like this (https://github.com/kentcdodds/es6-todomvc/blob/0c1b0afe31bad4713c51f6485cd616730e4ede8b/.babelrc)) and it should work great.
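To make "instrument your JavaScript code" concrete: a coverage tool rewrites each function so that every statement and branch increments a counter, runs your tests, and reports any counter left at zero. This is a simplified TypeScript sketch with invented counter names, not Istanbul's actual generated output:

```typescript
// Rough idea of what a coverage tool's instrumentation does: it rewrites your
// code so every statement and branch bumps a counter, then reports the counts.

// Hypothetical counters a tool might generate for one file (names invented).
const coverage = {
  statements: { s0: 0, s1: 0, s2: 0 },
  branches: { b0: [0, 0] }, // [if-taken, else-taken]
};

// Original: function isAdult(age) { if (age >= 18) return true; return false; }
// Instrumented equivalent:
function isAdult(age: number): boolean {
  coverage.statements.s0++;
  if (age >= 18) {
    coverage.branches.b0[0]++;
    coverage.statements.s1++;
    return true;
  }
  coverage.branches.b0[1]++;
  coverage.statements.s2++;
  return false;
}

// "Run the tests", then report: any counter still at zero is uncovered code.
isAdult(30);
console.log(coverage); // branches.b0[1] === 0 -> the else path was never tested
```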

Ruby Rogues
229 RR Adopting New Technology

Ruby Rogues

Play Episode Listen Later Oct 14, 2015 60:43


When is it worthwhile to introduce a new language, tool, or database? And when will it likely bite you in the rear end? 02:43 - Episode Idea Background PolyConf @polyconfhq 04:28 - Implementing Standards and Comparisons Minimize Entry Level / Maximizing Payoff 08:23 - “Dumb Code” and Developer Expectations 10:48 - Code Coverage and Regular Expressions Oniguruma Fizz Buzz Ruby Rogues Episode #120: RR Book Club: Understanding Computation with Tom Stuart 12:49 - Risk Impact/Probability Chart, Risk-Reward Matrix 24:01 - Collaboration, Communication => Constraint Responsibility 30:36 - Bringing It In: Process Databases Demille Quote 38:48 - Why would you want to switch databases and when is it worth it? Eliminating a Technology Peter Seibel: Let a 1,000 flowers bloom. Then rip 999 of them out by the roots. Internal vs External Motivation Redis vs Memcache 46:06 - Success Cases Abstraction Picks OS4W: Open Source for Women (Coraline) Contributor Covenant (Coraline) Camille Fournier: Hopelessness and Confidence in Distributed Systems Design (Jessica) Abby Bobé: From Protesting to Programming: Becoming a Tech Activist (Jessica) Rails Remote Conf (Chuck) TV Fool (Chuck)

Theory and Craft
Software Testing and Test Effectiveness

Theory and Craft

Play Episode Listen Later Aug 23, 2015 42:50


We look at how and how not to use code coverage, and discuss a variety of topics around software testing.

Für's Protokoll
WWDC 2015

Für's Protokoll

Play Episode Listen Later Jun 12, 2015 12:50


Apple WWDC 2015. A quick overview of the topics from WWDC 2015: Swift, watchOS, iOS, Xcode, and OS X. Links for this episode: WWDC 2015, tweet about Code Coverage from @designatednerd. https://www.protokollcast.de/23-wwdc-2015

Code Coverage - Salesforce Developer Podcast
Episode 22 - Eliot Harper and ET/Marketing Cloud Journey Builder

Code Coverage - Salesforce Developer Podcast

Play Episode Listen Later Jun 11, 2015 52:13


In this episode of Code Coverage we talk to Eliot Harper (@eliotharper), CTO at Digital Logic, a marketing services company based in Melbourne, Australia. Eliot describes himself as a customer journey practitioner and is the author of the Journey Builder Developer's Guide (http://amzn.to/1a9gWNo), an in-depth book on customer journey integration with Salesforce Marketing Cloud. Eliot orients us to Journey Builder development, covering: * A brief orientation to Salesforce Marketing Cloud * The Customer Journey Revolution * Journey Builder * Customer Journey Applications * Interaction Components * Workflow Document Format (WDF) * Custom Activities

Code Coverage - Salesforce Developer Podcast
Episode 16 - Kevin O'Hara on Open Source Development, NodeJS and nForce

Code Coverage - Salesforce Developer Podcast

Play Episode Listen Later Dec 18, 2014 42:54


In this episode of Code Coverage we hear from Kevin O'Hara on why he loves open source development. In particular, we discuss what's coming up in nForce, what working on open source is like, and an exciting new developer tool that he's working on.

This Agile Life
Episode 67: Death by Charts (Code Coverage)

This Agile Life

Play Episode Listen Later Nov 23, 2014 51:26


The holy practice of measuring code coverage.

Zend Screencasts: Video Tutorials about the Zend PHP Framework  (iphone)
Unit Testing with the Zend Framework with Zend_Test and PHPUnit

Zend Screencasts: Video Tutorials about the Zend PHP Framework (iphone)

Play Episode Listen Later Jun 11, 2009


I have to preface this video by saying that I’m still a bit of a novice when it comes to unit testing (especially in Zend). Also, I feel that I wouldn’t be able to take credit for the whole implementation. Here are some great resources on unit testing in the Zend Framework to beef up…

Hanselminutes - Fresh Talk and Tech for Developers
Quetzal Bradley on Testing after Unit Tests

Hanselminutes - Fresh Talk and Tech for Developers

Play Episode Listen Later Mar 7, 2008 28:16


In this episode, Scott talks with Quetzal Bradley, a Microsoft developer on the Connected Systems Architecture Team, about testing after unit tests. Is 100% code coverage enough?
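A quick illustration of why the answer tends to be "no": coverage records which lines ran, not what was checked. In this made-up example (not from the episode), an assertion-free test drives every line of the function, so a coverage report shows 100%, yet the test would pass even if the implementation were completely wrong:

```typescript
// Coverage measures execution, not verification.
function applyTax(net: number, rate: number): number {
  return net * (1 + rate);
}

// An assertion-free "test": full coverage, zero verification.
function testApplyTax(): void {
  applyTax(100, 0.2); // the result is never compared against an expected value
}

testApplyTax();
console.log("tests passed"); // prints no matter what applyTax returns
```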