Podcasts about test-driven development

Software design using test cases

  • 174 PODCASTS
  • 356 EPISODES
  • 52m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Mar 18, 2025 LATEST
test-driven development

POPULARITY (2017–2024)


Best podcasts about test-driven development

Latest podcast episodes about test-driven development

Dev Interrupted
Will AI Finally Make TDD Practical? | Diffblue's Animesh Mishra

Mar 18, 2025 · 46:04 · Transcription Available


The promise of Test-Driven Development (TDD) remains unfulfilled. Like many other forms of aspirational development, the practice has fallen victim to countless buzzword cycles. What if the answer is already in our toolbox?

This week, host Andrew Zigler sits down with Animesh Mishra, Senior Solutions Engineer at Diffblue, to unpack the gap between TDD's theoretical appeal and its practical challenges. Animesh draws from his extensive experience to explain how deterministic AI can address the key challenge of building trust in AI for testing. These aren't the LLMs of today, but foundational machine learning models that can evaluate all possible branches of a piece of code to write test coverage for it. Imagine writing two years' worth of tests for a legacy codebase… in two hours… with no errors!

If you enjoyed this conversation about the gaps between theory and execution in engineering culture, be sure to check out last week's chat with David Mytton about shift-left adoption by engineering teams.

Check out:
  • Translating DevEx to the Board Beyond the DORA Frameworks
  • Introducing AI-Powered Code Review with gitStream

Follow the hosts: Follow Ben | Follow Andrew

Follow today's guest(s): www.diffblue.com | X: diffbluehq | LinkedIn: Diffblue | Animesh Mishra

Support the show: Subscribe to our Substack | Leave us a review | Subscribe on YouTube | Follow us on Twitter or LinkedIn

Offers: Learn about Continuous Merge with gitStream | Get your DORA Metrics free forever
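The idea of analysing every branch of a function and covering each one with a test is easy to picture. Here is a minimal, hypothetical sketch in plain pytest (illustrative only, not Diffblue's output; the `discount` function is invented for this example):

```python
# discount.py
def discount(total: float, is_member: bool) -> float:
    """Apply a 10% discount for members; non-members pay full price."""
    if is_member:
        return round(total * 0.9, 2)
    return total

# test_discount.py -- one test per branch, so every path is exercised
def test_member_gets_ten_percent_off():
    assert discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert discount(100.0, is_member=False) == 100.0
```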

No Nonsense Podcast
#0118 - Test Driven Development with Bryan Finster

Mar 17, 2025 · 54:16


In this episode, we talk to Bryan Finster about Test-Driven Development. We discuss what TDD is and why it's essential for high-speed, high-quality software engineering. Bryan criticizes independent QA teams, and we explore how integrated cross-functional teams provide the rapid feedback you need to develop high-quality code. We also discuss how to use value stream mapping to show your leaders the need to make a change. Tune in to learn all about TDD and BDD from an expert; a small illustration of the two testing styles follows below.

Listen to the podcast on your favourite podcast app: Spotify | Apple Podcasts | Google Podcasts | iHeart Radio | PlayerFM | Amazon Music | Listen Notes | TuneIn | Audible | Podchaser | Deezer | Podcast Addict

Contact Murray on LinkedIn or via email.
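As a concrete taste of the two styles the episode covers, here is a minimal, hypothetical sketch: one terse TDD-style unit test, then the same behaviour narrated with BDD's Given/When/Then wording as comments (pytest; the `Account` class is invented for illustration, not taken from the episode):

```python
class Account:
    """Toy domain object, invented for this illustration."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount: int) -> None:
        self.balance += amount

# TDD style: a terse, example-based unit test.
def test_deposit_increases_balance():
    account = Account()
    account.deposit(50)
    assert account.balance == 50

# BDD style: the same check, phrased as behaviour.
def test_depositing_money_grows_the_balance():
    # Given a fresh account with a zero balance
    account = Account()
    # When the customer deposits 50
    account.deposit(50)
    # Then the balance shows 50
    assert account.balance == 50
```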

Heavybit Podcast Network: Master Feed
Ep. #16, Test-Driven Development Demystified with Jon Jagger

Feb 20, 2025 · 41:22


In episode 16 of How It's Tested, Eden speaks with Jon Jagger, Director of Software at Kosli. The conversation dives into Jon's journey of creating Cyber-Dojo, his insights on test-driven development (TDD), and how software testing practices have evolved over the years. They also discuss Jon's current role at Kosli, the philosophy behind effective testing, and how regulated industries like banking can benefit from modern compliance practices.

The Real Python Podcast
Behavior-Driven vs Test-Driven Development & Using Regex in Python

Feb 14, 2025 · 57:03


What is behavior-driven development, and how does it work alongside test-driven development? How do you communicate requirements between teams in an organization? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
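To ground the regex half of the conversation, here is a minimal, hypothetical sketch of a Python regex from the standard `re` module, wrapped in test-first-style checks (the pattern and names are invented for illustration, not from the episode):

```python
import re

# A simple pattern for ISO dates like "2025-02-14" (illustrative only).
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def is_iso_date(text: str) -> bool:
    """Return True if text looks like an ISO-8601 calendar date."""
    return bool(ISO_DATE.match(text))

def test_accepts_iso_date():
    assert is_iso_date("2025-02-14")

def test_rejects_other_formats():
    assert not is_iso_date("14/02/2025")
```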

.NET in pillole
266 - Test Driven Development, investimento o costo?

Nov 11, 2024 · 11:56


We wrap up our journey through the world of testing by talking about TDD (Test-Driven Development), an Extreme Programming practice based on writing the tests first and the code afterwards.

https://martinfowler.com/bliki/TestDrivenDevelopment.html
https://learn.microsoft.com/en-us/visualstudio/test/quick-start-test-driven-development-with-test-explorer?view=vs-2022
https://en.wikipedia.org/wiki/Test-driven_development
Unit Testing book - https://amzn.to/4hztiBL

#tdd #unittesting #dotnet
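The test-first loop the episode describes (red, green, refactor) fits in a few lines. A minimal sketch in Python with pytest (the FizzBuzz-style kata is illustrative, not from the episode):

```python
# Step 1 (red): write the failing tests first.
def test_fizz_for_multiples_of_three():
    assert fizzbuzz(9) == "Fizz"

def test_number_passes_through_otherwise():
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the tests pass.
def fizzbuzz(n: int) -> str:
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Step 3 (refactor): clean up freely, with the tests as a safety net.
```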

The Agile Embedded Podcast
Exploring Rust for Embedded Systems with Philip Markgraf

Oct 30, 2024 · 50:20


In this episode of the Agile Embedded Podcast, hosts Jeff Gable and Luca Ingianni are joined by Philip Markgraf, an experienced software developer and technical leader, to discuss the use of Rust in embedded systems. Philip shares his background in C/C++ development, his journey with Rust, and the advantages he discovered while using it in a large development project. The conversation touches on memory safety, efficient resource management, the benefits of Rust's type system, and the supportive Rust community. They also explore the practical considerations for adopting Rust, including its tooling, ecosystem, and applicability to Agile development. The episode concludes with Philip offering resources for learning Rust and connecting with its community.

Chapters:
00:00 Introduction and Guest Welcome
00:26 Philip's Journey with Rust
01:01 The Evolution of Programming Languages
02:27 Evaluating Programming Languages for Embedded Systems
06:13 Adopting Rust for a Green Energy Project
08:57 Benefits of Using Rust
11:24 Rust's Memory Management and Borrow Checker
15:50 Comparing Rust and C/C++
19:32 Industry Trends and Future of Rust
22:30 Rust in Cloud Computing and Embedded Systems
23:11 Vendor-Supplied Driver Support and ARM Processors
24:09 Open Source Hardware Abstraction Libraries
25:52 Advantages of Rust's Memory Model
29:32 Test-Driven Development in Rust
30:35 Refactoring and Tooling in Rust
31:14 Simplicity and Coding Standards in Rust
32:14 Error Messages and Linting Tools
33:32 Sustainable Pace and Developer Satisfaction
36:15 Adoption and Transition to Rust
39:37 Hiring Rust Developers
42:23 Conclusion and Resources

Resources:
  • Phil's LinkedIn
  • The Rust Language
  • Rust chat rooms (at the Awesome Embedded Rust Resources List)
  • The Ferrocene functional-safety qualified Rust compiler

You can find Jeff at https://jeffgable.com. You can find Luca at https://luca.engineer. Want to join the Agile Embedded Slack? Click here.

Azure DevOps Podcast
Kent Beck: Tidy First - Episode 314

Sep 9, 2024 · 39:29


Kent Beck is an original signer of the Agile Manifesto, author of the Extreme Programming book series, rediscoverer of Test-Driven Development, and an inspiring keynote speaker. I read his TDD book 20 years ago.

Topics of Discussion:
[3:46] What led Kent to extreme programming?
[7:52] What critical practices have stood the test of time?
[10:58] The role of software design in Agile development.
[13:11] The inspiration behind Tidy First?
[16:16] Why software design is both a critical skill and an exercise in human relationships.
[22:05] What is "normalizing symmetry"?
[25:04] Empirical design.
[28:09] Design changes tend to be reversible.
[30:41] Experimentation with the GPT phase of AI on publications.
[35:13] Advice for young developers and programmers.

Mentioned in this Episode:
  • Clear Measure Way
  • Architect Forum
  • Software Engineer Forum
  • Programming with Palermo — New Video Podcast! Email us at programming@palermo.net.
  • Clear Measure, Inc. (Sponsor)
  • .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon!
  • Jeffrey Palermo's Twitter — Follow to stay informed about future events!
  • KentBeck.com
  • Tidy First?
  • Test-Driven Development
  • Extreme Programming Explained
  • Implementation Patterns

Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.

Book Overflow
"Working Effectively with Legacy Code" by Michael Feathers (Part 1)

Jul 15, 2024 · 82:02


Carter Morgan and Nathan Toups read and discuss the first half of "Working Effectively with Legacy Code" by Michael Feathers. Join them as they reflect on dependency inversion, the importance of interfaces, and continue their never-ending debate on the pros and cons of Test-Driven Development! (The audio gets a little de-synced in the last three minutes. Carter isn't talking over Nathan on purpose!)

Chapter markers:
00:00 Intro
04:51 Thoughts on the book
10:54 Defining Legacy Code
21:53 Quick Break: Pull Requests
22:38 How to change software
44:30 Quick Break: CI/CD
45:15 Testing Legacy Code
1:15:10 Quick Break: Linting
1:16:01 Closing Thoughts
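The "dependency inversion plus interfaces" idea the hosts discuss is the classic way to get legacy code under test: give the code a seam so a test can substitute a collaborator. A minimal, hypothetical Python sketch (invented for illustration, not an excerpt from the book):

```python
class Clock:
    """Production collaborator: reads the real system clock."""
    def now_hour(self) -> int:
        from datetime import datetime
        return datetime.now().hour

class Greeter:
    # The clock is injected, so a test can pass a fake instead of the
    # class reaching out to the real system clock on its own.
    def __init__(self, clock=None):
        self.clock = clock or Clock()

    def greeting(self) -> str:
        return "Good morning" if self.clock.now_hour() < 12 else "Good afternoon"

class FakeClock:
    """Test double that always reports a fixed hour."""
    def __init__(self, hour: int):
        self.hour = hour
    def now_hour(self) -> int:
        return self.hour

def test_morning_greeting():
    assert Greeter(FakeClock(9)).greeting() == "Good morning"
```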

Engineering Kiosk
#126 Killing the Mutant: Teststrategien mit Sebastian Bergmann

Jun 4, 2024 · 79:39


Testing is not just testing: a deep dive with Sebastian Bergmann.

Many software developers know about unit tests. Some write unit tests as they develop. Few truly practice test-driven development. And unit testing is only where the topic of testing begins. What about static testing, non-functional testing, white-box testing, end-to-end testing, dynamic testing, or integration testing? And have you ever heard of mutation testing?

That's quite a pile of buzzwords, and we haven't even answered the questions of what good tests actually are, how many tests are enough, how AI can help us write better tests, or whether testing is a modern invention or has played a role since the dawn of programming.

In this episode, Sebastian Bergmann gives us a full tour of the testing landscape.

Bonus: the Amiga scene lives on.
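Mutation testing, the "killing the mutant" of the episode title, means deliberately planting a small bug (a mutant) and checking that at least one test fails against it. A minimal, hypothetical Python illustration of the concept (real tools automate the mutating and reporting):

```python
def is_adult(age: int) -> bool:
    return age >= 18          # original code

def is_adult_mutant(age: int) -> bool:
    return age > 18           # mutant: ">=" mutated to ">"

# A test that "kills" the mutant: it passes against the original but
# would fail if the mutant replaced it, proving the suite actually
# checks the boundary. A suite with no test for age == 18 would let
# this mutant survive unnoticed.
def test_boundary_kills_the_mutant():
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False  # the mutant behaves differently
```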

Cup o' Go
Go, meet hugging face

May 31, 2024 · 61:44 · Transcription Available


Go 1.22.4 & 1.21.11 coming Tuesday, June 4Community eventsGolang Atlanta meetup, June 13Cup o' Go Meetup in Amsterdam, June 19Golang Tilburg meetup, June 20Proposal accepted and implemented: new iterator functions in maps package coming in 1.23Reddit: What software shouldn't you write in Go?Blog: Blazingly Fast Shadow Stacks for Go by Felix GeisendörfBlog: Abusing Go's infrastructure by Pedro VilaçaAd breakEpisode 15, interview with Adelina Simion about her book, Test-Driven Development in GoInterview with Riccardo PinosioHugging Facehugot on GitHubONNXKnights Analytics

Open at Intel
Empowering Developers with AI Tools

May 30, 2024 · 29:49


In this episode, we dive into Codium, an AI-powered coding platform designed to assist developers throughout the software development lifecycle, especially in testing, code review, and documentation. Dedy Kredo, one of Codium's co-founders, explains the unique features and benefits of the platform, comparing it to other tools like GitHub Copilot. The discussion also touches on Codium's adaptability for test-driven development and its flexible deployment options, highlighting the importance of security and configuration. Additionally, the significance of the Intel Ignite startup program and the impact of AI hype on Codium's rapid growth are discussed. Listeners will gain insights into Codium's open-core model and open-source projects, including the Alpha Codium research project.

Chapters:
00:00 Introduction
00:13 What is Codium?
01:35 Comparison with Other AI Coding Tools
03:01 Test-Driven Development and Codium
05:40 Customization and Configuration
08:17 Deployment Options and Security
11:11 Intel Ignite Program Experience
13:45 Impact of AI Hype on Business
17:02 AI-Assisted Development and Semi-Automation
17:43 Improving Code Quality and Productivity
18:33 Challenges and Opportunities in AI for Software Development
20:27 Adopting AI Tools in Development Teams
24:07 Open Source Projects and Community Engagement
28:11 Conclusion and Future Prospects

Guest: Dedy Kredo is the Co-Founder and Chief Product Officer of CodiumAI, leading the product and engineering teams to empower developers to build software faster and more accurately through the use of artificial and human intelligence. Before founding CodiumAI, he served as VP of Customer Facing Data Science at Explorium, where he built and led a talented data science team and played a key role in the company's growth from seed to series C. Previously, he was the founder of an online marketing startup, growing it from a bootstrapped venture to millions in revenue. Before that, he spent seven years in Colorado and California as a product line manager at VMware's management business unit. During this time, he worked closely with Fortune 500 companies and successfully launched several new products to market.

Software Engineering Radio - The Podcast for Professional Software Developers

Kent Beck, Chief Scientist at Mechanical Orchard, and inventor of Extreme Programming and Test-Driven Development, joins SE Radio host Giovanni Asproni for a conversation on software design based on his latest book "Tidy First?". The episode starts with exploring the reasons for writing the book, and introducing the concepts of tidying, cohesion, and coupling. It continues with a conversation about software design, and the impact of tidyings. Then Kent and Giovanni discuss how to balance design and code quality decisions with cost, value delivered, and other important aspects. The episode ends with some considerations on the impact of Artificial Intelligence on the software developer's job. Brought to you by IEEE Software and IEEE Computer Society.
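For a flavour of what "tidying" means in practice, here is a minimal, hypothetical before/after of one tidying Beck catalogs, the guard clause (a sketch invented for illustration, not an excerpt from the book):

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    paid: bool

def dispatch(order: Order) -> str:
    """Stand-in for the real shipping call (illustrative)."""
    return f"shipped {order.id}"

# Before: nested conditionals bury the main logic.
def ship_order_before(order):
    if order is not None:
        if order.paid:
            return dispatch(order)
    return None

# After the guard-clause tidying: preconditions exit early,
# leaving the happy path unindented and easier to read.
def ship_order_after(order):
    if order is None:
        return None
    if not order.paid:
        return None
    return dispatch(order)
```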

Agile Innovation Leaders
(S4) E039 Luke Hohmann on Creating Sustainably Profitable Software-Enabled Solutions

Apr 28, 2024 · 70:50


Bio

Luke Hohmann is Chief Innovation Officer of Applied Frameworks. Applied Frameworks helps companies create more profitable software-enabled solutions. A serial entrepreneur, Luke founded, bootstrapped, and sold the SaaS B2B collaboration software company Conteneo to Scaled Agile, Inc. Conteneo's Weave platform is now part of SAFe Studio. A SAFe® Fellow, prolific author, and trailblazing innovator, Luke's contributions to the global agile community include contributing to SAFe, five books, Profit Streams™, Innovation Games®, Participatory Budgeting at enterprise scale, and a pattern language for market-driven roadmapping. Luke is also co-founder of Every Voice Engaged Foundation, where he partnered with The Kettering Foundation to create Common Ground for Action, the world's first scalable platform for deliberative decision-making. Luke is a former National Junior Pairs Figure Skating Champion and has an M.S.E. in Computer Science and Engineering from the University of Michigan. Luke loves his wife and four kids, his wife's cooking, and long runs in the California sunshine and Santa Cruz mountains.

Interview Highlights

01:30 Organisational Behaviour & Cognitive Psychology
06:10 Serendipity
09:30 Entrepreneurship
16:15 Applied Frameworks
20:00 Sustainability
20:45 Software Profit Streams
23:00 Business Model Canvas
24:00 Value Proposition Canvas
24:45 Setting the Price
28:45 Customer Benefit Analysis
34:00 Participatory Budgeting
36:00 Value Stream Funding
37:30 The Color of Money
42:00 Private v Public Sector
49:00 ROI Analysis
51:00 Innovation Accounting

Connecting

LinkedIn: Luke Hohmann on LinkedIn
Company Website: Applied Frameworks

Books & Resources

  • Software Profit Streams™: A Guide to Designing a Sustainably Profitable Business, Jason Tanner, Luke Hohmann, Federico González
  • Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers (The Strategyzer series), Alexander Osterwalder, Yves Pigneur
  • Value Proposition Design: How to Create Products and Services Customers Want (The Strategyzer series), Alexander Osterwalder, Yves Pigneur, Gregory Bernarda, Alan Smith, Trish Papadakos
  • Innovation Games: Creating Breakthrough Products Through Collaborative Play, Luke Hohmann
  • The 'Color of Money' Problem: Additional Guidance on Participatory Budgeting, Scaled Agile Framework
  • The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, Eric Ries
  • Extreme Programming Explained: Embrace Change (2nd Edition), Kent Beck, Cynthia Andres
  • The Mythical Man-Month: Essays on Software Engineering, Frederick Phillips Brooks
  • Understanding Comics: The Invisible Art, Scott McCloud
  • Ponyboy: A Novel, Eliot Duncan
  • Lessons in Chemistry: A Novel, Bonnie Garmus
  • What Happened to You?: Conversations on Trauma, Resilience, and Healing, Oprah Winfrey, Bruce D. Perry
  • Training | Applied Frameworks

Episode Transcript

Intro: Hello and welcome to the Agile Innovation Leaders podcast. I'm Ula Ojiaku. On this podcast I speak with world-class leaders and doers about themselves and a variety of topics spanning Agile, Lean Innovation, Business, Leadership and much more – with actionable takeaways for you the listener.
Ula Ojiaku
So I have with me Luke Hohmann, who is a four-time author, three-time founder, serial entrepreneur if I say, a SAFe Fellow, so that's a Scaled Agile Framework Fellow, keynote speaker and an internationally recognised expert in Agile software development. He is also a proud husband and a father of four. So, Luke, I am very honoured to have you on the Agile Innovation Leaders podcast. Thank you for making the time.

Luke Hohmann
Thank you so much for having me, I'm very happy to be here, and hi everyone who's listening.

Ula Ojiaku
Yes, I'm sure they're waving back at you as well. I always start my conversations with my guests to find out about them as individuals, you know, so who is Luke? You have a BSc in Computer Science and an MSc in Computer Science and Engineering, but you also studied Cognitive Psychology and Organisational Behaviour in addition to Data Structures and Artificial Intelligence. AI is now making waves and is kind of at the forefront, which is interesting, you had the foresight to also look into these. So my question is, what took you down this path?

Luke Hohmann
Sure. I had a humble beginning in the world of technology. I worked for a large company, Electronic Data Systems, and it was founded in the mid 60s by a gentleman named Ross Perot, and it became a very, very large company. So my first job at Electronic Data Systems was working in a data centre, and we know what data centres are, but back then, data centres were different because they were predominantly mainframe-based data centres, and I would crawl underneath the floor, cabling the computers and cabling networking equipment. Now, when we think networking, we're really thinking one of two kinds of networking. We think of wireless networking or we think of some form of internet networking, but back in those days, there were varieties of network protocols, literally the standards that we use now weren't invented yet. So it was mainframe networking protocols and dial-ups and other forms of networking protocols. From there, I worked my way from beneath the ground up. I had some great managers who saw someone who was worthy of opportunity and they gave me opportunity and it was great.

And then eventually I started working in Electronic Data Systems and there was, the first wave of AI came in the mid 80s and that's when we were doing things like building expert systems, and I managed to create with a colleague of mine, who's emerged as my best friend, a very successful implementation of an expert system, an AI-based expert system at EDS, and that motivated me to finish off my college degree, I didn't have my college degree at the time. So EDS supported me in going to the University of Michigan, where as you said, I picked up my Bachelor's and Master's degree, and my advisor at the time was Elliot Soloway, and he was doing research in how programmers program, what are the knowledge structures, what are the ways in which we think when we're programming, and I picked up that research and built programming environments, along with educational material, trying to understand how programmers program and trying to build educational material to teach programming more effectively. That's important because it ignited a lifelong passion for developing education materials, etc. Now the cognitive psychology part was handled through that vein of work, the organisational behaviour work came as I was a student at Michigan.
As many of us are when we're in college, we don't make a lot of money, or at university we're not wealthy and I needed a job and so the School of Organisational Behaviour had published some job postings and they needed programmers to program software for their organisational behaviour research, and I answered those ads and I became friends and did the research for many ground-breaking aspects of organisational behaviour and I programmed, and in the process of programming for the professors who were in the School of Organisational Behaviour they would teach me about organisational behaviour and I learned many things that at the time were not entirely clear to me, but then when I graduated from university and I became a manager and I also became more involved in the Agile movement, I had a very deep foundation that has served me very well in terms of what do we mean when we say culture, or what do we mean when we talk about organisational structures, both in the small and in the large, how do we organise effectively, when should we scale, when should we not scale, etc. So that's a bit about my history that I think in terms of the early days helped inform who I am today.

Ula Ojiaku
Wow, who would have thought, it just reminds me of the word serendipity, you know, I guess a happy coincidence, quote unquote, and would there be examples of where the cognitive psychology part of it also helped you work-wise?

Luke Hohmann
Yeah, a way to think about cognitive psychology and the branch that, I mean there's, psychology is a huge branch of study, right? So cognitive psychology tends to relate to how do we solve problems, and it tends to focus on problem solving where n = 1 and what I mean by n is the number of participants, and where n is just me as an individual, how do I solve the problems that I'm facing? How do I engage in de-compositional activities or refinement or sense making? Organisational behaviour deals with n > 1. So it can deal with a team of, a para-bond, two people solving problems. It can deal with a small team, and we know through many, many, many decades of research that optimal team structures are eight people or less. I mean, we've known this for, when I say decades I mean millennia. When you look at military structure and military strategy, we know that people need to be organised into much smaller groups to be effective in problem solving and to move quickly. And then in any organisational structure, there's some notion of a team of teams or team engagement.

So cognitive psychology, I think, helps leaders understand individuals and their place within the team. And now we talk about, you know, in the Agile community, we talk about things like, I want T-shaped people, I want people with common skills and their area of expertise and by organising enough of the T's, I can create a whole and complete team. I often say I don't want my database designer designing my user interface and I don't want my user interface designer optimising my back end database queries, they're different skills. They're very educated people, they're very sophisticated, but there's also the natural feeling that you and I have about how do I gain a sense of self, how do I gain a sense of accomplishment, a sense of mastery? Part of gaining a sense of mastery is understanding who you are as a person, what you're good at. In Japanese, they would call that Ikigai, right, what are the intersections of, you know, what do I love, what am I good at, what can I make a living at and what do people need, right?
All of these intersections occur on an individual level, and then by understanding that we can create more effective teams.

Ula Ojiaku
Thank you. I've really learned something key here, the relationship between cognitive psychology and organisational behaviour, so thanks for breaking it down. Now, can we go quickly to your entrepreneurship? So, you've started a company three times and you've been successful in that area. What exactly drives you when it comes to establishing businesses and then knowing when to move on?

Luke Hohmann
Sure. I think it's a combination of reflecting on my childhood and then looking at how that informs someone when they're older, and then opportunities, like you said, serendipity, I think that's a really powerful word that you introduced and it's a really powerful concept because sometimes the serendipity is associated with just allowing yourself to pursue something that presents itself. But when I was young, my father died and my mum had to raise six kids on her own, so my dad died when I was four, my mum raised six kids on her own. We were not a wealthy family, and she was a school teacher and one of the things that happened was, even though she was a very skilled school teacher, there were budget cuts and it was a unionised structure, and even though she was ranked very highly, she lost her job because she was low on the hiring totem pole in terms of how the union worked. It was very hard and of course, it's always hard to make budget cuts and firing but I remember when I was very young making one of those choices saying, I want to work in a field where we are more oriented towards someone's performance and not oriented on when they were hired, or the colour of their skin, or their gender or other things that to me didn't make sense that people were making decisions against. And while it's not a perfect field for sure, and we've got lots of improvement, engineering in general, and of course software engineering and software development spoke to me because I could meet people who were diverse or more diverse than in other fields and I thought that was really good.

In terms of being an entrepreneur, that happened serendipitously. I was at the time, before I became an entrepreneur in my last job, was working for an Israeli security firm, and years and years ago, I used to do software anti-piracy and software security through physical dongles. This was made by a company called Aladdin Knowledge Systems in Israel, and I was the head of Engineering and Product Management for the dongle group and then I moved into a role of Business Development for the company. I had a couple of great bosses, but I also learned how to do international management because I had development teams in Israel, I had development teams in Munich, I had development teams in Portland, Oregon, and in the Bay Area, and this was in the 2000s. This is kind of pre-Agile, pre-Salt Lake City, pre-Agile Manifesto, but we were figuring things out and blending and working together. I thought things were going pretty well and I enjoyed working for the Israelis and what we were doing, but then we had the first Gulf War and my wife and I felt that maybe traveling as I was, we weren't sure what was going to happen in the war, I should choose something different. Unfortunately, by that time, we had been through the dot-bomb crisis in Silicon Valley. So it's about 2002 at the time that this was going on, and there really weren't jobs, it was a very weird time in Silicon Valley.
So in late 2002, I sent an email to a bunch of friends and I said, hey, I'm going to be a consultant, who wants to hire me, that was my marketing plan, not very clever, and someone called me and said, hey, I've got a problem and this is the kind of thing that you can fix, come consult with us. And I said, great. So I did that, and that started the cleverly named Luke Hohmann Consulting, but then one thing led to another and consulting led to opportunities and growth and I've never looked back. So I think that there is a myth about people who start companies where sometimes you have a plan and you go execute your plan. Sometimes you find the problem and you're solving a problem. Sometimes the problem is your own problem, as in my case I had two small kids and a mortgage and I needed to provide for my family, and so the best way to do that at the time was to become a consultant. Since then I have engaged in building companies, sometimes some with more planning, some with more business tools and of course as you grow as an entrepreneur you learn skills that they didn't teach you in school, like marketing and pricing and business planning etc.

And so that's kind of how I got started, and now I have kind of come full circle. The last company, the second last company I started was Conteneo and we ended up selling that to Scaled Agile, and that's how I joined the Scaled Agile team and that was lovely, moving from a position of being a CEO and being responsible for certain things, to being able to be part of a team again, joining the framework team, working with Dean Leffingwell and other members of the framework team to evolve the SAFe framework, that was really lovely. And then of course you get this entrepreneurial itch and you want to do something else, and so I think it comes and goes and you kind of allow yourself those opportunities.

Ula Ojiaku
Wow, yours is an inspiring story. And so what are you now, so you've talked about your first two startups which you sold, what are you doing now?

Luke Hohmann
Yeah, so where I'm at right now is I am the Chief Innovation Officer for a company, Applied Frameworks. Applied Frameworks is a boutique consulting firm that's in a transition to a product company. So if this arm represents our product revenue and this arm represents our services revenue, we're expanding our product and eventually we'll become a product company. And so then the question is, well, what is the product that we're working on? Well, if you look at the Agile community, we've spent a lot of time creating and delivering value, and that's really great. We have had, if you look at the Agile community, we've had amazing support from our business counterparts. They've shovelled literally millions and millions of dollars into Agile training and Agile tooling and Agile transformations, and we've seen a lot of benefit from the Agile community. And when I say Agile, I don't mean SAFe or Scrum or some particular flavour of Agile, I just mean Agile in general. There's been hundreds of millions of dollars to billions of dollars shoved into Agile and we've created a lot of value for that investment.
We've got fewer bugs in our software because we've got so many teams doing XP driven practices like Test Driven Development, we've got faster response times because we've learned that we can create smaller releases and we've created infrastructure that lets us do deployments automatically, even if you're doing embedded systems, we figured out how to do over the air updates, we've figured out how to create infrastructure where the cars we're driving are now getting software updates. So we've created for our business leaders lots of value, but there's a problem in that value. Our business leaders now need us to create a profit, and creating value and creating a profit are two different things.

And so in the pursuit of value, we have allowed our Agile community to avoid and or atrophy on skills that are vital to product management, and I'm a classically trained Product Manager, so I've done market segmentation and market valuation and market sizing, I've done pricing, I've done licensing, I've done acquisitions, I've done compliance. But when you look at the traditional definition of a Product Owner, it's a very small subset of that, especially in certain Agile methods where Product Owners are team centric, they're internal centric. That's okay, I'm not criticising that structure, but what's happened is we've got people who no longer know how to price, how to package, how to license products, and we're seeing companies fail, investor money wasted, too much time trying to figure things out when if we had simply approached the problem with an analysis of not just what am I providing to you in terms of value, but what is that value worth, and how do I structure an exchange where I give you value and you give me money? And that's how businesses survive, and I think what's really interesting about this in terms of Agile is Agile is very intimately tied to sustainability. One of the drivers of the Agile Movement was way back in the 2000s, we were having very unsustainable practices. People would be working 60, 80, death march weeks of grinding out programmers and grinding out people, and part of the Agile Movement was saying, wait a minute, this isn't sustainable, and even the notion of what is a sustainable pace is really vital, but a company cannot sustain itself without a profit, and if we don't actually evolve the Agile community from value streams into profit streams, we can't help our businesses survive. I sometimes ask developers, I say, raise your hand if you're really embracing the idea that your job is to make more money for your company than they pay you, that's called a profit, and if that's not happening, your company's going to fail.

Ula Ojiaku
They'll be out of a job.

Luke Hohmann
You'll be out of a job. So if you want to be self-interested about your future, help your company be successful, help them make a profit, and so where I'm at right now is Applied Frameworks has, with my co-author, Jason Tanner, we have published a bold and breakthrough new book called Software Profit Streams, and it's a book that describes how to do pricing and packaging for software-enabled solutions. When we say software-enabled solution, we mean a solution that has software in it somehow, could be embedded software in your microwave oven, it could be a hosted solution, it could be an API for a payment processor, it could be the software in your car that I talked about earlier. So software-enabled solutions are the foundation, the fabric of our modern lives.
As Marc Andreessen says, software is eating the world, software is going to be in everything, and we need to know how to take the value that we are creating as engineers, as developers, and convert that into pricing and licensing choices that create sustainable profits.

Ula Ojiaku
Wow. It's as if you read my mind because I was going to ask you about your book, Software Profit Streams, A Guide to Designing a Sustainably Profitable Business. I also noticed that, you know, there is the Profit Stream Canvas that you and your co-author created. So let's assume I am a Product Manager and I've used this, let's assume I went down the path of using the Business Model Canvas and there is the Customer Value Proposition. So how do they complement?

Luke Hohmann
How do they all work together? I'm glad you asked that, I think that's a very insightful question and the reason it's so helpful is because, well partly because I'm also friends with Alex Osterwalder, I think he's a dear, he's a wonderful human, he's a dear friend. So let's look at the different elements of the different canvases, if you will, and why we think that this is needed. The Business Model Canvas is kind of how am I structuring my business itself, like what are my partners, my suppliers, my relationships, my channel strategy, my brand strategy with respect to my customer segments, and it includes elements of cost, which we're pretty good at. We're pretty good at knowing our costs and elements of revenue, but the key assumption of revenue, of course, is the selling price and the number of units sold. So, but if you look at the book, Business Model Generation, where the Business Model Canvas comes from, it doesn't actually talk about how to set the price. Is the video game going to be $49? Is it going to be $59, or £49 or £59? Well, there's a lot of thought that goes into that.

Then we have the Value Proposition Canvas, which highlights what are the pains the customer is facing? What are the gains that the customer is facing? What are the jobs to be done of the customer? How does my solution relate to the jobs? How does it help solve the pain the customer is feeling? How does it create gain for the customer? But if you read those books, and both of those books are on my shelf because they're fantastic books, it doesn't talk about pricing. So let's say I create a gain for you. Well, how much can I charge you for the gain that I've created? How do I structure that relationship? And how do I know, going back to my Business Model Canvas, that I've got the right market segment, I've got the right investment strategy, I might need to make an investment in the first one or two releases of my software or my product before I start to make a per unit profit because I'm evolving, it's called the J curve and the J curve is how much money am I investing before I, well, I have to be able to forecast that, I have to be able to model that, but the key input to that is what is the price, what is the mechanism of packaging that you're using, is it, for example, is it per user in a SaaS environment or is it per company in a SaaS environment? Is it a meter? Is it like an API transaction using Stripe or a payment processor, Adyen or Stripe or PayPal or any of the others that are out there? Or is it an API call where I'm charging a fraction of a penny for any API call? All of those elements have to be put into an economic model and a forecast has to be created.
Now, what's missing about this is that the Business Model Canvas and the Value Proposition Canvas don't give you the insight on how to set the price, they just say there is a price and we're going to use it in our equations. So what we've done is we've said, look, setting the price is itself a complex system, and what I mean by a complex system is that, let's say that I wanted to do an annual license for a new SaaS offering, but I offer that in Europe and now my solution is influenced or governed by GDPR compliance, where I have data retention and data privacy laws. So my technical architecture that has to enforce the license, also has to comply with something in terms of the market in which I'm selling. This complex system needs to be organised, and so what canvases do is in all of these cases, they let us take a complex system and put some structure behind the choices that we're making in that complex system so that we can make better choices in terms of system design. I know how I want this to work, I know how I want this to be structured, and therefore I can make system choices so the system is working in a way that benefits the stakeholders. Not just me, right, I'm not the only stakeholder, my customers are in this system, my suppliers are in this system, society itself might be in the system, depending on the system I'm building or the solution I'm building. So the canvases enable us to make system level choices that are hopefully more effective in achieving our goals. And like I said, the Business Model Canvas, the Value Proposition Canvas are fantastic, highly recommended, but they don't cover pricing. So we needed something to cover the actual pricing and packaging and licensing.

Ula Ojiaku
Well, that's awesome. So it's really more about going, taking a deeper dive into thoughtfully and structurally, if I may use that word, assessing the pricing.

Luke Hohmann
Yeah, absolutely.

Ula Ojiaku
Would you say that in doing this there would be some elements of, you know, testing and getting feedback from actual customers to know what price point makes sense?

Luke Hohmann
Absolutely. There's a number of ways in which customer engagement or customer testing is involved. The very first step that we advocate is a Customer Benefit Analysis, which is what are the actual benefits you're creating and how are your customers experiencing those benefits. Those experiences are both tangible and intangible and that's another one of the challenges that we face in the Agile community. In general, the Agile community spends a little bit more time on tangible or functional value than intangible value. So we, in terms of if I were to look at it in terms of a computer, we used to say speeds and feeds. How fast is the processor? How fast is the network? How much storage is on my disk space? Those are all functional elements. Over time as our computers have become plenty fast or plenty storage wise for most of our personal computing needs, we see elements of design come into play, elements of usability, elements of brand, and we see this in other areas. Cars have improved in quality so much that many of us, the durability of the car is no longer a significant attribute because all cars are pretty durable, they're pretty good, they're pretty well made. So now we look at brand, we look at style, we look at aesthetics, we look at even paying more for a car that aligns with our values in terms of the environment. I want to get an EV, why, because I want to be more environmentally conscious.
That's a value driven, that's an intangible factor. And so our first step starts with Customer Benefit Analysis looking at both functional or tangible value and intangible value, and you can't do that, as you can imagine, you can't do that without having customer interaction and awareness with your stakeholders and your customers, and that also feeds throughout the whole pricing process. Eventually, you're going to put your product in a market, and that's a form itself of market research. Did customers buy, and if they didn't buy, why did they not buy? Is it poorly packaged or is it poorly priced? These are all elements that involve customers throughout the process.

Ula Ojiaku
If I may, I know we've been on the topic of your latest book Software Profit Streams. I'm just wondering, because I can't help but try to connect the dots and I'm wondering if there might be a connection to one of your books, Innovation Games: Creating Breakthrough Products Through Collaborative Play, something like buy a feature in your book, that kind of came to mind, could there be a way of using that as part of the engagement with customers in setting a pricing strategy? I may be wrong, I'm just asking a question.

Luke Hohmann
I think you're making a great connection. There's two forms of relationship that Innovation Games and the Innovation Games book have with Software Profit Streams. One is, as you correctly noted, just the basics of market research, where do key people have pains or gains and what it might be worth. That work is also included in Alex Osterwalder's books, Value Proposition Design for example, when I've been doing Value Proposition Design and I'm trying to figure out the customer pains, you can use the Innovation Games Speed Boat. And when I want to figure out the gains, I can use the Innovation Game Product Box. Similarly, when I'm figuring out pricing and licensing, a way, and it's a very astute idea, a way to understand price points of individual features is to do certain kinds of market research. One form of market research you can do is Buy-a-Feature, which gives a gauge of what people are willing or might be willing to pay for a feature. It can be a little tricky because the normal construction of Buy-a-Feature is based on cost. However, your insight is correct, you can extend Buy-a-Feature such that you're testing value as opposed to cost, and seeing what, if you take a feature that costs X, but inflate that cost by Y in a Buy-a-Feature game, if people still buy it, it's a strong signal strength that first they want it, and second it may be a feature that you can, when delivered, would motivate you to raise the price of your offering and create a better profit for your company.

Ula Ojiaku
Okay, well, thank you. I wasn't sure if I was on the right lines.

Luke Hohmann
It's a great connection.

Ula Ojiaku
Thanks again. I mean, it's not original. I'm just piggybacking on your ideas. So with respect to, if we, if you don't mind, let's shift gears a bit because I know that, or I'm aware that whilst you were with Scaled Agile Incorporated, you know, you played a key part in developing some of their courses, like the Product POPM, and I think the Portfolio Management, and there was the concept about Participatory Budgeting. Can we talk about that, please?

Luke Hohmann
I'd love to talk about that, I mean it's a huge passion of mine, absolutely.
So in February of 2018, I started working with the framework team and in December of 2018, we talked about the possibility of what an acquisition might look like and the benefits it would create, which would be many. That closed in May of 2019, and in that timeframe, we were working on SAFe 5.0 and so there were a couple of areas in which I was able to make some contributions. One was in Agile product delivery competency, the other was in lean portfolio management. I had a significant hand in restructuring or adding the POPM, APM, and LPM courses, adding things like solutions by horizons to SAFe, taking the existing content on guardrails, expanding it a little bit, and of course, adding Participatory Budgeting, which is just a huge passion of mine. I've done Participatory Budgeting now for 20 years, I've helped organisations make more than five billion dollars of investment spending choices at all levels of companies, myself and my colleagues at Applied Frameworks, and it just is a better way to make a shared decision.

If you think about one of the examples they use about Participatory Budgeting, is my preferred form of fitness is I'm a runner and so, and my wife is also a fit person. So if she goes and buys a new pair of shoes or trainers and I go and buy a new pair of trainers, we don't care, because it's a small purchase. It's frequently made and it's within the pattern of our normal behaviour. However, if I were to go out and buy a new car without involving her, that feels different, right, it's a significant purchase, it requires budgeting and care, and is this car going to meet our needs? Our kids are older than your kids, so we have different needs and different requirements, and so I would be losing trust in my pair bond with my wife if I made a substantial purchase without her involvement.

Well, corporations work the same way, because we're still people. So if I'm funding a value stream, I'm funding the consistent and reliable flow of valuable items, that's what value stream funding is supposed to do. However, if there is a significant investment to be made, even if the value stream can afford it, it should be introduced to the portfolio for no other reason than the social structure of healthy organisations says that we do better when we're talking about these things, that we don't go off on our own and make significant decisions without the input of others. That lowers transparency, that lowers trust.

So I am a huge advocate of Participatory Budgeting, I'm very happy that it's included in SAFe as a recommended practice, both for market research and Buy-a-Feature in APM, but also more significantly, if you will, at the portfolio level for making investment decisions. And I'm really excited to share that we've just published an article a few weeks ago about Participatory Budgeting and what's called The Color of Money, and The Color of Money is sometimes when you have constraints on how you can spend money, and an example of a constraint is let's say that a government raised taxes to improve transportation infrastructure. Well, the money that they took in is constrained in a certain way.
You can't spend it, for example, on education, and so we have to show how Participatory Budgeting can be adapted to have relationships between items like this item requires this item as a precedent or The Color of Money, constraints of funding items, but I'm a big believer, we just published that article and you can get that at the Scaled Agile website, I'm a big believer in the social power of making these financial decisions and the benefits that accrue to people and organisations when they collaborate in this manner.

Ula Ojiaku
Thanks for going into that, Luke. So, would there be, in your experience, any type of organisation that's participatory? It's not a leading question, it's just genuine, there are typically outliers and I'm wondering in your experience, and in your opinion, if there would be organisations that it might not work for?

Luke Hohmann
Surprisingly, no, but I want to add a few qualifications to the effective design of a Participatory Budgeting session. When people hear Participatory Budgeting, there's different ways that you would apply Participatory Budgeting in the public and private sector. So I've done citywide Participatory Budgeting in cities and if you're a citizen of a city and you meet the qualifications for voting within that jurisdiction, in the United States, it's typically that you're 18 years old, in some places you have to be a little older, in some places you might have other qualifications, but if you're qualified to participate as a citizen in democratic processes, then you should be able to participate in Participatory Budgeting sessions that are associated with things like how do we spend taxes or how do we make certain investments.

In corporations it's not quite the same way. Just because you work at a company doesn't mean you should be included in portfolio management decisions that affect the entire company. You may not have the background, you may not have the training, you may be what my friends sometimes call a fresher. So I do a lot of work overseas, so freshers, they just may not have the experience to participate. So one thing that we look at in Participatory Budgeting and SAFe is who should be involved in the sessions, and that doesn't mean that every single employee should always be included, because their background, I mean, they may be a technical topic and maybe they don't have the right technical background. So we work a little bit harder in corporations to make sure the right people are there. Now, of course, if we're going to make a mistake, we tend to make the mistake of including more people than excluding, partly because in SAFe Participatory Budgeting, it's a group of people who are making a decision, not a one person, one vote, and that's really profoundly important because in a corporation, just like in a para-bond, your opinion matters to me, I want to know what you're thinking. If I'm looking in, I'll use SAFe terminology, if I'm looking at three epics that could advance our portfolio, and I'm a little unsure about two of those epics, like one of those epics, I'm like, yeah, this is a really good thing, I know a little bit about it, this matters, I'm going to fund this, but the other two I'm not so sure about, well, there's no way I can learn through reading alone what the opinions of other people are, because, again, there's these intangible factors.
There's these elements that may not be included in an ROI analysis, it's kind of hard to talk about brand and an ROI analysis - we can, but it's hard, so I want to listen to how other people are talking about things, and through that, I can go, yeah, I can see the value, I didn't see it before, I'm going to join you in funding this. So that's among the ways in which Participatory Budgeting is a little different within the private sector and the public sector and within a company.

The only other element that I would add is that Participatory Budgeting gives people the permission to stop funding items that are no longer likely to meet the investment or objectives of the company, or to change minds, and so one of the, again, this is a bit of an overhang in the Agile community, Agile teams are optimised for doing things that are small, things that can fit within a two or three week Sprint. That's great, no criticism there, but our customers and our stakeholders want big things that move the market needle, and the big things that move the market needle don't get done in two or three weeks, in general, and they rarely, like they require multiple teams working multiple weeks to create a really profoundly new important thing. And so what happens though, is that we need to make in a sense funding commitments for these big things, but we also have to have a way to change our mind, and so traditional funding processes, they let us make this big commitment, but they're not good at letting us change our mind, meaning they're not Agile. Participatory Budgeting gives us the best of both worlds. I can sit at the table with you and with our colleagues, we can commit to funding something that's big, but six months later, which is the recommended cadence from SAFe, I can come back to that table and reassess and we can all look at each other, because you know those moments, right, you've had that experience in visiting, because you're like looking around the table and you're like, yeah, this isn't working. And then in traditional funding, we keep funding what's not working because there's no built-in mechanism to easily change it, but in SAFe Participatory Budgeting, you and I can sit at the table and we can look at each other with our colleagues and say, yeah, you know, that initiative just, it's not working, well, let's change our mind, okay, what is the new thing that we can fund? What is the new epic? And that permission is so powerful within a corporation.

Ula Ojiaku
Thanks for sharing that, and whilst you were speaking, because again, me trying to connect the dots and thinking, for an organisation that has adopted SAFe or it's trying to scale Agility, because like you mentioned, Agile teams are optimised to iteratively develop or deliver, you know, small chunks over time, usually two to three weeks, but, like you said, there is a longer time horizon spanning months, even years into the future, sometimes for those worthwhile, meaty things to be delivered that moves the strategic needle if I may use that buzzword. So, let's say we at that lean portfolio level, we're looking at epics, right, and Participatory Budgeting, we are looking at initiatives on an epic to epic basis per se, where would the Lean Startup Cycle come in here?
So is it that Participatory Budgeting could be a mechanism that is used for assessing, okay, this is the MVP features that have been developed and all that, the leading indicators we've gotten, that's presented to the group, and on that basis, we make that pivot or persevere or stop decision, would that fit in?

Luke Hohmann
Yeah, so let's, I mean, you're close, but let me make a few turns and then it'll click better. First, let's acknowledge that the SAFe approach to the Lean Startup Cycle is not the Eric Ries approach, there are some differences, but let's separate how I fund something from how I evaluate something. So if I'm going to engage in the SAFe Lean Startup Cycle, part of that engagement is to fund an MVP, which is going to prove or disprove a given hypothesis. So that's an expenditure of money. Now there's, if you think about the expenditure of money, there's minimally two steps in this process - there's spending enough money to conduct the experiments, and if those experiments are true, making another commitment to spend money again, that I want to spend it.

The reason this is important is, let's say I had three experiments running in parallel and I'm going to use easy round numbers for a large corporation. Let's say I want to run three experiments in parallel, and each experiment costs me a million pounds. Okay. So now let's say that the commercialisation of each of those is an additional amount of money. So the portfolio team sits around the table and says, we have the money, we're going to fund all three. Okay, great. Well, it's an unlikely circumstance, but let's say all three are successful. Well, this is like a venture capitalist, and I have a talk that I give that relates the funding cycle of a venture capitalist to the funding cycle of an LPM team. While it's unlikely, you could have all three become successful, and this is what I call an oversubscribed portfolio. I've got three great initiatives, but I can still only fund one or two of them, I still have to make the choice. Now, of course, I'm going to look at my economics and let's say out of the three initiatives that were successfully proven through their hypothesis, let's say one of them is just clearly not as economically attractive, for whatever reason. Okay, we get rid of that one, now, I've got two, and if I can only fund one of them, and the ROI, the hard ROI is roughly the same, that's when Participatory Budgeting really shines, because we can have those leaders come back into the room, and they can say, which choice do we want to make now? So the evaluative aspect of the MVP is the leading indicators and the results of the proving or disproving of the hypotheses. We separate that from the funding choices, which is where Participatory Budgeting and LPM kick in.

Ula Ojiaku
Okay. So you've separated the proving or disproving the hypothesis of the feature, some of the features that will probably make up an epic. And you're saying the funding, the decision to fund the epic in the first place is a different conversation. And you've likened it to Venture Capital funding rounds. Where do they connect? Because if they're separate, what's the connecting thread between the two?

Luke Hohmann
The connected thread is the portfolio process, right? The actual process is the mechanism where we're connecting these things.

Ula Ojiaku
OK, no, thanks for the portfolio process. But there is something you mentioned, ROI - Return On Investment. And sometimes when you're developing new products, you don't know, you have assumptions.
And any ROI, sorry to put it this way, but you're really plucking figures from the air, you know, you're modelling, but there is no certainty, because you could hit the mark or you could go way off the mark. So where does innovation accounting come into place, especially if it's a product that's yet to make contact with, you know, real-life users, the customers?
Luke Hohmann
Well, let's go back to something you said earlier, and what you talked about earlier was the relationship that you have in market research and customer interaction. In making a forecast, let's go ahead and look at the notion of building a new product within a company, and this is again where the Agile community sometimes doesn't want to look at numbers or, quote unquote, get dirty, but we have to, because if I'm going to look at building a new idea, or taking a new idea into a product, I have to have a forecast of its viability. Is it economically viable? Is it a good choice? So innovation accounting is a way to look at certain data, but before that, I'm going to steal a quote from one of my friends, Jeff Patton: the most expensive way to figure this out is to actually build the product. So what can I do that's less expensive than building the product itself? I can still do market research, but maybe I wouldn't do an innovation game, maybe I'd do a formal survey and use a price point testing mechanism like Van Westendorp Price Point Analysis, which is a series of questions that you ask to triangulate on acceptable price ranges. I can do competitive benchmarking for similar products and services. What are people offering right now in the market? Again, if the product is completely novel, doing competitive benchmarking can be really hard. Right now there are so many people doing streaming that we can look at the competitive market, but when Netflix first offered streaming and it was the first one, their best approach was what we call reference pricing, which is: I have a reference price for how much I pay for the DVDs that I'm getting in the mail, so I'm going to base my streaming service on the reference price of that entertainment. Although it's not entirely clear that that was the best way to go, because you could also base the reference price on what you're paying for a movie ticket, and then you look at consumption, right, because movie tickets are expensive, so I only go to a movie maybe once every other month, whereas streaming is cheap, and so I can change my demand curve by lowering my price. This is why it's such a hard science: we have all these swirling factors. Getting specifically back to your question about the price point, I do have to do some market research before I go into the market to get some forecasting and some confidence, and research gives me more confidence, and of course, once I'm in the market, I'll know how closely my research matched the market reality. Maybe my research was misleading, and of course, there's some skill in designing research, as you know, to get answers that have high signal strength.
Ula Ojiaku
Thanks for clarifying. That makes perfect sense to me.
Luke Hohmann
It's kind of like a forecast. There's a group of Agile people who will say you shouldn't make forecasts. Well, I don't understand that, because people will say, well, I can't predict the future. Okay, I can't predict when I'm going to retire, but I'm planning to retire.
I don't know the date of my exact retirement, but my wife and I are planning our retirement, and we're saving, we're making certain investment choices for our future, because we expect to have a future together. Now our kids are older than yours. My kids are now in university, and so we're closer to retirement. So what I dislike about the Agile community is that people will sometimes say, well, I don't know the certainty of the event, therefore I can't plan for it. But that's really daft. For the listeners, Ula's daughter is a little younger than my kids, but she will be going to university one day, and depending on where she goes, that's a financial choice. So you could say, well, I don't know when she's going to university, and I can't predict what university she's going to go to, therefore I'm not going to save any money. Really? That makes no sense. So I really get upset when people in the Agile community say things like road mapping or forecasting is not Agile. It's entirely Agile. How you treat it is Agile or not Agile. Like when my child comes up to me and says, hey, you know about that going to university thing, I was thinking of taking a gap year. Okay, wait a minute, that's a change. That doesn't mean no, it means you're laughing, right? But that's a change. And so we respond to change, but we still have a plan.
Ula Ojiaku
It makes sense. So the reason, and I completely resonate with everything you said, the reason I raised ROI and it not being known is that in some situations people might be tempted to use it to game the budget allocation decision-making process. That's why I said you would pluck the ROI.
Luke Hohmann
Okay, let's talk about that. We actually address this in our recent paper, but I'll give you my personal experience. You are vastly more likely to get bad behaviour on ROI analysis when you do not do Participatory Budgeting, because there's no social construct to prevent bad behaviour. If I'm sitting down at a table, and that's virtual or physical, it doesn't matter, but let's take a perfect optimum size for a Participatory Budgeting group: six people. Let's say I'm a Director or a Senior Director in a company, and I'm sitting at a table and there's another Senior Director who's a peer, maybe there's a VP, maybe there's a person from engineering, maybe there's a person from sales, and we've got this mix of people, and I'm sitting at that table. I am not incentivised to come in with an inflated ROI, because those people are really intelligent, and given enough time, they're not going to support my initiative if I'm fibbing, if I'm lying. And I have a phrase for this: it's when ROI becomes RO-lie that it's dangerous. And so when I'm sitting at that table, what we find consistently, and with one of the clients that we did a fair amount of Participatory Budgeting with years ago, Cisco, what we found was the leaders at Cisco were creating tighter, more believable, and more defensible economic projections, precisely because they knew that they were going to be sitting with their peers. And it can go both ways. Sometimes people will overestimate the ROI or they underestimate the cost. Same outcome, right? I'm going to overestimate the benefit, and people would be like, yeah, I don't think you can build that product with three teams, you're going to need five or six teams, and people go, oh, I can get it done with, you know, 20 people.
Yeah, I don't think so, because two years ago we built this product, it's very similar, and, you know, we thought we could get it done with 20 people and we couldn't; we really needed a bigger group. So you see the social construct creating a more believable set of results, because people come to the Participatory Budgeting session knowing that their peers are in the room. And of course, we think we're smart, so our peers are as smart as we are, we're all smart people, and therefore the social construct of Participatory Budgeting quite literally creates a better input, which creates a better output.
Ula Ojiaku
That makes sense, definitely. Thanks for sharing that. I've found that very, very insightful and something I can easily apply. The reasoning behind it, the social pressure, quote unquote, knowing that you're not just going to put the paper forward but you'd have to defend it in a credible, believable way, makes sense. So just to wrap up now, what books have you found yourself recommending to people the most, and why?
Luke Hohmann
It's so funny, I get yelled at by my wife for how many books I buy. She'll go, "It's Amazon again. Another book. You know, there's this thing called the library."
Ula Ojiaku
Sounds like you should do Participatory Budgeting for your books then, sorry.
Luke Hohmann
No, no, I don't, I'd lose. Gosh, I love so many books. So there are a few books that I consider to be my go-to references and my go-to classics, but I also recommend that people re-read books, and sometimes I recommend re-reading books because you're a different person as you age and as you grow, and you see things differently. In fact, I'm right now re-reading, and of course it goes faster, the original Extreme Programming Explained by Kent Beck, a fantastic book. I just finished reading a few new books, but let me give you a couple of classics that I think everyone in our field should read, and why they should read them. I think everyone should read The Mythical Man-Month by Fred Brooks, because he really covers some very profound truths that haven't changed, things like Brooks' Law: adding programmers to a late project makes it later. He talks about the structure of teams and how to scale before scaling was big and important and cool. He talks about communication and conceptual integrity and the role of the architect. The other book that I'm going to give, which I hope is different from any book that anyone has ever given you, because it's one of my absolute favourite books and I give them away, is a book called Understanding Comics by Scott McCloud. Comics or graphic novels are an important medium for communication, and when we talk about storytelling, and we talk about how to frame information and how to present information, Understanding Comics is profoundly insightful in terms of how to present, share and show information. A lot of times I think we make things harder than they should be. So when I'm working with executives and some of the clients that I work with personally, when we talk about our epics, we actually tell stories about the hero's journey, and we actually hire comic book artists to help the executives tell their story in comic form or in graphic novel form. So I absolutely love Understanding Comics, I think that's a really profound book. Of course you mentioned Alex Osterwalder's books, Business Model Generation, Business Model Canvas. Those are fantastic books for Product Managers.
I'd also add, just looking at my own bookshelves, of course Innovation Games for PMs, and of course Software Profit Streams, because we have to figure out how to create sustainability, but in reality there are so many books that we love and that we share, and we grow together when we're sharing books. And I'll add one thing: please don't limit your books to technical books only. We're humans too. Recently, and by recently I mean literally this weekend, I was visiting one of my kids in Vermont, all the way across the country, and on the plane ride I finished two books. One was a very profound and deeply written book called Ponyboy. The other was Lessons in Chemistry, a very famous new book about a woman protagonist who's successful in the 60s; it was a super fun, light read, with some interesting lessons of course, because there are always lessons in books. And now, if it's okay and I'm not overstepping my boundaries, what would be a book that you'd like me to read? I love to add books to my list.
Ula Ojiaku
Oh my gosh, I didn't know. You are the first guest ever who's turned this around on me, but I tend to read multiple books at a time.
Luke Hohmann
Only two.
Ula Ojiaku
Yeah, and I kind of switch, maybe put some on my bedside, and you know, there are some on my Kindle and in the car, just depending. So I'm reading multiple books at a time, but based on what you've said, the one that comes to mind is the book by Oprah Winfrey titled What Happened to You? Conversations on Trauma, Resilience, and Healing, because like you said, it's not just about reading technical books, and we're human beings, and we find out that people sometimes behave in ways that are different to us, and it's not about saying what's wrong with you, because there is a story that we might not have been privy to, you know, in terms of their childhood, how they grew up, which affected their worldview and how they are acting, so things don't just suddenly happen. And the question that we have been asked, and we sometimes ask of people, and for me, I'm reading it from a parent's perspective, because I understand even more that my actions, my choices play a huge part in shaping my children. So it's not saying, what's wrong with you? It's saying, you know, what happened to you? And it traces back, based on research, because she wrote it with a renowned psychologist, I don't know his field but a renowned psychologist, so neuroscience-based psychological research on human beings, attachment theory and all that, just showing how early childhood experiences, even as early as maybe a few months old, tend to affect people well into adulthood. So that would be my recommendation.
Luke Hohmann
Thank you so much. That's a gift.
Ula Ojiaku
Thank you. You're the first person to ask me. So, my pleasure. So, before we go to the final words, where can the audience find you? Because you have a wealth of knowledge, a wealth of experience, and I am sure that people would want to get in touch with you, so how can they do this please?
Luke Hohmann
Yeah, well, they can get me on LinkedIn and they can find me at Applied Frameworks.
I tell you, I teach classes that are known to be very profound, because myself and the instructors at Applied Frameworks have very strong commitments to reserving class time for what we call the parking lot, or the ask-me-anything questions. Many times, after I've covered the core material in the class, having the opportunity to really frame how to apply something is really important. So I would definitely encourage people to take one of my classes, because you'll not just get the material, you'll get the reasons behind the material, which means you can apply it, but you'll also be able to ask us questions, and our commitment as a company is you can ask us anything, and if we don't know the answer, we'll help you find it. We'll help you find the expert or the person that you need to talk to, to help you out and be successful. And then, in terms of final words, I will simply ask people to remember that we get to work in the most amazing field, building things for other people, and it's joyful work. One of my phrases is: if you're not having fun at work, you're not doing Agile, there's something really wrong, there's something missing. Yes, we need to retrospect and we need to improve and we need to reflect, and all those important things, absolutely, but we should allow ourselves to experience the joy of serving others, and being of service, and building things that matter.
Ula Ojiaku
I love the concept of joyful Agile and getting joy in building things that matter, serving people, and may I add, also working together with amazing people. And for me it's been a joyful conversation with you, Luke, I really appreciate you making the time. I am definitely richer and more enlightened as a result of this conversation, so thank you so much once more.
Luke Hohmann
Thank you so much for having me here, thank you everyone for listening with us.
Ula Ojiaku
My pleasure. That's all we have for now. Thanks for listening. If you liked this show, do subscribe at www.agileinnovationleaders.com or your favourite podcast provider. Also share with friends and do leave a review on iTunes. This would help others find this show. I'd also love to hear from you, so please drop me an email at ula@agileinnovationleaders.com. Take care and God bless!

Talking Drupal
Talking Drupal #446 - Test Driven Development

Talking Drupal

Play Episode Listen Later Apr 15, 2024 69:11


Today we are talking about Test Driven Development, why it's important, and how it improves development with guest Alexey Korepov. We'll also cover Test Helpers as our module of the week. For show notes visit: www.talkingDrupal.com/446 Topics What does the term Test Driven Development (TDD) mean Does Drupal make use of TDD What makes TDD different from other methods of development Do you have to change your way of thinking What are some good resources to learn TDD Do you have any pointers for teams looking to get started Are certain kinds of projects better suited to TDD How have dev teams adapted to TDD Any advice on environment setup Any special tools Resources OpenTelemetry QA Engineer Kent Beck Test Driven Development: By Example Needs tests tag Local unit tests PHPUnit Guests Alexey Korepov - korepov.pro Murz Hosts Nic Laflin - nLighteneddevelopment.com nicxvan Martin Anderson-Clutz - mandclu Matt Glaman - mglaman.dev mglaman MOTW Correspondent Martin Anderson-Clutz - mandclu Brief description: Have you ever wanted an API that could dramatically simplify the process of writing Drupal unit tests? There's a module for that. Module name/project name: Test Helpers Brief history How old: created in Sep 2022 by today's guest, Alexey Korepov Versions available: 1.3.0, compatible with versions of Drupal 9.4 or newer, right up to Drupal 11 Maintainership Actively maintained, latest release less than 3 months ago Security coverage Test coverage, would be ironic if it didn't API documentation is available, linked from the project page Number of open issues: 2 open issues, which are actually feature requests Usage stats: 5 sites officially, but modules or sites can leverage Test Helpers without enabling it, and this usage is recommended, so the number is actually higher Module features and usage Provides a new container that automated tests can leverage to perform common tasks with much less code. For example, you can create a user or a node with a single line of code. You can also mock more complex operations like an entityQuery or loadMultiple call, again with a single line of code. Traditionally, writing unit tests is more complicated because, by design, they run without fully bootstrapping Drupal. That means that your test needs to mock functions or services in the code you're testing, which can result in unit tests being much longer than the code they're testing. Test Helpers also allows your tests to leverage existing mocks and stubs for popular services. The project page also links to the recording and slides for a talk Alexey gave about Test Helpers at DrupalCon Pittsburgh last year, if you want to do a deeper dive

Develpreneur: Become a Better Developer and Entrepreneur
The Importance of Properly Defining Requirements

Develpreneur: Become a Better Developer and Entrepreneur

Play Episode Listen Later Apr 11, 2024 30:25


In this podcast transcript, Rob and Michael delve into the pivotal topic of defining requirements in software development. They emphasize the significance of clear and detailed requirements, underscoring the potential pitfalls of vague or incomplete requirements. Throughout the conversation, they provide insights, anecdotes, and practical strategies for navigating the complexities of requirement gathering and management. Let's dive into the key points discussed by Rob and Michael. Defining Requirements The Importance of Clear Communication Rob and Michael stress the importance of clear communication in understanding and defining project requirements. They highlight the dangers of assumptions and ambiguity, advocating for a thorough exploration of the client's needs and expectations. Drawing from their experience, they emphasize the need for developers to engage in detailed discussions with clients to ensure alignment on project goals and outcomes. Understanding the End Goal A key topic we discuss is the necessity of understanding a project's end goal before delving into its requirements. Rob and Michael illustrate the importance of clarifying objectives and envisioning the desired outcome using the tree swing example. This requires us to ask probing questions and seek clarity on client expectations. By doing so, developers can ensure that the final product meets the intended purpose. Agile Approach to Requirement Management The conversation touches upon the agile approach to requirement management, emphasizing the iterative and adaptable nature of the process. Rob and Michael advocate for regular review and refinement of project requirements, especially in dynamic environments where priorities and circumstances may change over time. They underscore the value of maintaining a flexible backlog and continuously reassessing the relevance and feasibility of pending tasks. Test-Driven Development and Quality Assurance The discussion expands to encompass the role of test-driven development (TDD) and quality assurance (QA) in requirement validation. Rob and Michael highlight the importance of thinking critically about user interactions and anticipated outcomes when refining project requirements. They advocate for a proactive approach to testing and validation, leveraging QA principles to uncover potential issues and ensure the robustness of the final product. In conclusion, Rob and Michael emphasize the ongoing nature of requirement management and the importance of continuous improvement. They encourage developers to adopt a proactive mindset, actively engaging with clients and stakeholders to refine project requirements iteratively. By prioritizing clear communication, understanding the end goal, and embracing agile practices, developers can navigate the challenges of requirement gathering and deliver successful outcomes for their clients. Final Thoughts on Defining Requirements As Rob and Michael wrap up their discussion, they invite listeners to engage with their podcast and provide feedback or topic suggestions at info@develpreneur.com. They reiterate their commitment to delivering valuable insights and practical advice for developers, underscoring the collaborative nature of their community. With a focus on continuous learning and improvement, they invite listeners to join them on their journey of building better developers. By incorporating these key points and insights, developers can enhance their approach to requirement management and contribute to the success of their projects. 
Whether adopting agile methodologies, leveraging TDD principles, or prioritizing clear communication, a proactive and iterative approach to requirement definition is essential for delivering high-quality software solutions. Additional Resources for Defining Requirements Setting Realistic Expectations In Development Creating Your Product Requirements Changing Requirements – Welcome Them For Competitive Advantage Behind the Scenes Podcast Video

Hacker Public Radio
HPR4091: Test Driven Development Demo

Hacker Public Radio

Play Episode Listen Later Apr 8, 2024


Test Driven Development Demo with PyTest

TDD (discussed in hpr4075): write a new test and run it, it should fail. Write the minimal code that will pass the test. Optionally, refactor the code while ensuring the tests continue to pass.

PyTest: a framework for writing software tests with Python. Normally used to test Python projects, but it could test any software that Python can launch. If you can write Python, you can write tests in PyTest. Python assert - check that something is true.

Test Discovery: files named test*, functions named test*.

Demo Project: a trivial app as a demo that prints a summary of the latest HPR episode: Title, Host, Date, Audio File. How do we get the latest show data? The RSS feed, a feed parser, and the feed URL.

The pytest setup: the Python script we want to test will be named hpr_info.py. The test will be in a file named test_hpr_info.py.

test_hpr_info.py:

    import hpr_info

Run pytest:

    ModuleNotFoundError: No module named 'hpr_info'

We have written our first failing test. The minimum code to get pytest to pass is to create an empty file:

    touch hpr_info.py

Run pytest again:

    pytest
    ============================= test session starts ==============================
    platform linux -- Python 3.11.8, pytest-7.4.4, pluggy-1.4.0
    rootdir: /tmp/Demo
    collected 0 items

What just happened: we created a file named test_hpr_info.py with a single line to import hpr_info. We ran pytest and it failed because hpr_info.py did not exist. We created hpr_info.py and pytest ran without an error. This means we confirmed: pytest found the file named test_hpr_info.py and tried to execute its tests, and the import line is looking for a file named hpr_info.py.

Python Assert: in Python, assert tests if a statement is true, for example assert 1==1. In pytest, we can use assert to check that a function returns a specific value:

    assert module.function() == "Desired Output"

Without a comparison operator, we can also use assert to check that something exists, without specifying a specific value:

    assert dictionary.key

Adding a Test: import hpr_info will allow us to test functions inside hpr_info.py. We can reference functions inside hpr_info.py by prepending the name with hpr_info., for example hpr_info.HPR_FEED. The first step in finding the latest HPR episode is fetching a copy of the feed, so let's add a test to make sure the HPR feed is defined:

    import hpr_info

    def test_hpr_feed_url():
        assert hpr_info.HPR_FEED == "https://hackerpublicradio.org/hpr_ogg_rss.php"

Let's run pytest again, and we get the error:

    AttributeError: module 'hpr_info' has no attribute 'HPR_FEED'

So let's add just enough code to hpr_info.py to get the test to pass:

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

Run pytest again and we get 1 passed, indicating that pytest found 1 test, which passed. Hooray, we are doing TDD.

Next Test - Parsing the feed: let's plan a function that pulls the HPR feed and returns the feed data. We can test that the result of fetching the feed is an HTTP 200:

    def test_get_show_data():
        show_data = hpr_info.get_show_data()
        assert show_data.status == 200

Now when we run pytest we get 1 failed, 1 passed, and we can see the error:

    AttributeError: module 'hpr_info' has no attribute 'get_show_data'

Let's write the code to get the new test to pass. We will use the feedparser Python module to make it easier to parse the RSS feed. After we add the import and the new function, hpr_info.py looks like this:

    import feedparser

    HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"

    def get_show_data():
        showdata = feedparser.parse(HPR_FEED)
        return showdata

Let's run pytest again. When I have more than one test, I like to add the -v flag so I can see each test as it runs:

    test_hpr_info.py::test_hpr_feed_url PASSED [ 50%]
    test_hpr_info.py::test_get_show_data PASSED [100%]

Next Test - Get the most recent episode from the feed: now that we have the feed, let's test getting the first episode. feedparser entries are dictionaries, so let's test what the function returns to make sure it looks like an RSS feed entry:

    def test_get_latest_entry():
        latest_entry = hpr_info.get_latest_entry()
        assert latest_entry["title"]
        assert latest_entry["published"]

After we verify that the test fails, we can write the code to return the newest entry data in hpr_info.py, and pytest -v will show 3 passing tests:

    def get_latest_entry():
        showdata = get_show_data()
        return showdata["entries"][0]

Final Test: let's test a function to see if it returns the values we want to print. We don't test for specific values, just that the data exists:

    def test_get_entry_data():
        entry_data = hpr_info.get_entry_data(hpr_info.get_latest_entry())
        assert entry_data["title"]
        assert entry_data["host"]
        assert entry_data["published"]
        assert entry_data["file"]

And then the code to get the test to pass:

    def get_entry_data(entry):
        for link in entry["links"]:
            if link.get("rel") == "enclosure":
                enclosure = link.get("href")
        return {
            "title": entry["title"],
            "host": entry["authors"][0]["name"],
            "published": entry["published"],
            "file": enclosure,
        }

Finish the HPR info script: now that we have tested that we can get all the info we want from the most recent episode, let's add the last bit of code to hpr_info.py to print the episode info:

    if __name__ == "__main__":
        most_recent_show = get_entry_data(get_latest_entry())
        print()
        print(f"Most Recent HPR Episode")
        for x in most_recent_show:
            print(f"{x}: {most_recent_show.get(x)}")

The if __name__ == "__main__": guard ensures code inside this block will only run when the script is called directly, and not when imported by test_hpr_info.py.

Summary: TDD is a programming method where you write tests prior to writing code. TDD forces me to write smaller functions and more modular code. Link to HPR info script and tests - TODO. Additional tests to add: check that the date is the most recent weekday, check that the host is listed on the correspondents page, check others. Project Files - https://gitlab.com/norrist/hpr-pytest-demo
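One gap worth noting in the demo above: every run of these tests fetches the live HPR feed over the network. A common pytest technique for keeping a suite fast and deterministic is the built-in monkeypatch fixture, which swaps the network-bound function for a stub inside a single test. A minimal sketch, reusing the names from the demo (the fake feed contents are invented for illustration):

    import hpr_info

    def test_get_latest_entry_offline(monkeypatch):
        # Fake the minimal structure feedparser returns: a dict with an "entries" list
        fake_feed = {"entries": [{"title": "Example Episode", "published": "Mon, 01 Jan 2024 00:00:00 +0000"}]}
        # Replace the network-bound function for the duration of this test only
        monkeypatch.setattr(hpr_info, "get_show_data", lambda: fake_feed)
        entry = hpr_info.get_latest_entry()
        assert entry["title"] == "Example Episode"

Because get_latest_entry looks up get_show_data in its own module at call time, the patched attribute is what actually runs, and no network request is made.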

Smart Software with SmartLogic
"Testing 1, 2, 3" with Joel Meador and Charles Suggs

Smart Software with SmartLogic

Play Episode Listen Later Mar 21, 2024 45:40


The Elixir Wizards Podcast is back with Season 12 Office Hours, where we talk with the internal SmartLogic team about the stages of the software development lifecycle. For the season premiere, "Testing 1, 2, 3," Joel Meador and Charles Suggs join us to discuss the nuances of software testing. In this episode, we discuss everything from testing philosophies to test driven development (TDD), integration, and end-user testing. Our guests share real-world experiences that highlight the benefits of thorough testing, challenges like test maintenance, and problem-solving for complex production environments. Key topics discussed in this episode: How to find a balance that's cost-effective and practical while testing Balancing test coverage and development speed The importance of clear test plans and goals So many tests: Unit testing, integration testing, acceptance testing, penetration testing, automated vs. manual testing Agile vs. Waterfall methodologies Writing readable and maintainable tests Testing edge cases and unexpected scenarios Testing as a form of documentation and communication Advice for developers looking to improve testing practices Continuous integration and deployment Links mentioned: https://smartlogic.io/ Watch this episode on YouTube! youtu.be/unx5AIvSdc Bob Martin “Clean Code” videos - “Uncle Bob”: http://cleancoder.com/ JUnit 5 Testing for Java and the JVM https://junit.org/junit5/ ExUnit Testing for Elixir https://hexdocs.pm/exunit/ExUnit.html Code-Level Testing of Smalltalk Applications https://www.cs.ubc.ca/~murphy/stworkshop/28-7.html Agile Manifesto https://agilemanifesto.org/ Old Man Yells at Cloud https://i.kym-cdn.com/entries/icons/original/000/019/304/old.jpg TDD: Test Driven Development https://www.agilealliance.org/glossary/tdd/ Perl Programming Language https://www.perl.org/ Protractor Test Framework for Angular and AngularJS protractortest.org/#/ Waterfall Project Management https://business.adobe.com/blog/basics/waterfall CodeSync Leveling up at Bleacher Report A cautionary tale - PETER HASTIE https://www.youtube.com/watch?v=P4SzZCwB8B4 Mix ecto.dump https://hexdocs.pm/ectosql/Mix.Tasks.Ecto.Dump.html Apache JMeter Load Testing in Java https://jmeter.apache.org/ Pentest Tools Collection - Penetration Testing https://github.com/arch3rPro/PentestTools The Road to 2 Million Websocket Connections in Phoenix https://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections Donate to Miami Indians of Indiana https://www.miamiindians.org/take-action Joel Meador on Tumblr https://joelmeador.tumblr.com/ Special Guests: Charles Suggs and Joel Meador.

Better Software Design
83. O testowaniu systemu end-to-end i Quality Assurance z Arkadiuszem Jelonkiem

Better Software Design

Play Episode Listen Later Mar 19, 2024 64:43


Responsibility for quality assurance in a project does not rest with a single person but with the whole team. And the QA role is not limited to designing and implementing test cases in the system inspection process; it also means being an advocate for quality in the project, and sometimes asking hard questions about why certain features are built one way and not another. So far, software quality has come up here mainly from the developer's perspective, in conversations about unit tests, Test Driven Development, and various flavours of the test pyramid. It is also worth looking at a project's test structure from the other end: black-box, end-to-end tests of the whole system. Today's podcast guest is Arkadiusz Jelonek, who works as a Senior Quality Assurance Engineer at eSky Group. We talk not only about end-to-end system tests, but also about the role QA plays in a project and why QA can sometimes be translated as Questions Asker. In today's episode, Arek and I discuss, among other things: the role of Quality Assurance in a project; gaining experience working in a software house, a mature product company, and a startup; the questions worth asking your team when stepping into the Questions Asker role; the role of end-to-end tests in a project; classifying, differentiating, and choosing the right test automation tools; the origins of Playwright and the problems it solves; visual regression testing; and ways to avoid brittleness in E2E tests. This episode is also the first in a mini-series of conversations about growing your own career in IT beyond the developer role. Thinking about working in this industry as, say, a Solution Architect, Engineering Manager, or Chief Technology Officer, or about becoming a consultant? These are just some of the roles that will appear in future episodes of Better Software Design… Additional materials: The Evolution of Browser Automation, an article and talk by Christian Bromann of Sauce Labs; Zawód tester - Od decyzji do zdobycia doświadczenia, a book by Radosław Smilgin for beginners; Pasja testowania (2nd, expanded edition), a book by Krzysztof Jadczyk, also for beginners
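Playwright, which the episode discusses, also ships a Python API, so the shape of an end-to-end check is easy to show. A minimal sketch (the URL and locator are illustrative, not from the episode); the usual advice for avoiding brittleness is to prefer user-facing locators such as roles and accessible names over deep CSS selectors, and to rely on Playwright's auto-waiting rather than sleeps:

    from playwright.sync_api import sync_playwright

    def test_homepage_heading():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://example.com")  # illustrative URL
            # Role-based locators survive markup refactors better than CSS paths
            page.get_by_role("heading", name="Example Domain").wait_for()
            assert "Example" in page.title()
            browser.close()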

Azure DevOps Podcast
Kent Beck: Tidy First - Episode 285

Azure DevOps Podcast

Play Episode Listen Later Feb 19, 2024 40:42


Original signer of the Agile Manifesto, author of the Extreme Programming book series, rediscoverer of Test-Driven Development, and inspiring Keynote Speaker. I read his TDD book 20 years ago.   Topics of Discussion: [4:06] What led Kent into extreme programming, and realizing that technical mastery alone is not enough for project success. [6:24] The significance of extreme programming. [9:15] The Agile Manifesto. [10:46] The importance of taking responsibility seriously. [14:06] What was the inspiration behind Tidy First? [16:27] Why software design is an important skill. [17:31] The human aspect dominates in design. [19:40] You can make large changes in small safe steps. [23:09] Normalizing symmetry. [30:17] Preserving flexibility in design through empirical and reversible changes rather than rather than speculative or reactive design. [31:51] Kent's experimentation with the GPT phase of AI on publications. [32:11] Rent-A-Kent to get better answers around software development. [37:19] Advice for young programmers.   Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon! Jeffrey Palermo's Twitter — Follow to stay informed about future events! Rent-A-Kent Tidy First? by Kent Beck Test Driven Development, by Kent Beck Extreme Programming Explained, by Kent Beck with Cynthia Andres Implementation Patterns, by Kent Beck   Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.

Test & Code - Python Testing & Development
212: Canon TDD - by Kent Beck

Test & Code - Python Testing & Development

Play Episode Listen Later Jan 13, 2024 7:54


In 2002, Kent Beck released a book called "Test Driven Development by Example". In December of 2023, Kent wrote an article called "Canon TDD". With Kent's permission, this episode contains the full content of the article. Brian's commentary is saved for a follow-up episode. Links: Canon TDD; Test Driven Development by Example. The Complete pytest Course: level up your testing skills and save time during coding and maintenance. Check out courses.pythontest.com
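As a rough sketch of the loop the article describes (my illustration, not Kent's code): keep a list of test scenarios, turn exactly one into a concrete test, make it pass, then tidy and repeat.

    # Test list: "empty cart totals zero", "total sums item prices", ...
    # Step 1: turn exactly one scenario into a concrete, failing test.
    def test_empty_cart_totals_zero():
        assert cart_total([]) == 0

    # Step 2: write just enough code to make it (and all earlier tests) pass.
    def cart_total(prices):
        return sum(prices)

    # Step 3: optionally refactor while green, then pick the next scenario.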

Ruby on Rails Podcast
Episode 501: Ruby For All Crossover!

Ruby on Rails Podcast

Play Episode Listen Later Jan 3, 2024 26:54


I joined Julie and Andrew from Ruby For All to talk about Test Driven Development, attending conferences, and using TDD as a thinking tool. This episode was recorded at RubyConf in San Diego. Show Notes [Ruby For All] - https://www.rubyforall.com/ Sponsors Honeybadger (https://www.honeybadger.io/) As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!

DevTalles
145-¿Qué es y para qué sirve un TDD o Test Driven Development?

DevTalles

Play Episode Listen Later Dec 17, 2023 16:32


Test driven development (TDD), known in Spanish as desarrollo guiado por pruebas, is a programming approach used during software development in which unit tests are written before the code itself. --- Support this podcast: https://podcasters.spotify.com/pod/show/fernando-her85/support

Embedded
466: Attacked by a Goose on the Way to the Office

Embedded

Play Episode Listen Later Dec 14, 2023 68:19


Ralph Hempel spoke with us about the development of Lego Mindstorms from hacking the initial interface to running Debian Linux as well as programming Mindstorms in Python. Happy 25th birthday to Lego Mindstorms! Pybricks is a MicroPython based coding environment that works across all Lego PoweredUp hubs and on the latest Mindstorms elements. The creators are David Lechner and Laurens Valk. Ralph was the first person to boot a full Debian Linux distro on the brick, see EV3Dev, a Debian Linux for Lego Mindstorms EV3.  BrickLink was originally a site for third party resellers of new and used Lego sets and elements. The site was purchased by the Lego Group a few years ago. It's still a great place to buy individual parts - for example a 4 port PoweredUp hub to run the new PyBricks on :-) ReBrickable is a site dedicated to taking off-the-shelf Lego sets, and creating something new with the set. In particular see the MOCs Designed by LUCAMOCS, fantastic Technic vehicles as well as interesting designs for vehicle subsystems. Yoshihito ISOGAWA - YouTube is an absolute genius at coming up with practical applications of new LEGO Elements. Ralph recommends his books as “awesome to read”. LEGO uses 18 Cucumbers to build real Log House  Ralph highly recommends Test Driven Development for Embedded C  by James Grenning (who has been on the show: 270: Broccoli is Good Too, 109: Resurrection of Extreme Programming, and 30: Eventually Lightning Strikes). Origami Simulator and Elecia's origami generating python code on github Transcript Nordic Semiconductor empowers wireless innovation, by providing hardware, software, tools and services that allow developers to create the IoT products of tomorrow. Learn more about Nordic Semiconductor at nordicsemi.com, check out the DevAcademy at academy.nordicsemi.com and interact with the Nordic Devzone community at devzone.nordicsemi.com.

Smart Software with SmartLogic
Web Development Frameworks: Elixir and Phoenix vs. Ruby on Rails with Owen Bickford & Dan Ivovich

Smart Software with SmartLogic

Play Episode Listen Later Dec 7, 2023 41:41


On today's episode, Elixir Wizards Owen Bickford and Dan Ivovich compare notes on building web applications with Elixir and the Phoenix Framework versus Ruby on Rails. They discuss the history of both frameworks, key differences in architecture and approach, and deciding which programming language to use when starting a project. Both Phoenix and Rails are robust frameworks that enable developers to build high-quality web apps—Phoenix leverages functional programming in Elixir and Erlang's networking for real-time communication. Rails follows object-oriented principles and has a vast ecosystem of plug-ins. For data-heavy CRUD apps, Phoenix's immutable data pipelines provide some advantages. Developers can build great web apps with either Phoenix or Rails. Phoenix may have a slight edge for new projects based on its functional approach, built-in real-time features like LiveView, and ability to scale efficiently. But, choosing the right tech stack depends heavily on the app's specific requirements and the team's existing skills. Topics discussed in this episode: History and evolution of Phoenix Framework and Ruby on Rails Default project structure and code organization preferences in each framework Comparing object-oriented vs functional programming paradigms CRUD app development and interaction with databases Live reloading capabilities in Phoenix LiveView vs Rails Turbolinks Leveraging WebSockets for real-time UI updates Testing frameworks like RSpec, Cucumber, Wallaby, and Capybara Dependency management and size of standard libraries Scalability and distribution across nodes Readability and approachability of object-oriented code Immutability and data pipelines in functional programming Types, specs, and static analysis with Dialyzer Monkey patching in Ruby vs extensible core language in Elixir Factors to consider when choosing between frameworks Experience training new developers on Phoenix and Rails Community influences on coding styles Real-world project examples and refactoring approaches Deployment and dev ops differences Popularity and adoption curves of both frameworks Ongoing research into improving Phoenix and Rails Links Mentioned in this Episode: SmartLogic.io (https://smartlogic.io/) Dan's LinkedIn (https://www.linkedin.com/in/divovich/) Owen's LinkedIn (https://www.linkedin.com/in/owen-bickford-8b6b1523a/) Ruby https://www.ruby-lang.org/en/ Rails https://rubyonrails.org/ Sams Teach Yourself Ruby in 21 Days (https://www.overdrive.com/media/56304/sams-teach-yourself-ruby-in-21-days) Learn Ruby in 7 Days (https://www.thriftbooks.com/w/learn-ruby-in-7-days---color-print---ruby-tutorial-for-guaranteed-quick-learning-ruby-guide-with-many-practical-examples-this-ruby-programming-book--to-build-real-life-software-projects/18539364/#edition=19727339&idiq=25678249) Build Your Own Ruby on Rails Web Applications (https://www.thriftbooks.com/w/build-your-own-ruby-on-rails-web-applications_patrick-lenz/725256/item/2315989/?utm_source=google&utm_medium=cpc&utm_campaign=low_vol_backlist_standard_shopping_customer_acquisition&utm_adgroup=&utm_term=&utm_content=593118743925&gad_source=1&gclid=CjwKCAiA1MCrBhAoEiwAC2d64aQyFawuU3znN0VFgGyjR0I-0vrXlseIvht0QPOqx4DjKjdpgjCMZhoC6PcQAvD_BwE#idiq=2315989&edition=3380836) Django https://github.com/django Sidekiq https://github.com/sidekiq Kafka https://kafka.apache.org/ Phoenix Framework https://www.phoenixframework.org/ Phoenix LiveView https://hexdocs.pm/phoenixliveview/Phoenix.LiveView.html#content Flask https://flask.palletsprojects.com/en/3.0.x/ 
WebSockets API https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API WebSocket connection for Phoenix https://github.com/phoenixframework/websock Morph Dom https://github.com/patrick-steele-idem/morphdom Turbolinks https://github.com/turbolinks Ecto https://github.com/elixir-ecto Capybara Testing Framework https://teamcapybara.github.io/capybara/ Wallaby Testing Framework https://wallabyjs.com/ Cucumber Testing Framework https://cucumber.io/ RSpec https://rspec.info/

No Plans to Merge
This is an Ad for John's Board Game

No Plans to Merge

Play Episode Listen Later Nov 30, 2023 1:43


Test & Code - Python Testing & Development
210: TDD - Refactor while green

Test & Code - Python Testing & Development

Play Episode Listen Later Nov 30, 2023 18:19


Test Driven Development. Red, Green, Refactor. Do we have to do the refactor part? Does the refactor at the end include tests? Or can I refactor the tests at any time? Why is refactor at the end? This episode talks through this with an example. Sponsored by PyCharm Pro: use code PYTEST for 20% off PyCharm Professional at jetbrains.com/pycharm. First 10 to sign up this month get a free month of AI Assistant. See how easy it is to run pytest from PyCharm at pythontest.com/pycharm. The Complete pytest Course: for the fastest way to learn pytest, go to courses.pythontest.com, whether you're new to testing or pytest, or just want to maximize your efficiency and effectiveness when testing.
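To make the episode's question concrete, a small sketch (mine, not Brian's): the test below stays untouched while the implementation is refactored, and rerunning it after each small change is what keeps the refactor safe.

    def test_median_of_odd_length_list():
        assert median([3, 1, 2]) == 2

    # First green version: quick and naive
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]

    # Refactoring while green means reshaping this body (extracting helpers,
    # renaming, simplifying) without touching the test, rerunning it each time.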

No Plans to Merge
WIRECON

No Plans to Merge

Play Episode Listen Later Nov 29, 2023 104:58


Daniel and Caleb wax nostalgic about the various eras of Laravel, their long and eventful friendship, Laracon talk nerves, and a tentative plan for WIRECON.

Rails with Jason
198 - TDD with Wisen Tanasa

Rails with Jason

Play Episode Listen Later Oct 9, 2023 49:24


On this episode, Wisen Tanasa joins me to talk about Test Driven Development. We discuss why TDD is intuitive, translating specifications into tests, the balance between design and execution, developing a walking skeleton, the value of learning design principles and UX, minimizing the need to use willpower with positive feedback loops, and understanding what TDD is. Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce; The Non-Designer's Design Book by Robin Williams; Wisen Tanasa on Twitter; Wisen Tanasa on LinkedIn; Wisen Tanasa's Newsletter Quantum Steps
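The idea of translating specifications into tests fits in a few lines. A hedged sketch in Python rather than Ruby (the spec wording and names are invented for illustration): the specification "orders over 100 get a 10% discount" becomes executable examples written before the implementation.

    # Spec: "orders over 100 get a 10% discount", written as tests first
    def test_orders_over_100_get_ten_percent_discount():
        assert discounted_total(150) == 135

    def test_orders_at_or_under_100_pay_full_price():
        assert discounted_total(100) == 100

    # Minimal implementation, written after the tests; integer amounts keep it exact
    def discounted_total(amount):
        return amount * 90 // 100 if amount > 100 else amount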

Scrum Master Toolbox Podcast
Using Experiments To Drive Agile Change, Lessons from a Test Automation Initiative | Lorraine Chambers

Scrum Master Toolbox Podcast

Play Episode Listen Later Sep 27, 2023 10:02


Lorraine Chambers: Using Experiments To Drive Agile Change, Lessons from a Test Automation Initiative. Read the full Show Notes and search through the world's largest audio library on Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes. This story starts with an agile transformation featuring a shift-left initiative. The team faced challenges in implementing test automation due to unclear policies and time allocation. Recognizing the challenges faced by the teams, Lorraine engaged with managers and leaders, advocating to give teams the support they needed. Through that, it was possible to help the teams with guidance on Test-Driven Development and support in using an internal testing tool. When it comes to helping teams adopt new practices, Lorraine advises identifying policy and decision-makers, gathering relevant data, and proposing time-limited experiments for major changes, culminating in retrospective evaluations. As Scrum Masters, we work with change continuously! Do you have your own change framework that provides the guidance and cues you need when working with change? The Lean Change Management framework is a fully defined, lean-startup inspired change framework that can be used as the backbone of any change process! You can buy Lean Change Management the book at Amazon. Also available in French, Spanish, German and Portuguese. About Lorraine Chambers: Lorraine's vision of excellence is summed up in the words of the philosopher Lao Tzu -- "A leader is best when people barely know he exists ..." She's held several roles in the Fintech industry, including Product Owner and Quality Assurance. She's a native New Yorker who loves travel, music and museums. You can link with Lorraine Chambers on LinkedIn and connect with Lorraine Chambers on Instagram.

Empower Apps
It Depends with Brandon Williams

Empower Apps

Play Episode Listen Later Aug 9, 2023 41:00


Brandon Williams from Point-Free comes on to talk about what dependencies are and managing them, whether in testing or dealing with scaling. Guest Brandon Williams @mbrandonw Mastodon @mbrandonw@hachyderm.io Point-Free Point-Free @ Github Related Episodes Episode 80 - A Tour of Software Testing with Christina Moulton Episode 144 - Yak Shaving with Tim Mitra Episode 137 - Humane Development with Jill Scott Episode 133 - The Composable Architecture with Zev Eisenberg Episode 123 - Microapps Architecture with Majid Jabrayilov Episode 93 - Test-Driven Development in Swift with Gio Lodi Episode 107 - Expert Swift with Shai Mishali Related Links Swift AST Explorer NYSwifty 23 | Take control of your dependencies, don't let them control you Social Media: Email leo@brightdigit.com GitHub - @brightdigit Twitter BrightDigit - @brightdigit Leo - @leogdion LinkedIn BrightDigit Leo Instagram - @brightdigit Patreon - empowerappshow Credits: Music from https://filmmusic.io "Blippy Trance" by Kevin MacLeod (https://incompetech.com) License: CC BY (http://creativecommons.org/licenses/by/4.0/) (00:00) - EmpowerApps.Show (00:03) - What's a dependency (03:28) - Testing and Dependencies (07:45) - Mocking Dependencies (12:16) - Testing VS Persistence (15:31) - Testing and the Community (18:34) - Simulator and Dependencies (21:18) - Testing Spectrum (23:00) - Safety and Ergonomics (33:11) - WWDC 2023 ★ Support this podcast on Patreon ★
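The episode is Swift-centric, but the core move, making dependencies explicit so tests can substitute them, is language-agnostic. A minimal Python sketch (names invented for illustration, not Point-Free's API):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Clock:
        # The dependency is just a function the feature needs, passed in explicitly
        now: Callable[[], float]

    def greeting(clock: Clock) -> str:
        # Logic depends on the injected clock, never on the wall clock directly
        return "late" if clock.now() >= 22.0 else "on time"

    def test_greeting_is_late_at_night():
        fake_clock = Clock(now=lambda: 23.0)  # deterministic stand-in
        assert greeting(fake_clock) == "late"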

Cup o' Go

Today we're joined by guest co-host Adelina Simion! Adelina works at Form3, is a co-organizer of Women Who Go London and London Gophers, and is the author of Test-Driven Development in Go.

Tech Lead Journal
#139 - A Developer's Guide to Effective Software Testing - Mauricio Aniche

Tech Lead Journal

Play Episode Listen Later Jul 3, 2023 55:01


“An effective developer is an effective software tester. As a developer, it's your responsibility to make sure what you do works. And automated testing is such an easy and cheap way of doing it." Mauricio Aniche is the author of “Effective Software Testing”. In this episode, Mauricio explained how to become a more effective software developer by using effective and systematic software testing approaches. We discussed several such testing techniques, such as the testing pyramid, specification-based testing, boundary testing, structural testing, mutation testing, and property testing. Mauricio also shared his interesting view on test-driven development (TDD) and suggested one area we can work on to improve our test maintainability.   Listen out for: Career Journey - [00:03:43] Winning Teacher of the Year - [00:06:07] An Effective Developer is an Effective Tester - [00:09:33] Reasons for Writing Automated Tests - [00:10:43] Systematic Tester - [00:13:45] Testing Pyramid - [00:17:50] Unit vs Integration Test - [00:20:25] Specification-Based Testing - [00:22:55] Behavior-Driven Design - [00:25:34] Boundary Testing - [00:27:01] Structural Testing & Code Coverage - [00:30:16] Mutation Testing - [00:35:31] Property Testing - [00:38:45] Test-Driven Development - [00:42:00] Test Maintainability - [00:46:03] Growing Object-Oriented Software, Guided by Tests - [00:48:07] 3 Tech Lead Wisdom - [00:49:24] _____ Mauricio Aniche's Bio: Dr. Maurício Aniche's life mission is to help software engineers become better and more productive. Maurício is a Tech Lead at Adyen, where he heads the Tech Academy team and leads different engineering enablement initiatives. Maurício is also an assistant professor of software engineering at Delft University of Technology in the Netherlands. His teaching efforts in software testing earned him the Computer Science Teacher of the Year 2021 award and the TU Delft Education Fellowship, a prestigious fellowship given to innovative lecturers. He is the author of “Effective Software Testing: A Developer's Guide”, published by Manning in 2022. He's currently working on a new book entitled “Simple Object-Oriented Design”, which should be on the market soon. Follow Mauricio: LinkedIn – linkedin.com/in/mauricioaniche Twitter – @mauricioaniche Website – effective-software-testing.com Newsletter – effectivesoftwaretesting.substack.com _____ Our Sponsors: Are you looking for new cool swag? Tech Lead Journal now offers swag that you can purchase online. The swag is printed on demand based on your preference, and will be delivered safely to you all over the world where shipping is available. Check out all the cool swag available by visiting techleadjournal.dev/shop. And don't forget to show it off once you receive any of it. Like this episode? Show notes & transcript: techleadjournal.dev/episodes/139 Follow @techleadjournal on LinkedIn, Twitter, and Instagram. Buy me a coffee or become a patron.
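Of the techniques Mauricio covers, property testing is the easiest to show small. A hedged sketch using the hypothesis library, a common Python property-testing tool (the function under test is invented for illustration): instead of asserting on hand-picked examples, you state a property that must hold for every generated input.

    from hypothesis import given, strategies as st

    def normalize_spaces(text: str) -> str:
        # Function under test: collapse runs of whitespace into single spaces
        return " ".join(text.split())

    @given(st.text())
    def test_normalize_is_idempotent(text):
        # Property: applying the function twice equals applying it once
        once = normalize_spaces(text)
        assert normalize_spaces(once) == once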

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Debugging the Internet with AI agents – with Itamar Friedman of Codium AI and AutoGPT

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 25, 2023 62:36


We are hosting the AI World's Fair in San Francisco on June 8th! You can RSVP here. Come meet fellow builders, see amazing AI tech showcases at different booths around the venue, all mixed with elements of traditional fairs: live music, drinks, games, and food! We are also at Amplitude's AI x Product Hackathon and are hosting our first joint Latent Space + Practical AI Podcast Listener Meetup next month! We are honored by the rave reviews for our last episode with MosaicML! Reviews are also welcome on Apple Podcasts and Twitter/HN/LinkedIn/Mastodon etc! We recently spent a wonderful week with Itamar Friedman, visiting all the way from Tel Aviv in Israel: * We first recorded a podcast (releasing with this newsletter) covering Codium AI, the hot new VSCode/JetBrains IDE extension focused on test generation for Python and JS/TS, with plans for a Code Integrity Agent. * Then we attended Agent Weekend, where the founders of multiple AI/agent projects got together, with a presentation from Toran Bruce Richards on Auto-GPT's roadmap and then from Itamar on Codium's roadmap. * Then some of us stayed to take part in the NextGen Hackathon and won first place with the new AI Maintainer project. So… that makes it really hard to recap everything for you. But we'll try! Podcast: Codium: Code Integrity with Zero Bugs. When it launched in 2021, there was a lot of skepticism around GitHub Copilot. Fast forward to 2023, and 40% of all code is checked in unmodified from Copilot. Codium burst onto the scene this year, emerging from stealth with an $11m seed, their own foundation model (TestGPT-1) and a vision to revolutionize coding by 2025. You might have heard of "DRY” programming (Don't Repeat Yourself), which aims to replace repetition with abstraction. Itamar came on the pod to discuss their “extreme DRY” vision: if you already spent time writing a spec, why repeat yourself by writing the code for it? If the spec is thorough enough, automated agents could write the whole thing for you. Live Demo Video Section: this is referenced in the podcast about 6 minutes in. Timestamps, show notes, and transcript are below the fold. We would really appreciate it if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice! Auto-GPT: A Roadmap To The Future of Work. Making his first public appearance, Toran (perhaps better known as @SigGravitas on GitHub) presented at Agents Weekend. Lightly edited notes for those who want a summary of the talk: * What is AutoGPT? AutoGPT is an AI agent that utilizes a Large Language Model to drive its actions and decisions. It can be best described as a user sitting at a computer, planning and interacting with the system based on its goals. Unlike traditional LLM applications, AutoGPT does not require repeated prompting by a human. Instead, it generates its own 'thoughts', criticizes its own strategy and decides what next actions to take. * AutoGPT was released on GitHub in March 2023, and went viral on April 1 with a video showing automatic code generation. 2 months later it has 132k+ stars, is the 29th highest ranked open-source project of all time, with a thriving community of 37.5k+ Discord members and 1M+ downloads. * What's next for AutoGPT? The initial release required users to know how to build and run a codebase. They recently announced plans for a web/desktop UI and mobile app to enable nontechnical/everyday users to use AutoGPT.
They are also working on an extensible plugin ecosystem called the Abilities Hub, also targeted at nontechnical users.

* Improving Efficacy. AutoGPT has many well-documented cases where it trips up: getting stuck in loops, using placeholders instead of actual content in commands, and making obvious mistakes like execute_code("write a cookbook"). The plan is a new design called Challenge Driven Development. Challenges are goal-oriented tasks or problems that Auto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks. (AI Maintainer was born out of one such challenge.) Itamar compared this with Software 1.0 (Test Driven Development) and Software 2.0 (Dataset Driven Development).
* Self-Improvement. Auto-GPT will analyze its own codebase and contribute to its own improvement. AI Safety (aka not-kill-everyone-ists) people like Connor Leahy might freak out at this, but for what it's worth we were pleasantly surprised to learn that Itamar and many other folks on the Auto-GPT team are equally concerned and mindful about x-risk as well.

The overwhelming theme of Auto-GPT's roadmap was accessibility - making AI Agents usable by all instead of the few.

Podcast Timestamps

* [00:00:00] Introductions
* [00:01:30] Itamar's background and previous startups
* [00:03:30] Vision for Codium AI: reaching "zero bugs"
* [00:06:00] Demo of Codium AI and how it works
* [00:15:30] Building on VS Code vs JetBrains
* [00:22:30] Future of software development and the role of developers
* [00:27:00] The vision of integrating natural language, testing, and code
* [00:30:00] Benchmarking AI models and choosing the right models for different tasks
* [00:39:00] Codium AI spec generation and editing
* [00:43:30] Reconciling differences in languages between specs, tests, and code
* [00:52:30] The Israeli tech scene and startup culture
* [01:03:00] Lightning Round

Show Notes

* Codium AI
* Visualead
* AutoGPT
* StarCoder
* TDD (Test-Driven Development)
* AST (Abstract Syntax Tree)
* LangChain
* ICON
* AI21

Transcript

Alessio: [00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.

Swyx: Today we have a special guest, Itamar Friedman, all the way from Tel Aviv, CEO and co-founder of Codium AI. Welcome.

Itamar: Hey, great being here. Thank you for inviting me.

Swyx: You like the studio? It's nice, right?

Itamar: Yeah, it's awesome.

Swyx: So I'm gonna introduce your background a little bit and then we'll learn a bit more about who you are. So you graduated from the Technion, Israel Institute of Technology, which is kind of like the MIT of Israel. You did a BS in CS, and then you also did a Master's in Computer Vision, which is kind of relevant. You had other startups before this, but your sort of claim to fame is Visualead, which you started in 2011 and got acquired by Alibaba Group. You showed me your website, which does the sort of QR codes with different forms of visibility. And in China that's a huge, huge deal. It's starting to become a bigger deal in the west. My favorite anecdote that you told me was something about how much in sales you saved or something. I forget what the number was.

Itamar: Generally speaking, there are a lot of peer-to-peer transactions going on, like payments, in China with QR codes.
So basically if, for example, 5% of the scans do not work and with our scanner we [00:01:30] reduce it to 4%, that's a lot of money. Could be tens of millions of dollars a day.

Swyx: And at the scale of Alibaba, it serves all of China. It's crazy. You did that for seven years and you were at Alibaba until 2021, when you took some time off and then hooked up with Dedy, who you've known for 25 years, to start Codium AI, and you just raised your $11 million seed round with TLV Partners and Vine Ventures. Congrats. Should we go right into Codium? What is Codium?

Itamar: So we are an AI coding assistant / agent to help developers reach zero bugs. We don't do that today; right now, we help to reduce the amount of bugs. Actually you can see people commenting on our marketplace page saying that they found bugs with our tool, and that's like our premise. Our vision is, like Tesla's zero emission or something like that, for us it's zero bugs. We started with building an IDE extension, either in VS Code or in JetBrains, that works alongside the main panel where you write your code, and I can show later what we do: analyze the code, whether you started writing it or you completed it. You can go either TDD (Test-Driven Development) or classical coding. And we offer analysis and tests, whether they pass or not; we further self-debug [00:03:00] them and make suggestions, eventually helping to improve the code quality, specifically on code logic testing.

Alessio: How did you get there? Obviously it's a great idea. Like, what was the idea maze? How did you get here?

Itamar: I'll go back long. So yes, I was two and a half times a CTO, a VC-backed startup CTO, and we talked about the last one that I sold to Alibaba. But basically, it's weird to say after 20 years already as an R&D manager, I'm not like the best programmer, because like you mentioned, I'm coming more from the machine learning / computer vision side: one of the main applications, but a lot of optimization. So I'm not necessarily the best coder, but I am a 20-year R&D manager. And I found that verifying code logic is a very hard thing, and one of the things that really makes it difficult to increase development velocity. You have tools related to checking performance. You have tools for vulnerabilities and security; Israelis are really good at that. But do you have a tool that actually helps you test code logic? I think we have dozens or hundreds, even thousands, that help you on the end-to-end, maybe on the microservice integration system. But when you talk about the code level, there isn't anything. So that was the pain I always had, especially since I did have tools for that on the hardware side: I worked at Mellanox, later sold to Nvidia, as a student, and we had formal tools, et cetera. [00:04:30] So that's one part. The second thing is that after being sold to Alibaba, the team and I, quite a big team that worked on machine learning, large language models, et cetera, were building developer tools related to LLMs throughout the golden years of 2017 to 2021-2022.
And we saw how powerful they became. So basically, if I frame it this way: because we developed them for so many use cases, we saw that if you're able to take a problem and put a framework of a language around it, whether it's analyzing browsing behavior, or DNA, or etc, then LLMs take you really far. And then I thought: this problem that I have with code logic testing is basically a combination of a few languages: natural language, specification language, technical language, even visual language to some extent. And then I quit Alibaba and took a bit of time to wrap things up and rest a bit after 20 years of startup and corporate, and joined with my partner Dedy Kredo, who was my very first employee. And that's how we came to this idea.

Alessio: The idea has obviously been around, and most people have done AST analysis, kind of like an abstract syntax tree, but it's kind of hard to get there with just that. But I think these models now are getting good enough where you can mix that and also traditional logical reasoning.

Itamar: Exactly.

Alessio: Maybe talk a little bit more about the technical implementation of it. You mentioned the agent [00:06:00] part. You mentioned some of the model part. What happens behind the scenes when Codium gets into your code base?

Itamar: First of all, I wanna mention I think you're really accurate. If you try to take a large language model as-is and ask it, can you analyze and test this code, it'll not work so good. By itself it's not good enough. On the other side are all the traditional techniques we already started to invent since the Greek times: logical stuff, you mentioned ASTs, but there's also dynamic code analysis, mutation testing, etc. There are a lot of techniques out there, but they have inefficiencies. And a lot of those inefficiencies actually match up with AI capabilities. Let me give you one example. Let's say you wanna do fuzzy testing or mutation testing. Mutation testing means that you either mutate the test, like the input of the test, the code of the test, etc, or you mutate the code, in order to check how good your test suite is. For example, if I mutate some equation in the application code and the tests find the bug, and they do that at a really high rate, like out of 100 mutations I [00:07:30] find all 100 problems with the tests, it's probably a very strong test suite. Now the problem is that there are so many options for what to mutate in the data, in the test. And this is where, for example, AI could help, like pointing out the best thing that you can mutate. Actually, I think it's a very good use case. Why? Because even if AI is not 100% accurate, even if it's 80% accurate, it could really take you quite far rather than just randomly selecting things.
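For readers who want to see the mutation-testing idea in code, here is a minimal sketch in Python: a toy function, a two-test "suite", and a single injected operator mutation. Everything here (the names, the AST transform) is our own illustration under those assumptions, not Codium's implementation.

```python
# Minimal mutation-testing sketch: mutate one operator in the code under
# test and check whether the existing suite "kills" the mutant.
import ast

SOURCE = "def add_discount(price, discount):\n    return price - discount\n"

class FlipSubToAdd(ast.NodeTransformer):
    """Mutate the first '-' operator into '+' (one classic mutation)."""
    def __init__(self):
        self.done = False
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and isinstance(node.op, ast.Sub):
            node.op = ast.Add()
            self.done = True
        return node

def load(source: str):
    """Compile a module source string and return its add_discount function."""
    namespace = {}
    exec(compile(ast.parse(source), "<module>", "exec"), namespace)
    return namespace["add_discount"]

def load_mutant(source: str):
    """Same as load(), but with the operator mutation injected first."""
    tree = ast.fix_missing_locations(FlipSubToAdd().visit(ast.parse(source)))
    namespace = {}
    exec(compile(tree, "<mutant>", "exec"), namespace)
    return namespace["add_discount"]

# A tiny "suite": each test returns True if it passes.
TESTS = [
    lambda fn: fn(100, 10) == 90,   # discount reduces the price
    lambda fn: fn(100, 0) == 100,   # zero discount is a no-op
]

original, mutant = load(SOURCE), load_mutant(SOURCE)
print("original passes all tests:", all(t(original) for t in TESTS))   # True
# A strong suite kills the mutant: at least one test fails against it.
print("mutant killed by the suite:", not all(t(mutant) for t in TESTS))  # True
```

The combinatorial problem Itamar points at shows up even at this scale: real code offers thousands of candidate mutations, which is where a model that ranks promising mutations, even at 80% accuracy, beats random selection.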
So if I wrap up, just going back high level: I think LLMs by themselves cannot really do the job of verifying code logic, and neither can the traditional techniques, so you need to merge them. But then one more thing before maybe you tell me where to double click: I think with code logic there's also a philosophy question here. Logic is different from performance or quality. If I wrote a for loop over three things that could be folded into some vector operation, like in Python, that's a quality issue. For logic, we need to get into the mind of the developer: what was the intention? Bad code, in this sense, is not ugly code; it's code whose logic doesn't match the specification.

So I think one more thing that AI could really help with is matching: if there is some natural language description of the code, we can match it against the code. Or if there's information missing from the natural language that needs [00:09:00] to be asked for, the AI could help by asking the user. It's not a closed solution, rather an open one, leaving the developer as the lead. It's about moving the developer from being the coder to being like a pilot that clicks a button and says, ah, this is what I meant, or this is the fix, rather than actually writing all the code.

Alessio: That makes sense. I think I talked about it on the podcast before, but it's the switch from syntax to semantics. Developers used to be focused on the syntax and not the meaning of what they're writing. So now you have the models that are really good at the syntax, and you as a human are supposed to be really good at the semantics of what you're trying to build. How does it practically work? So I'm a software developer, I want to use Codium. How do I start, and then how do you make that happen in the background?

Itamar: So, like I said, Codium right now is an IDE extension. For example, I'm showing VS Code. If you just install it, you'll have a few access points to start Codium AI: this sidebar, or a small button above every component or class that we think is very good to check with Codium. There's another way, you can mark specific code, right-click and run Codium, but this one is my favorite, because we actually choose the components above which we suggest to use Codium. So once I click it, Codium starts analyzing this class, and not only this class, but almost everything that is [00:10:30] being used by the CallCenter class, everything that CallCenter is calling. So we do static code analysis, et cetera, what we talked about. And then Codium provides the code analysis. It's right now static, you can't edit it, and maybe later we'll talk about that. This is what we call the specification, and we're going to make it editable so you can add additional behaviors, then create, accordingly, tests that will not pass, and then the code will change accordingly. So that's one entrance point, via natural language description; that's one of the things we're working on right now. What I'm showing you, by the way, can be downloaded as-is; it's what we have in production. The second thing we show here is a full test suite. There are six tests by default, but you can generate more, almost as many as you want, and every time we'll try to cover something else: a happy path, edge cases, et cetera. You can talk with specific tests, okay? You can suggest, I want this in Spanish, or give a few languages, or I want many more employees. I didn't go over what a call center is, but basically this class manages a call center. So you can imagine, I can ask to make it more rigorous, etc, but I don't wanna complicate things, so I'm keeping it as-is.

I wanna show you the next one, which is run all tests. First, we verify with you that we're gonna run it. I don't know, maybe we are connected to the environment that is currently [00:12:00] configured in the IDE, maybe it's production for some reason, so we're making sure that you're aware we're gonna run the code. And then once we run, we show whether each test passes or fails. I hope that we'll have one fail. But I'm not sure it's that interesting.
So I'll go to another example soon, but just to show you what's going on here: we actually give an example of what the problem is, we give the log of the error, and then you can do whatever you want. You can fix it by yourself, or you can click "reflect and fix", and what goes on then is a bit of a longer process where we do chain of thought, or reflect and fix, and we can suggest a solution. You can run it, and in this case it passes. This is just a very simple example. Maybe later I'll show you a bug, I think I'll do that, and I'll show you how we recognize that it's actually not a problem in the test, it's a problem in the code, and then suggest you fix the code instead of the test. I think you see where I'm getting at.

The other thing is that there are a few code suggestions, and there could be a dozen types: they could be related to performance or modularity, and in this case I see there's a maintainability one. There could also be vulnerability or best-practice suggestions, or even suggestions for bugs, like if we think one of the tests is failing because of a bug. So bugs are also presented in the code suggestions. You can choose a few, if you like, and then prepare a code change: [00:13:30] we're making a diff now that you can apply on your code. So basically what we're seeing here is that there are three main tabs: the code, the tests, and the code analysis, let's call it the spec. And then there's a fourth tab, the code suggestions, if you wanna look at analytics, etc.

Now let's be frank: I wanted to show a simple example, so it's a call center. All the inputs to the class are relatively simple. There is no JSON input, like if you're Expedia or whatever, where you have a JSON with the hotels, Airbnb, you know, so the tests will be almost too simple, or not cover enough of your code, if you don't provide some valuable input like a JSON with all the information, or YAML or whatever. So you can actually add input data, and the AI or model (it's actually, by the way, a set of models and algorithms) will use that input to create interesting tests. Another thing is that many people have some reference tests they already made, either because they already wrote them or because they have a very specific picture of how they imagine a test. So they just write one, add it as a reference, and that will inspire all the rest of the tests. And also you can give hints. [00:15:00] These are, by the way, planned to be dynamic hints: for different types of code we will provide different hints, so we can help you become a bit more knowledgeable about how to test your code. So you can ask for a given-when-then style, or you can have the funny pirate one, like make a different joke for each test, or for example,

Swyx: I'm curious, why did you choose that one? This is the pirate one. Yeah. Interesting choice to put on your product.

Itamar: It could be like 11:00 PM, people sitting around: let's choose one funny thing.

Swyx: And yeah, so two serious ones and one funny one. Yeah.
Just for the listening audience, can you read out the other hints that you decided on as well?

Itamar: Yeah. So specifically for this case, a relatively very simple class, there's not much to do, but I'm gonna go to one more thing here in the configuration. It's basically the given-when-then style, one of the best practices in testing. Even when I report a bug in someone else's code, I usually wanna say: given, use this environment or use it this way; when, I run this function; et cetera. Then it's a very, very full report. And it's very common to use that in unit tests and performance tests.

Swyx: I have never been shown this format.

Itamar: I love that you mentioned that, because if you go to a CS undergrad you take so many courses in development, but probably none of them in testing, and it's so important. So why would you? And you don't go to Udemy or [00:16:30] whatever and do a testing course, right? It's boring. People either don't do component-level testing because they hate it, or they do it and they hate it. And I think part of it is because they're missing tools to make it fun. Also, usually you don't get yourself educated about it because you wanna write your code. And part of what we're trying to do here is help people get smarter about testing and make it easy. So this given-when-then is very common, and the idea is that for different types of code we'll suggest different types of hints to make you more knowledgeable. We're doing it in an educational way; we wanna help developers become smarter, more knowledgeable about this field.

And another one is mocks. Right now, our model decided that there's no need for a mock here, which is a good decision. But if we go to a real-world case (like, I'm part of the AutoGPT community, and there's all this tooling going on there, right?), maybe when I want to test a specific component, and it's relatively clear that going to the web, doing some search and coming back is not the part I want to test, since I know what I expect it to do, then I can mock the part that crawls the web. At a certain percentage of confidence, like around 90, we will decide this is worth mocking, and we will inject it. I can click it now and force our system to mock this, but you'll see a bit of a stupid mock, because it really doesn't make sense here. So I chose this pirate stuff, add a funny pirate-like docstring, make a different joke for each test, and I forced it to add mocks. [00:18:00] The tests were deleted and now we're creating six new tests, and you see, here's the "shiver me timbers, the test checks the call is successful", and probably there's some joke at the end. So in this case, even when I tried to force it to mock, it didn't really happen, because there's nothing to mock here; we might find it mocking stuff that really doesn't make sense.

So that's one thing. I can show a demo where we actually catch a bug. And I really love that. You know how it is building developer tools: the best thing you can see is developers that you don't know giving you five stars and sharing stuff. We have a Discord with thousands of users, but I love the individual reports the most. This was one of my favorites: it helped someone find two bugs. I mentioned our vision is to reach zero bugs. If you may say, we want to clean the internet from bugs.

Swyx: So debugging the internet. I have my podcast title.
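For listeners who, like Swyx, have never been shown the given-when-then format, here is what such a test can look like in Python, with a mock standing in for the part you don't want to exercise. The CallCenter class below is a hypothetical stand-in for the one in the demo, not the tool's actual output.

```python
# A given-when-then style unit test with a mock (run with pytest).
# CallCenter is a toy stand-in for the class shown in the demo.
from dataclasses import dataclass, field
from unittest.mock import MagicMock

@dataclass
class CallCenter:
    agents: list = field(default_factory=list)

    def route_call(self, caller_id: str):
        # Assign the first free agent; None if everyone is busy.
        for agent in self.agents:
            if agent.is_free:
                agent.take_call(caller_id)
                return agent
        return None

def test_route_call_to_free_agent():
    # Given: a call center with one free agent (mocked, no real telephony)
    agent = MagicMock(is_free=True)
    center = CallCenter(agents=[agent])
    # When: a call comes in
    assigned = center.route_call(caller_id="+1-555-0100")
    # Then: that agent takes the call exactly once
    assert assigned is agent
    agent.take_call.assert_called_once_with("+1-555-0100")
```

The mock plays the role Itamar describes: the test pins down the routing logic without touching whatever a real agent object would do when it takes a call.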
Itamar: So I think, if we move to another example,

Swyx: Yes, yes, please. This is great.

Itamar: I'm moving to a different example: the bank account. By the way, if you go to ChatGPT, you can ask what's the difference between Codium AI and using ChatGPT; I'm giving you this hard question later. So if you ask ChatGPT to give you an example of code to test, it might give you this bank account. It's the 101 stuff, right? And one of the reasons I chose it is because it's easy to inject bugs here that are easy to understand [00:19:30] anyway. And what I'm gonna do right now with this bank account is change the deposit from plus to minus, as an example. And then I'm gonna run Codium similarly to how I did before; it suggests to do that for the entire class. And then there is the code analysis. And when we announce, very soon, part of this podcast, it's going to have more features here in the code analysis; we're gonna talk about it. And then there are the tests that I can run. And the question is whether we're gonna catch the bug by running the tests. Because who knows, maybe this implementation is the right one, right? You need to converse with the developer; maybe in this weird bank, you deposit and the bank takes money from you. And we could talk about how this happens, but actually you can see already here that we are suggesting a hint that something is wrong, and here's a suggestion to change it from minus to plus. And if we try reflect-and-fix, we will see the model actually telling you: hey, maybe this is not a bug in the test, maybe it's in the code.
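Here is a reconstruction of that bank-account demo in plain Python: the injected plus-to-minus bug and a behavioral test that catches it. The class and test names are our own, not Codium's generated output.

```python
# The classic bank-account example with the bug Itamar injects in the demo.
class BankAccount:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount < 0:
            raise ValueError("deposit must be non-negative")
        self.balance -= amount  # BUG: injected mutation, should be +=
        return self.balance

def test_deposit_increases_balance():
    account = BankAccount(balance=100)
    account.deposit(50)
    # Fails against the buggy code above: balance is 50, not 150.
    # This is the case where a reflect-and-fix pass should conclude
    # the test is right and flag the code, not the test.
    assert account.balance == 150
```

The interesting part of the demo is exactly that last judgment call: a failing test alone doesn't say which side is wrong, so the tool has to reason about intent before proposing the minus-to-plus fix.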
And that's what we're getting. And the growth is about like every two, three weeks we double the amount of weekly and downloads. It's still very early, like seven weeks. So I don't know if it'll keep that way, but we hope so. Well [00:22:30] actually I hope that it'll be much more double every two, three weeks maybe. Thanks to the podcast.Swyx: Well, we, yeah, we'll, we'll add you know, a few thousand hopefully. The reason I ask this is because I think there's a lot of organic growth that people are sharing it with their friends and also I think you've also learned a lot from your earliest days in, in the private beta test.Like what have you learned since launching about how people want to use these testing tools?Itamar: One thing I didn't share with you is like, when you say virality, there is like inter virality and intra virality. Okay. Like within the company and outside the company. So which teams are using us? I can't say, but I can tell you that a lot of San Francisco companies are using us.And one of the things like I'm really surprised is that one team, I saw one user two weeks ago, I was so happy. And then I came yesterday and I saw 48 of that company. So what I'm trying to say to be frank is that we see more intra virality right now than inter virality. I don't see like video being shared all around Twitter. See what's going on here. Yeah. But I do see, like people share within the company, you need to use it because it's really helpful with productivity and it's something that we will work about the [00:24:00] inter virality.But to be frank, first I wanna make sure that it's helpful for developers. So I care more about intra virality and that we see working really well, because that means that tool is useful. So I'm telling to my colleague, sharing it on, on Twitter means that I also feel that it will make me cool or make me, and that's something maybe we'll need, still need, like testing.Swyx: You know, I don't, well, you're working on that. We're gonna announce something like that. Yeah. You are generating these tests, you know, based on what I saw there. You're generating these tests basically based on the name of the functions. And the doc strings, I guess?Itamar:So I think like if you obfuscate the entire code, like our accuracy will drop by 50%. So it's right. We're using a lot of hints that you see there. Like for example, the functioning, the dog string, the, the variable names et cetera. It doesn't have to be perfect, but it has a lot of hints.By the way. In some cases, in the code suggestion, we will actually suggest renaming some of the stuff that will sync, that will help us. Like there's suge renaming suggestion, for example. Usually in this case, instead of calling this variable is client and of course you'll see is “preferred client” because basically it gives a different commission for that.So we do suggest it because if you accept it, it also means it will be easier for our model or system to keep improving.Swyx: Is that a different model?Itamar: Okay. That brings a bit to the topic of models properties. Yeah. I'll share it really quickly because Take us off. Yes. It's relevant. Take us off. Off. Might take us off road.I think [00:25:30] like different models are better on different properties, for example, how obedient you are to instruction, how good you are to prompt forcing, like to format forcing. 
like I want the results to be in a certain format, or how accurate you are, or how good you are at understanding code. There are so many calls happening here to models, by the way. Just by clicking once (hey Codium AI, can you help me with this bank account?) we do a dozen different calls, and with each feature you click, like that reflect-and-fix, we choose the best one. I'm not talking about hundreds of models, but we could use different APIs of OpenAI, for example, and other models, et cetera. So basically, different models are better on different aspects. Going back to what we talked about: all the models will benefit from having those hints, whether in the code itself or in documentation, et cetera. And in the code analysis, we also consider the code analysis to be the ground truth, to some extent, and soon we're also going to allow you to edit it, and we'll use that as well.
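A hedged sketch of what that per-task routing could look like in code. The model names, the property comments, and the complete helper are placeholders for illustration; they are not Codium's internals and not real endpoints.

```python
# Toy per-task model routing: pick the model benchmarked best for the
# property each task needs. All names here are hypothetical placeholders.
TASK_TO_MODEL = {
    "explain_code":   "model-strong-semantics",  # best code understanding
    "emit_json_spec": "model-strict-format",     # best at format forcing
    "write_tests":    "model-code-generation",   # best raw code generation
}

def complete(model: str, prompt: str) -> str:
    """Placeholder for a call to whichever provider hosts `model`."""
    raise NotImplementedError("wire this to your provider of choice")

def run_task(task: str, prompt: str) -> str:
    model = TASK_TO_MODEL[task]  # routing decided by offline benchmarks
    return complete(model, prompt)
```

The design point is that the routing table is rebuilt whenever a new model is benchmarked, so a single user click can fan out into a dozen calls against different entries of it.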
This is also, by the way, what we did in in Alibaba.Even when I had like half a million dollar a month for trading one foundational model, I would never start this way. You always try like first using the best model you can for your product. Then understanding what's the glass ceiling for that model? Then fine tune a foundation model, reach a higher glass ceiling and then training your own.That's what we're aiming and that's what I suggest other developers like, don't necessarily take a model and, and say, oh, it's so easy these days to do RLHF, et cetera. Like I see it's like only $600. Yeah, but what are you trying to optimize for? The properties. Don't try to like certain models first, organize your challenges.Understand the [00:30:00] properties you're aiming for and start playing with that. And only then go to train your own model.Alessio: Yeah. And when you say benchmark, you know, we did a one hour long episode, some benchmarks, there's like many of them. Are you building some unique evals to like your own problems? Like how are you doing that? And that's also work for your future model building, obviously, having good benchmarks. Yeah.Itamar:. Yeah. That's very interesting. So first of all, with all the respect, I think like we're dealing with ML benchmark for hundreds of years now.I'm, I'm kidding. But like for tens of years, right? Benchmarking statistical creatures is something that, that we're doing for a long time. I think what's new here is the generative part. It's an open challenge to some extent. And therefore, like maybe we need to re rethink some of the way we benchmark.And one of the notions that I really believe in, I don't have a proof for that, is like create a benchmark in levels. Let's say you create a benchmark from level one to 10, and it's a property based benchmark. Let's say I have a WebGPT ask something from the internet and then it should fetch it for me.So challenge level one could be, I'm asking it and it brings me something. Level number two could be I'm asking it and it has a certain structure. Let's say for example, I want to test AutoGPT. Okay. And I'm asking it to summarize what's the best cocktail I could have for this season in San Francisco.So [00:31:30] I would expect, like, for example, for that model to go. This is my I what I think to search the internet and do a certain thing. So level number three could be that I want to check that as part of this request. It uses a certain tools level five, you can add to that. I expect that it'll bring me back something like relevance and level nine it actually prints the cocktail for me I taste it and it's good. So, so I think like how I see it is like we need to have data sets similar to before and make sure that we not fine tuning the model the same way we test it. So we have one challenges that we fine tune over, right? And few challenges that we don't.And the new concept may is having those level which are property based, which is something that we know from software testing and less for ML. And this is where I think that these two concepts merge.Swyx: Maybe Codium can do ML testing in the future as well.Itamar: Yeah, that's a good idea.Swyx: Okay. I wanted to cover a little bit more about Codium in the present and then we'll go into the slides that you have.So you have some UI/UX stuff and you've obviously VS Code is the majority market share at this point of IDE, but you also have IntelliJ right?Itamar: Jet Brains in general.Swyx: Yeah. Anything that you learned supporting JetBrains stuff? 
Swyx: Maybe Codium can do ML testing in the future as well.

Itamar: Yeah, that's a good idea.

Swyx: Okay. I wanted to cover a little bit more about Codium in the present, and then we'll go into the slides that you have. So you have some UI/UX stuff, and obviously VS Code has the majority market share of IDEs at this point, but you also have IntelliJ, right?

Itamar: JetBrains in general.

Swyx: Yeah. Anything that you learned supporting the JetBrains stuff? You were very passionate about this one user who left you a negative review. What is the challenge there? How do you think about the market? Maybe you should focus on VS Code since it's so popular?

Itamar: Yeah. [00:33:00] So currently the VS Code extension is leading over JetBrains, and it was for a long time (and when I say a long time, it could be like two or three weeks) at version 0.5.x in VS Code while only 0.4 or so on JetBrains, and we really saw the difference in how people react. We knew 0.5 was much more meaningful, and one of the developers left three stars on JetBrains, and I really remember that. I love that. What do you want to get at our stage? You want that indication; the worst thing is getting nothing. I'm actually not sure it isn't better to get even the bad indication rather than only good ones, to be frank, at least at our stage; we're a nine-, ten-month-old startup. So, generally speaking, we find it easier and more fun to develop the VS Code extension than the JetBrains one, although JetBrains has a very nice property: when you develop an extension for one of the IDEs, it usually works well for all the others; it's one extension for PyCharm, et cetera. I think there's even more flexibility in VS Code; for example, this app is a React extension, as opposed to the JetBrains one we're using, which is native. What I learned is that it's almost like [00:34:30] developing for Android and iOS, where you want all the best practices of having one backend: one backend version v1 supporting both Android and iOS, not different backends, because that's crazy, and then you need all the methodology around it, like what it means to move from v1 to v1.1 on the backend, what supports what. You know what I'm talking about if you've developed things like that in the past. It's important, because you don't want one developer on the same team working with JetBrains and the other with VS Code, and they're talking and going: whoa, that's not what I'm seeing, what are you talking about? And in the future we're also gonna have a teams offering for collaboration. Right now, if you close the Codium tab, everything is lost except the test code, which you can save: if I go back to a test suite and do "open as a file", you have a test file with everything that you can just save. But all the other goodies are lost. One day we're gonna have a platform where you can save all that, collaborate with people, have it as part of your PR, like suggestions as part of your PR. And then you want alignment. So one of the UX/UI challenges is that when you think about a feature, it should somehow fit both platforms. With iOS and Android, by the way, you sometimes don't care about parity, but here you're talking about developers that might be on the same [00:36:00] team, so you do care a lot about that.
So maybe take us through what you see the future of software development look like.Itamar: Well, that's great and also like related to our announcement, what we're working on.Part of it you already start seeing in my, in my demo before, but now I'll put it into a framework. I'll be clearer. So I think like the software development world in 2025 is gonna look very different from 2020. Very different. By the way. I think 2020 is different from 2000. I liked the web development in 95, so I needed to choose geocities and things like that.Today's much easier to build a web app and whatever, one of the cloud. So, but I think 2025 is gonna look very different in 2020 for the traditional coding. And that's like a paradigm I don't think will, will change too much in the last few years. And, and I'm gonna go over that when I, when I'm talking about, so j just to focus, I'm gonna show you like how I think the intelligence software development world look like, but I'm gonna put it in the lens of Codium AI.We are focused on code integrity. We care that with all this advancement of co-generation, et cetera, we wanna make sure that developers can code fast with confidence. That they have confidence on generated code in the AI that they are using that. That's our focus. So I'm gonna put, put that like lens when I'm going to explain.So I think like traditional development. Today works like creating some spec for different companies, [00:37:30] different development teams. Could mean something else, could be something on Figma, something on Google Docs, something on Jira. And then usually you jump directly to code implementation. And then if you have the time or patience, or will, you do some testing.And I think like some people would say that it's better to do TDD, like not everyone. Some would say like, write spec, write your tests, make sure they're green, that they do not pass. Write your implementation until your test pass. Most people do not practice it. I think for just a few, a few reason, let them mention two.One, it's tedious and I wanna write my code like before I want my test. And I don't think, and, and the second is, I think like we're missing tools to make it possible. And what we are advocating, what I'm going to explain is actually neither. Okay. It's very, I want to say it's very important. So here's how we think that the future of development pipeline or process is gonna look like.I'm gonna redo it in steps. So, first thing I think there do I wanna say that they're gonna be coding assistance and coding agents. Assistant is like co-pilot, for example, and agents is something that you give it a goal or a task and actually chains a few tasks together to complete your goal.Let's have that in mind. So I think like, What's happening right now when you saw our demo is what I presented a few minutes ago, is that you start with an implementation and we create spec for you and test for you. And that was like a agent, like you didn't converse with it, you just [00:39:00] click a button.And, and we did a, a chain of thought, like to create these, that's why it's it's an agent. And then we gave you an assistant to change tests, like you can converse it with it et cetera. So that's like what I presented today. What we're announcing is about a vision that we called the DRY. Don't repeat yourself. I'm gonna get to that when I'm, when I'm gonna show you the entire vision. But first I wanna show you an intermediate step that what we're going to release. So right now you can write your code. 
or part of it, for example just an abstract class or so, with a coding assistant like Copilot, and maybe in the future a Codium AI coding assistant. And then you can create a spec, as I already presented to you. The next thing is that you're going to have a spec assistant to generate a technical spec, helping you fill it quickly, focused on that. This is something we're working on, and we're going to release the first feature very soon as part of the announcement. It's gonna be very lean, okay? We're a startup going bottom-up, from lean features to more and more comprehensive ones. And then, once you have the spec and implementation, you can either go from implementation to tests, then run the tests and fix them like I presented to you, or you can also go from spec to tests, okay? From the spec directly to tests. [00:40:30]

So now you have a really interesting thing going on: you can start from spec, create tests, create code. You can start from tests, create code. You can start from an implementation: from code, create spec and tests. And actually we think the future is a very flexible one; you don't need to choose whether you're practicing traditional TDD or whatever you wanna start with. If you already have some spec (say at some point in one sprint you decided to write a spec because you wanted to align on it with your team), now you can go and create tests and implementation. Or, if you wanted to run ahead and write your code, creating tests and a spec that align with it will be relatively easy.

So what I'm talking about is an extreme DRY concept; DRY is "don't repeat yourself". Until today, when we talked about DRY, it was: don't repeat your code. I claim that there are big parts of the spec, tests, and implementation that repeat each other. It's not a complete repetition, because if the spec were as detailed as the implementation, it would actually be the implementation; the spec is usually in a different language, which could be natural language or visual. What we're aiming for, our vision, is enabling the DRY concept to the extreme, across all three: writing your tests will help you generate the code and the spec; writing your spec will help you generate the tests and implementation. Now, the developer is the driver, okay? There's gonna be a lot [00:42:00] of: what do you think about this? Is this what you meant? Yes or no? You wanna fix the code or the test? Click yes or no. You'll still be the driver, but there's gonna be extreme automation on the DRY level. So that's what we're announcing, what we're aiming for as our vision. And what we're providing these days in our product is the middle, what you see in the middle: our code integrity agents working for you right now in your IDE, but soon also part of your GitHub Actions, et cetera, helping you to align all these three.
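A toy illustration of that three-way alignment: the same behavior expressed as spec, implementation, and test. The example itself is ours; the hypothetical part is the tooling that generates any one of the three artifacts from the others.

```python
# "Extreme DRY": spec, implementation, and test all state the same behavior.
# In the envisioned tooling, editing any one would regenerate the others.
SPEC = "withdraw(amount) reduces the balance and rejects overdrafts"

def withdraw(balance: int, amount: int) -> int:
    """Implementation of the spec above."""
    if amount > balance:
        raise ValueError("overdraft rejected")
    return balance - amount

def test_withdraw_matches_spec():
    # Derived from SPEC: reduces the balance...
    assert withdraw(100, 30) == 70
    # ...and rejects overdrafts.
    try:
        withdraw(10, 30)
        assert False, "expected an overdraft rejection"
    except ValueError:
        pass
```

The repetition is deliberate and visible: the spec sentence, the guard clause, and the two assertions each restate one behavior, which is exactly the redundancy an aligner could exploit to flag an inconsistency in any of the three.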
Alessio: This is great. How do you reconcile the difference in languages? A lot of times the spec is maybe written by a PM, or somebody who's more at the product level, while some of the implementation details belong to backend developers for one thing, frontend for another. How do you help translate the language between the two? And then, in one of the posts on your blog, you mentioned that this is also maybe changing how programming languages themselves work. How do you see that changing in the future? Are people gonna start from English? Do you see a lot of them starting from code, with the English figured out for them?

Itamar: Yeah. So first of all, I wanna say that although we're working, as we speak, on managing frontend frameworks, languages, and usage, we are currently focused on the backend. So, for the spec, we won't let you input Figma, but don't be surprised if in 2024 the input for the spec can be a Figma. Actually, you can see [00:43:30] demos of that, like the pencil drawing from OpenAI when they unveiled GPT-4. So we will have that.

I wrote a blog post in which I related to two different posts. One, by a very knowledgeable and respected person, claims that English is going to be the new programming language and programming is dead. Another, by an equally respected person, says that English is a horrible programming language. And actually, I think both are correct. That's why, when I wrote the blog, I related to both, and this is what we're saying here: nothing is really fully redundant, but what's annoying is that to align these three you always need to work very hard, and that's where we want AI to help. And if there is an inconsistency, it will raise a question: which one is true? You just click yes or no, or test, or code (that's what you can see in our product) and we'll fix the right one accordingly. So I think English, visual language, code, and the test language, let's call it that for a second, all of them are going to persist; it's automation at the level of aligning all three that we're aiming for.

Swyx: You told me this before, so I'm just actually seeing Alessio's reaction to it for the first time.

Itamar: Yeah, like you're absorbing it.

Swyx: No, no. This is, I mean, you know, you can put your VC hat on: what is the most critical or unsolved question presented by this vision?

Alessio: A lot of these tools, especially ones we've seen a lot of in the past, it's the dynamic nature of all of this, you know? [00:45:00] Sometimes, as you mentioned, people don't have time to write the tests; sometimes people don't have time to write the spec. So sometimes you end up with things out of sync, or the implementation moving much faster than the spec, and you need some of these agents to make the call sometimes: no, okay, the spec needs to change, because clearly if you change the code this way, it needs to be like this in the future. I think my main question, as a software developer myself, is: what is our role in the future? How much should we intervene, and where? I've been coding for like 15 years, but if I'd been coding for two years, where should I spend the next year? Focus on getting better at understanding product and explaining it? Should I get better at syntax, so that I can write code? Would love to hear any thoughts.

Itamar: Yeah. You know, there's gonna be a difference between one to three years, three to six, six to ten, and ten to twenty. Let's for a second entertain the idea that programming is solved. Then we're talking about a machine that can actually create any piece of code; we're talking about the singularity, right? If the singularity happens, then we're talking about a whole new set of problems. Let's put that aside.
Even if it happens in 2041 (that's my prediction), I'm not sure you should plan what you need to do around when the singularity happens. I would aim for thinking about the future of the next five years or so. That's my recommendation, because beyond that it's so crazy. Anyway, maybe not the best recommendation; take it with a grain of salt, and please consult with a lawyer. At least in the scope of the next five years, the idea is that the developer is the driver. He or she has amazing team members: agents working for them. And because he or she is the driver, you need to understand especially what you're trying to achieve, and be able to review what you get. The better you are at the lower levels of programming (I mean real programming languages) in five years, the more sophisticated the software you'll be able to develop, and you'll work at companies that probably pay more for sophisticated software. And the less skilled you are at the actual programming, the more you would be the programmer of the new era, almost a creator. You'll still maybe look at the code level, testing, et cetera, but what's important for you is being able to convert product requirements, et cetera, into working software with tools like Codium AI. So I think there will be different types of developers. If you think about it for a second, it's a natural evolution; it's true today as well: if you know Linux or assembly really well, you'll probably work on LLVM, Nvidia, [00:48:00] things like that, right? So I think it'll be the next step. Again, I'm talking about the next five years; fifteen years is a new episode, if you would like to invite me.

Swyx: Oh, you'll be back.

Itamar: Yeah, a new episode about how I think the world will look when you really don't need a developer, and we will be there as Codium AI, as you can see.

Alessio: Do we wanna dive a little bit into AutoGPT? You mentioned you're part of the community.

Swyx: Obviously "Try, Catch, Finally, Repeat" is also part of the company motto.

Itamar: Yeah. So it actually really relates to what we're doing, and there's a reason we have a strong relationship and connection with the AutoGPT community and are part of it. As you can see, we've been talking about agents for a few months now, and we are building a designated, specific agent, because we're trying to build a product that works and earns the developer's trust. We're talking about code integrity; we need it to work. Even if it's not at 100% (and our product is not at 100%, by the way), the UX/UI should speak the language of: okay, we're not sure here, please take the driving seat; do you want this or that? But even if we're not close to 100%, we still need to work really well, just throwing a number, 90%. And so we're building really designated agents, like one that, from code, creates tests: it can create tests, run them, fix them. It's a few tasks chained together.
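That create-tests, run-them, fix-them loop is the shape of a designated agent, and it can be sketched in a few lines. In the sketch below, generate_tests and revise are hypothetical model calls, not Codium's or AutoGPT's actual APIs.

```python
# A hedged sketch of a designated "create tests, run them, fix them" agent.
# generate_tests and revise stand in for real model calls.
import os
import subprocess
import tempfile

def generate_tests(source_code: str) -> str:
    raise NotImplementedError("wire this to the code model of your choice")

def revise(test_code: str, failure_log: str) -> str:
    raise NotImplementedError("ask the model: is the bug in the test or the code?")

def run_tests(test_code: str) -> tuple[bool, str]:
    """Run a candidate test file under pytest and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix="_test.py", delete=False) as f:
        f.write(test_code)
        path = f.name
    try:
        proc = subprocess.run(["pytest", path, "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.unlink(path)

def designated_agent(source_code: str, max_rounds: int = 3) -> str:
    """Chain a few tasks: create tests, run them, fix them until green."""
    tests = generate_tests(source_code)
    for _ in range(max_rounds):
        ok, log = run_tests(tests)
        if ok:
            return tests             # green: hand the suite to the developer
        tests = revise(tests, log)   # reflect on the failure log and retry
    return tests                     # still red: surface it, developer decides
```

Note how narrow the loop is: one goal, one tool, a bounded retry count. That narrowness is the contrast with the swarm-of-general-agents approach discussed next.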
So we really believe in [00:49:30] building a designated agent, while AutoGPT is like a swarm of agents: general agents that supposedly you can ask, please make me rich by increasing my net worth, and now please be smart and knowledgeable enough to use a lot of agents and tools, et cetera, to make it work. So I think for the AutoGPT community it was less important to be very accurate at the beginning, and more important to show the promise and start building a framework that aims directly at the end game, improving from there. What we are doing is the other way around: we're building an agent that works, and building from there towards the target I explained before. But despite coming from different sides of the philosophy of how you need to build these things, we really love the general idea. So we caught it really early, with Toran, the maker of AutoGPT, building it, and I immediately started contributing. Guess what I contributed at the beginning? Tests, right? I started using Codium AI to build tests for AutoGPT, even finding problems this way, et cetera. So I became one of the, let's say, ten contributors, and as part of the core management team I talk very often with Toran on different aspects. And we are even gonna have a workshop,

Swyx: A very small [00:49:00] meeting.

Itamar: A work meeting workshop. And we're going to compete together in a hackathon, to show that AutoGPT can be useful while, for example, Codium AI is creating the tests for it, et cetera. So I'm part of that community, whether it's my team adding tests to it, or advising, or being in the management team, or helping Toran on really small things. He is an amazing leader, a visionary, and doing really well.
And we'll develop more feature, enable you, for example, to run an entire re, but, but it's not open source. So about the open source I think like AutoGPT or LangChain, you can't really like ask please improve my repository, make it better.I don't think it will work right now because because let me like. Softly quote Ilya from Open AI. He said, like right now, let's say that a certain LLM is 95% accurate. Now you're, you're concatenating the results. So the accuracy is one point like it's, it's decaying. And what you need is like more engineering frameworks and work to be done there in order to be able to deal with inaccuracies, et cetera.And that's what we specialize in Codium, but I wanna say that I'm not saying that Auto GPT won't be able to get there. Like the more tools and that going to be added, the [00:52:30] more prompt engineering that is dedicated for this, this idea will be added by the way, where I'm talking with Toran, that Codium, for example, would be one of the agents for Auto GPT.Think about it AutoGPT is not, is there for any goal, like increase my net worth, though not focused as us on fixing or improving code. We might be another agent, by the way. We might also be, we're working on it as a plugin for ChatGPT. We're actually almost finished with it. So that's like I think how it's gonna be done.Again, open opensource, not something we're thinking about. We wanted to be really good before weSwyx: opensource it. That was all very impressive. Your vision is actually very encouraging as well, and I, I'm very excited to try it out myself. I'm just curious on the Israel side of things, right? Like you, you're visiting San Francisco for a two week trip for this special program you can tell us about. But also I think a lot of American developers have heard that, you know, Israel has a really good tech scene. Mostly it's just security startups. You know, I did some, I was in some special unit in the I D F and like, you know, I come out and like, I'm doing the same thing again, but like, you know, for enterprises but maybe just something like, describe for, for the rest of the world.It's like, What is the Israeli tech scene like? What is this program that you're on and what shouldItamar: people know? So I think like Israel is the most condensed startup per capita. I think we're number one really? Or, or startup pair square meter. I think, I think we're number one as well because of these properties actually there is a very strong community and like everyone are around, like are [00:57:00] working in a.An entrepreneur or working in a startup. And when you go to the bar or the coffee, you hear if it's 20, 21, people talking about secondary, if it's 2023 talking about like how amazing Geni is, but everyone are like whatever are around you are like in, in the scene. And, and that's like a lot of networking and data propagation, I think.Somehow similar here to, to the Bay Area in San Francisco that it helps, right. So I think that's one of our strong points. You mentioned some others. I'm not saying that it doesn't help. Yes. And being in the like idf, the army, that age of 19, you go and start dealing with technology like very advanced one, that, that helps a lot.And then going back to the community, there's this community like is all over the world. And for example, there is this program called Icon. 
It's basically a program that Israelis in the Valley created for Israelis from Israel to come over; it's called Silicon Valley 101, to learn what's going on here, because with all due respect to the tech scene in Israel, here is the real thing, right? It's a non-profit organization run by Israelis who moved here. It brings you over, and then brings in people from a16z, or Google, or Navan, amazing people from unicorns, up-and-coming startups, or accelerators, who give up-to-date talks and also connect you to relevant people. That's why I'm here, in addition, you know, to participating in this amazing podcast, et cetera.

Swyx: Yeah. Well, I think there's a lot of exciting tech talent in Tel Aviv, and I'm glad that your office is Israeli.

Itamar: One thing I wanted to add: of course, as we said, security is a very strong scene, but actually water purification, agri-tech, there are plenty of other things; it usually comes from necessity, since a big part of our country is desert. AI, by the way, is also big in Israel. For example, I think there's an Israeli competitor to OpenAI. I'm not saying it's as big, but it's AI21, which I think is among the most profound research labs. I love their work.

Swyx: I think we should try to talk to one of them. But yeah, when you and I met we connected a little bit over Singapore: I was in the Singapore army, you in the Israeli army. We do have a lot of connections between our countries, small countries without a lot of natural resources that have to make do in the world by figuring out some other services. I think the Singapore startup scene has not done as well as the Israeli startup scene, so I'm very interested in how small countries can have a world impact, essentially.

Itamar: It's a question we get asked a lot: why? For example, let's go to the soft skills. I don't think failing is treated as a bad thing. Sometimes VCs prefer to put money on an entrepreneur who failed in their first startup and then succeeded, because now that person knows what it means to fail and is very hungry to succeed. Generally there are a few reasons, and it's hard to put a finger on them exactly, but one other thing is that failure is not suppressed. This is my fourth company. The first wasn't a startup; it was a company I ran as a teenager. Then my first startup, my second company, had an amazing run but then a very beautiful collapse. And my third company, my second startup, eventually exited successfully to Alibaba. So there's a lot of trial and error, and it's appreciated, not suppressed. I guess that's one of the reasons.

Alessio: Wanna jump into the lightning round?

Swyx: Yes. I think we sent you the prep, but there are just three questions now; we've actually reduced it quite a bit.

Alessio: And we can read them so you can take time to answer; you don't have to right away.
First question: what's something that's already happening in AI that you thought would take much longer?

Itamar: OK, so, and I hope it doesn't sound arrogant,
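A back-of-the-envelope illustration of the compounding-error point Itamar attributes to Ilya above: if each step in an agent chain is right 95% of the time, and we assume (simplistically, since real failures correlate) that steps fail independently, end-to-end accuracy decays geometrically with chain length. A minimal Go sketch, with the 95% figure taken from the conversation:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Per-step accuracy, as in the quote; errors assumed independent.
	perStep := 0.95
	for _, steps := range []int{1, 5, 10, 20} {
		// End-to-end accuracy is the product of per-step accuracies.
		endToEnd := math.Pow(perStep, float64(steps))
		fmt.Printf("%2d chained steps -> %.0f%% end-to-end accuracy\n",
			steps, endToEnd*100)
	}
}
```

Twenty chained steps at 95% land around 36%, which is the gap the "engineering frameworks" he mentions are meant to close.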

Cup o' Go
What the ʕ◔ϖ◔ʔ? New merch, TDD book interview with Adelina Simion, and more

Cup o' Go

Play Episode Listen Later May 8, 2023 60:40


- Check out our new Merch store and buy your very own Cup o' Go coffee mug or sticker!
- Go 1.20.4 and Go 1.19.9 are released
- Conferences:
  - Go Conference 2023 Japan, online, June 2
  - GothamGo, New York City, June 9
- Proposals:
  - Likely decline: add new testing/cmp package
  - Retracted: add .ʕ◔ϖ◔ʔ as an alternate spelling of .go in file names
  - Ongoing discussion: add new package cmp, with Ordered, Min, Max (see the sketch after this list)
- Blog post: Template rendering in Go: a software optimization tale
- Automatic test runner: Gokiburi (and the older project, GoConvey)
- Blog post: The Bubbletea (TUI) State Machine pattern
- New project: Bunnify, a library for publishing and consuming AMQP events
- Interview with Adelina Simion:
  - Buy the book: Test-Driven Development in Go
  - Blog: adelinasimion.dev
  - Connect on LinkedIn or Twitter
  - Meetups: Women Who Go (London) and London Gophers
  - Speaking at GopherCon UK, August 16-18
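For the cmp proposal in the list above, here is a minimal sketch of the proposed API in use. It compiles against the cmp package as it later shipped in Go 1.21 (Ordered, Compare, Less); note that Min and Max ultimately landed as the builtins min and max rather than in the package, so the generic Min below is illustrative rather than the shipped API:

```go
package main

import (
	"cmp"
	"fmt"
)

// Min mirrors the Min helper discussed in the proposal; it works for any
// type satisfying the cmp.Ordered constraint (integers, floats, strings).
func Min[T cmp.Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 7))             // 3
	fmt.Println(Min("gopher", "cup"))  // cup
	fmt.Println(cmp.Compare(1.5, 0.5)) // 1: the three-way compare the package provides
}
```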

No Plans to Merge
Event sourcing should be named something cooler

No Plans to Merge

Play Episode Listen Later Apr 20, 2023 118:47


PodRocket - A web development podcast from LogRocket
Writing (really) good tests with Markus Oberlehner

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Apr 4, 2023 27:53


Markus Oberlehner, software architect, speaker, and open source contributor, comes onto the show to talk about how to write better tests with test-driven development. Links https://twitter.com/MaOberlehner https://markus.oberlehner.net https://markus.oberlehner.net/blog http://twitch.tv/webdevexplorer https://github.com/maoberlehner https://goodvuetests.com Tell us what you think of PodRocket. We want to hear from you! We want to know what you love and hate about the podcast. What do you want to hear more about? Who do you want to see on the show? Our producers want to know, and if you talk with us, we'll send you a $25 gift card! If you're interested, schedule a call with us (https://podrocket.logrocket.com/contact-us) or you can email producer Kate Trahan at kate@logrocket.com (mailto:kate@logrocket.com) Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Markus Oberlehner.

No Plans to Merge
Alpine is faster than React

No Plans to Merge

Play Episode Listen Later Mar 31, 2023 112:40


No Plans to Merge
PHP attributes are insanely cool

No Plans to Merge

Play Episode Listen Later Feb 24, 2023 118:24


This week we talk about whether to leave Eloquent support in Livewire and have some crazy fun brain blasts about PHP attributes and how we can use and abuse them.

No Plans to Merge
Writing is so good

No Plans to Merge

Play Episode Listen Later Dec 15, 2022 104:50


This week is jam-stack-packed full of fun meanderings all culminating in a passionate conversation about the virtues of writing. It's a good one.