In this episode of the Mob Mentality Show, we are excited to have Dave Copeland share his experiences in the world of agile development, focusing on the critical nuances behind well-known principles such as SRP (Single Responsibility Principle), YAGNI (You Aren't Gonna Need It), DRY (Don't Repeat Yourself), and the often-debated #NoEstimates approach. Drawing from his journey transitioning from government waterfall projects to agile methodologies at a startup, Dave kicks off a discussion with invaluable lessons on how teams can avoid both misunderstanding and misapplying agile aphorisms and the opposite pitfall of following them too woodenly.

### The Dangers of Following the Literal Words of Agile Aphorisms?

Have you ever seen a team stuck arguing over what SRP or SOLID (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion) truly means? Dave explains how teams can misinterpret these catchy phrases, leading to confusion, low cohesion, improper coupling, and poor decision-making. We dive into real-world examples from Dave's experience, discussing when it's okay to duplicate code and when it's not, the delicate balance between over-engineering and under-engineering, and the importance of nuance in agile practices.

### "Just Sharing" vs. Universal Recommendations

Is it wise to blindly follow every "recommendation" from an agile coach, or is there room for discussion, experimentation, and adaptation? With Dave, we tackle the common issue of semantic diffusion and explore how teams can navigate complex situations and adapt agile and lean principles to their unique contexts.

### Organizational Change and Safe Learning Environments

We bring in the "Reading Rainbow" analogy and other examples to illustrate how organizational change needs to be gradual, allowing for nuanced learning. We also emphasize the importance of creating an environment where team members can safely fail while being guided by experienced developers in real-world contexts. Whether you're scaling a team or trying to stack the deck with the right mix of skills, we share actionable strategies for fostering growth.

### Estimates, #NoEstimates, and Dealing with Uncertainty

The conversation gets even more interesting as we delve into the jarring #NoEstimates hashtag and its sometimes misunderstood implications. Dave brings up situations with real deadlines (e.g., seasonal deliveries, regulations), and we weigh in on ways to handle them, with or without estimates, that are less likely to lead to self-sabotage. We also discuss the impact of automation on estimates and what terms like "estimate" really mean. Continuous Delivery (CD) also takes center stage as we discuss how it, paired with the practice of "don't sell what you don't already have built," fosters trust and reduces uncertainty. We touch on the various unknowns that can arise in development, how CD can help mitigate them, and whether teams can benefit from an "#OptionalEstimates" mindset. Throughout, Dave provides practical advice on aligning practices with business goals and managing risks effectively.

### Coaching, Coding, and Higher-Level Roles

Finally, we explore Dave's thoughts on balancing hands-on coding with coaching responsibilities, especially in higher-level roles. How do you set expectations for coaches, and how can team composition shape the effectiveness of good practices? Whether you're actively writing code or stepping back to guide others, Dave shares examples for making both approaches work.
Don't miss this episode packed with deep dives into agile practices, team dynamics, and nuanced leadership. Be sure to subscribe to the Mob Mentality Show on your favorite platform to catch this episode and others like it! Video and Show Notes: https://youtu.be/IPFYe_oOFtI
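To ground the SRP debate the episode opens with, here is a small hypothetical Python sketch (our illustration, not an example from the show) of a class with mixed responsibilities and one way to split it so each piece has a single reason to change:

```python
# Before: one class tangles report logic, formatting, and file I/O.
class Report:
    def __init__(self, data: list[float]):
        self.data = data

    def total(self) -> float:
        return sum(self.data)

    def render_html(self) -> str:        # formatting concern
        return f"<p>Total: {self.total()}</p>"

    def save(self, path: str) -> None:   # persistence concern
        with open(path, "w") as f:
            f.write(self.render_html())

# After: an SRP-guided split. Changing the HTML no longer risks the math,
# and swapping file storage no longer touches rendering.
class ReportModel:
    def __init__(self, data: list[float]):
        self.data = data

    def total(self) -> float:
        return sum(self.data)

class HtmlRenderer:
    def render(self, report: ReportModel) -> str:
        return f"<p>Total: {report.total()}</p>"

def save_text(text: str, path: str) -> None:
    with open(path, "w") as f:
        f.write(text)
```

Whether the second version is an improvement or over-engineering depends on how the code actually changes over time, which is exactly the nuance the episode digs into.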
Meet two of Jam's newest engineers, Max & Aidan! In this episode they discuss what engineering onboarding is like at Jam, plus all the "boring" parts of software development: internal docs, codebase organization, local environments... As a company totally focused on bug capture, these topics are our Jam!

We discuss:
(00:13) Why we love talking about the "boring" parts of eng (they're not boring to us!)
(02:32) Finding alignment as the codebase and team get bigger
(05:06) A sneak peek at a new feature Aidan just built
(10:42) Max and Aidan debate how much DRY (Don't Repeat Yourself) is too much
(12:53) Dani shares the 7 different versions of Jam's UI before we found PMF (product-market fit)!
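For a concrete picture of the "how much DRY is too much" debate, here is a hypothetical Python sketch (ours, not from the episode): two similar-looking functions that are better left duplicated, because they encode different business rules, next to an over-DRY merge that couples them.

```python
# These two look duplicated, but the similarity is coincidental:
def shipping_fee(total: float) -> float:
    # Free shipping over $50: a logistics rule.
    return 0.0 if total >= 50 else 5.0

def loyalty_discount(total: float) -> float:
    # 10% off over $50: a marketing rule that changes seasonally.
    return total * 0.10 if total >= 50 else 0.0

# An over-DRY merge couples unrelated rules behind one shared threshold,
# so a change to either rule now risks breaking the other:
def pricing_rule(total: float, kind: str) -> float:
    threshold = 50  # shared by accident, not by design
    if total < threshold:
        return 5.0 if kind == "shipping" else 0.0
    return 0.0 if kind == "shipping" else total * 0.10
```

The usual test: deduplicate when two pieces of code express the same piece of knowledge, not merely the same characters.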
This week, we're proud to share a special episode of the CLIMB by VSC podcast with Jay Kapoor, featuring his interview with Sunil Nagaraj. CLIMB is a weekly podcast where Jay and VSC Ventures co-founder Vijay Chattha speak with leading investors, founders, and industry experts about lessons and perspectives on company-building in the world of climate tech and climate adaptation.

From CLIMB by VSC: We're honored to host Sunil Nagaraj, a visionary investor at the intersection of deep tech and climate innovation, and founding partner of Ubiquity Ventures. Nagaraj's unique perspective on "software beyond the screen" and his commitment to supporting early-stage, tech-driven startups are at the forefront of our discussion. We dive into the essence of nurturing technologies with disciplined financing and the strategic scaling of startups to mitigate climate change. Our conversation spans a variety of topics, from the complexities of venture capital in the climate tech space to the pivotal role of startups in driving substantial change. Sunil shares insights on the importance of focusing on high-impact areas and the potential pitfalls in the startup ecosystem, particularly emphasizing the significance of avoiding the "moonshot culture" that often hinders the progress of deep tech ventures.

About VSC Ventures: VSC Ventures is an early-stage venture capital fund led by Jay Kapoor and Vijay Chattha where we invest in early-stage startups and support our founders with hands-on PR, storytelling, and go-to-market work with help from our award-winning PR agency VSC.

Links discussed in this episode:
Moonshot Culture is Strangling Deeptech article: https://www.ubqt.vc/p/moonshot-culture-is-strangling-deeptech
Ubiquity University: https://university.ubiquity.vc/

Episode Highlights:
00:00 Conor Gaughan introduces CLIMB by VSC
02:42 Trailer
03:17 Introduction
04:01 What defines a startup as truly 'Nerdy and Early'?
07:11 What's the first question investors ask?
11:30 How do investors navigate unfamiliar markets?
16:17 The pros and cons of a singular investment strategy
21:49 How do 'crazy' ideas become genius investments
26:45 The 'Don't Repeat Yourself' mantra in Venture Capital
29:35 The cost of keeping secrets in Venture Capital
31:57 How trust accelerates organizational velocity
39:30 Should investors be more involved in early stage startups?
42:47 How can startups become better companies
47:19 The dangers of Moonshot Culture in Deep Tech
52:39 What's the one thing your startup should focus on?
55:07 Can technology truly turn the tide against climate change?

We'll be back next week with a brand new episode of Consensus in Conversation. In the meantime, make sure to follow the CLIMB by VSC podcast for more great perspectives on company-building. Connect with Conor Gaughan on linkedin.com/in/ckgone and threads.net/@ckgone. Have questions, or a great idea for a potential guest? Email us at CiC@consensus-digital.com. Consensus in Conversation is a podcast by Consensus Digital Media produced in association with Reasonable Volume. Hosted on Acast. See acast.com/privacy for more information.
In this week's episode, Rick & Kincy dive into the world of best practices and uncover some of the ways they can lead teams astray. From the pitfalls of autonomy to the misunderstood mantra of 'the customer is always right' and the nuance of Don't Repeat Yourself, get ready for a discussion that challenges conventional wisdom!
In this episode, we dive deep into the world of clean coding with none other than a master and pioneer of the field, Uncle Bob. We explore the nuances and the art behind writing effective and efficient code. This conversation covers the nitty-gritty of writing and editing code, from understanding how to break down large functions to discussing principles like the Single Responsibility Principle and the Dependency Inversion Principle, and how to balance the DRY (Don't Repeat Yourself) principle. Uncle Bob also shares valuable insights on testing, handling errors, naming conventions, and how to work with different types of duplication in code. He shares recommended resources and books that every coder should read.

Chapters:
00:00 Introduction and Welcome
00:06 The Importance of Code Quality
00:29 Introducing Robert Martin (Uncle Bob)
01:39 Uncle Bob's Journey in Programming
02:34 Discussion on Functional Design and New Book
03:52 The Evolution of Software Development
04:28 Revisiting the Clean Code Book
04:49 The Impact of Hardware Changes on Software
06:13 The Evolution of Programming Languages
07:33 The Importance of Code Structure and Organization
09:07 The Impact of Microservices and Open Source
11:14 The Role of Modular Programming
22:07 The Importance of Naming in Code
26:31 The Role of Functions in Code
34:12 The Role of Switch Statements in Code
42:36 The Importance of Immutability
51:00 Dealing with Complex Steps in Programming
51:21 Implementing State Machines in Programming
51:46 The Pragmatic Approach to Programming
53:01 Understanding Error Handling in Programming
54:08 The Challenge of Exception Handling
57:27 The Importance of Log Messages in Debugging
01:03:05 The Dilemma of Code Duplication
01:05:51 The Intricacies of Error Handling
01:07:40 The Role of Abstraction in Programming
01:13:55 The Importance of Testing in Programming
01:19:43 The Challenges of Mocking in Testing
01:25:11 The Essence of Programming: Discipline, Ethics, and Standards

Book Recommendations:
Tidy First?: https://www.oreilly.com/library/view/tidy-first/9781098151232/
Design Patterns: https://www.amazon.de/-/en/Erich-Gamma/dp/0201633612
Analysis Patterns: https://martinfowler.com/books/ap.html
Structured Analysis and System Specification: https://www.amazon.de/-/en/Tom-Demarco/dp/0138543801
Fundamental Algorithms: https://www.amazon.com/Art-Computer-Programming-Vol-Fundamental/dp/0201896834
Sorting and Searching: https://www.amazon.de/-/en/Donald-Knuth/dp/0201896850
Structure and Interpretation of Computer Programs: https://web.mit.edu/6.001/6.037/sicp.pdf

===============================================================================
For discounts on the courses below:
Appsync: https://appsyncmasterclass.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Testing serverless: https://testserverlessapps.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Production-Ready Serverless: https://productionreadyserverless.com/?affiliateId=41c07a65-24c8-4499-af3c-b853a3495003
Use the "Add Discount" button and enter the "geeknarrator" discount code to get a 20% discount.
===============================================================================

Follow me on LinkedIn and Twitter: https://www.linkedin.com/in/kaivalyaapte/ and https://twitter.com/thegeeknarrator

If you like this episode, please hit the like button and share it with your network. Also please subscribe if you haven't yet.
Database internals series: https://youtu.be/yV_Zp0Mi3xs

Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN

Stay Curious! Keep Learning!
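As a small illustration of the "break down large functions" theme from the Uncle Bob conversation above (our hypothetical sketch, not code from the episode), here is one function doing three jobs, refactored into small functions that each do one thing and are named for it:

```python
# Before: one function parses, validates, and persists an order line.
def process_order(raw: str) -> None:
    parts = raw.split(",")
    if len(parts) != 2 or not parts[1].strip().isdigit():
        raise ValueError(f"bad order line: {raw!r}")
    sku, qty = parts[0].strip(), int(parts[1])
    with open("orders.log", "a") as f:
        f.write(f"{sku}:{qty}\n")

# After: each step extracted into a small, intention-revealing function.
def parse_order(raw: str) -> tuple[str, int]:
    sku, qty = [p.strip() for p in raw.split(",")]
    return sku, int(qty)

def validate_order(sku: str, qty: int) -> None:
    if not sku or qty <= 0:
        raise ValueError(f"bad order: {sku!r} x {qty}")

def store_order(sku: str, qty: int, path: str = "orders.log") -> None:
    with open(path, "a") as f:
        f.write(f"{sku}:{qty}\n")

def process_order_clean(raw: str) -> None:
    sku, qty = parse_order(raw)
    validate_order(sku, qty)
    store_order(sku, qty)
```

The top-level function now reads like a table of contents, which is the payoff Uncle Bob argues for.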
Robby has a conversation with John Nunemaker, the owner of Box Out Sports and Fewer & Faster. They dive into the basics of maintaining software projects, highlighting the crucial importance of keeping dependencies and versions up to date. John shares his wealth of experience from his time at GitHub, shedding light on the delicate balance between exploring new architecture patterns and adhering to existing ones. They explore practical approaches to software challenges, emphasizing tools like Dependabot for efficient dependency management and the significance of evaluating the potential risks associated with changes in dependencies. John also provides valuable insights into releasing open source libraries, emphasizing the need to clearly communicate both expectations of the community and personal visions for the project. The discussion spans topics ranging from navigating the challenges of legacy code reviews to the gratification derived from seeking out and improving the darker corners of a codebase. The episode culminates with a discussion of personal satisfaction in project selection and the art of effectively marketing open source projects.

In essence, this episode of Maintainable not only unveils the intricacies of maintaining software projects but also offers practical wisdom on navigating challenges related to dependencies, legacy code, and personal project satisfaction. Listeners gain valuable insights into the strategic use of tools, the thoughtful release of open source projects, and the importance of continual improvement in the ever-evolving landscape of software development. If you're a software engineer seeking tangible approaches to enhance the maintainability of your projects, then don't miss this episode. Stay tuned!

Book Recommendations:
Hell Yeah or No: What's Worth Doing by Derek Sivers

Helpful Links:
Flipper Rubygem
Flipper Cloud
Don't Repeat Yourself, Repeat Others Slidedeck From RailsConf 2010
The World Runs on Bad Software by Brandon Keepers

Thanks to Our Sponsor! Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error tracking tool designed for Ruby, Elixir, Python, Node.js, JavaScript, and soon, other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Check them out!

Subscribe to Maintainable on:
Apple Podcasts
Overcast
Spotify
Or search "Maintainable" wherever you stream your podcasts.

Keep up to date with the Maintainable Podcast by joining the newsletter.
Charlotte Mason told us to read a lesson or give a directive only once. Is this unrealistic? Is it ever okay to repeat yourself? Don't Repeat Yourself originally appeared on Simply Charlotte Mason.
In the last remind{h}er, I referenced Psalm 136 and the refrain used 27 times in the 26 verses of the psalm. But I didn't get a chance to read all 26 verses, and today, I'd like to take a few minutes to do just that. Sometimes I think it's worthwhile for me to simply offer words from scripture and consider what God might be inviting us into through these words crafted thousands of years ago. There is something important and meaningful about considering how God is still at work through the words of scripture today, as we seek to engage and connect with God through it. So, I hope you'll listen in and I hope Psalm 136 serves you well :) remind{h}er 103: Repeat Yourself www.remindherpodcast.com
Summary

For business analytics, the way that you model the data in your warehouse has a lasting impact on what types of questions can be answered quickly and easily. The major strategies in use today were created decades ago, when the software and hardware for warehouse databases were far more constrained. In this episode Maxime Beauchemin of Airflow and Superset fame shares his vision for the entity-centric data model and how you can incorporate it into your own warehouse design.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack)
Your host is Tobias Macey and today I'm interviewing Max Beauchemin about the concept of entity-centric data modeling for analytical use cases

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what entity-centric modeling (ECM) is and the story behind it?
How does it compare to dimensional modeling strategies?
What are some of the other competing methods?
Comparison to activity schema
What impact does this have on ML teams? (e.g. feature engineering)
What role does the tooling of a team have in the ways that they end up thinking about modeling? (e.g. dbt vs. Informatica vs. ETL scripts, etc.)
What is the impact of the underlying compute engine on the modeling strategies used?
What are some examples of data sources or problem domains for which this approach is well suited?
What are some cases where entity-centric modeling techniques might be counterproductive?
What are the ways that the benefits of ECM manifest in use cases that are downstream from the warehouse?
What are some concrete tactical steps that teams should be thinking about to implement a workable domain model using entity-centric principles?
How does this work across business domains within a given organization (especially at "enterprise" scale)?
What are the most interesting, innovative, or unexpected ways that you have seen ECM used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on ECM?
When is ECM the wrong choice?
What are your predictions for the future direction/adoption of ECM or other modeling techniques?

Contact Info
mistercrunch (https://github.com/mistercrunch) on GitHub
LinkedIn (https://www.linkedin.com/in/maximebeauchemin/)

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning.
Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it!
Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers.

Links
Entity Centric Modeling Blog Post (https://preset.io/blog/introducing-entity-centric-data-modeling-for-analytics/)
Max's Previous Appearances
Defining Data Engineering with Maxime Beauchemin (https://www.dataengineeringpodcast.com/episode-3-defining-data-engineering-with-maxime-beauchemin)
Self Service Data Exploration And Dashboarding With Superset (https://www.dataengineeringpodcast.com/superset-data-exploration-episode-182)
Exploring The Evolving Role Of Data Engineers (https://www.dataengineeringpodcast.com/redefining-data-engineering-episode-249)
Alumni Of AirBnB's Early Years Reflect On What They Learned About Building Data Driven Organizations (https://www.dataengineeringpodcast.com/airbnb-alumni-data-driven-organization-episode-319)
Apache Airflow (https://airflow.apache.org/)
Apache Superset (https://superset.apache.org/)
Preset (https://preset.io/)
Ubisoft (https://www.ubisoft.com/en-us/)
Ralph Kimball (https://en.wikipedia.org/wiki/Ralph_Kimball)
The Rise Of The Data Engineer (https://www.freecodecamp.org/news/the-rise-of-the-data-engineer-91be18f1e603/)
The Downfall Of The Data Engineer (https://maximebeauchemin.medium.com/the-downfall-of-the-data-engineer-5bfb701e5d6b)
The Rise Of The Data Scientist (https://flowingdata.com/2009/06/04/rise-of-the-data-scientist/)
Dimensional Data Modeling (https://www.thoughtspot.com/data-trends/data-modeling/dimensional-data-modeling)
Star Schema (https://en.wikipedia.org/wiki/Star_schema)
Database Normalization (https://en.wikipedia.org/wiki/Database_normalization)
Feature Engineering (https://en.wikipedia.org/wiki/Feature_engineering)
DRY == Don't Repeat Yourself (https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)
Activity Schema (https://www.activityschema.com/)
Podcast Episode (https://www.dataengineeringpodcast.com/narrator-exploratory-analytics-episode-234/)
Corporate Information Factory (https://amzn.to/3NK4dpB) (affiliate link)

The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
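As a rough, hypothetical sketch of the entity-centric idea discussed above (see Max's blog post for the real treatment): instead of leaving facts at event grain, metrics get rolled up into one wide row per entity. A toy pandas illustration:

```python
import pandas as pd

# Toy event-grain data: one row per order (assumed columns, for illustration).
orders = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "order_ts": pd.to_datetime(["2023-05-01", "2023-05-20", "2023-05-15"]),
    "amount": [30.0, 45.0, 99.0],
})

# Entity-centric rollup: one wide row per customer, metrics as columns.
customers = (
    orders.groupby("customer_id")
          .agg(order_count=("amount", "size"),
               total_revenue=("amount", "sum"),
               last_order_at=("order_ts", "max"))
          .reset_index()
)
print(customers)
#    customer_id  order_count  total_revenue last_order_at
# 0            1            2           75.0    2023-05-20
# 1            2            1           99.0    2023-05-15
```

In a real warehouse these rollups would be materialized per time window and reused by dashboards and feature pipelines alike, which is where the approach's downstream benefits show up.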
We are hosting the AI World's Fair in San Francisco on June 8th! You can RSVP here. Come meet fellow builders, see amazing AI tech showcases at different booths around the venue, all mixed with elements of traditional fairs: live music, drinks, games, and food! We are also at Amplitude's AI x Product Hackathon, and we are hosting our first joint Latent Space + Practical AI Podcast Listener Meetup next month!

We are honored by the rave reviews for our last episode with MosaicML! Reviews are also welcome on Apple Podcasts and on Twitter/HN/LinkedIn/Mastodon, etc.!

We recently spent a wonderful week with Itamar Friedman, visiting all the way from Tel Aviv in Israel:

* We first recorded a podcast (releasing with this newsletter) covering Codium AI, the hot new VSCode/JetBrains IDE extension focused on test generation for Python and JS/TS, with plans for a Code Integrity Agent.
* Then we attended Agent Weekend, where the founders of multiple AI/agent projects got together, with a presentation from Toran Bruce Richards on Auto-GPT's roadmap and then from Itamar on Codium's roadmap.
* Then some of us stayed to take part in the NextGen Hackathon and won first place with the new AI Maintainer project.

So… that makes it really hard to recap everything for you. But we'll try!

Podcast: Codium: Code Integrity with Zero Bugs

When it launched in 2021, there was a lot of skepticism around GitHub Copilot. Fast forward to 2023, and 40% of all code is checked in unmodified from Copilot. Codium burst on the scene this year, emerging from stealth with an $11m seed, their own foundation model (TestGPT-1), and a vision to revolutionize coding by 2025.

You might have heard of "DRY" programming (Don't Repeat Yourself), which aims to replace repetition with abstraction. Itamar came on the pod to discuss their "extreme DRY" vision: if you already spent time writing a spec, why repeat yourself by writing the code for it? If the spec is thorough enough, automated agents could write the whole thing for you.

Live Demo Video Section

This is referenced in the podcast about 6 minutes in. Timestamps, show notes, and transcript are below the fold. We would really appreciate it if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice!

Auto-GPT: A Roadmap To The Future of Work

Making his first public appearance, Toran (perhaps better known as @SigGravitas on GitHub) presented at Agents Weekend. Lightly edited notes for those who want a summary of the talk:

* What is AutoGPT? AutoGPT is an AI agent that utilizes a Large Language Model to drive its actions and decisions. It can be best described as a user sitting at a computer, planning and interacting with the system based on its goals. Unlike traditional LLM applications, AutoGPT does not require repeated prompting by a human. Instead, it generates its own 'thoughts', criticizes its own strategy, and decides what next actions to take.
* AutoGPT was released on GitHub in March 2023 and went viral on April 1 with a video showing automatic code generation. Two months later it has 132k+ stars, is the 29th highest-ranked open-source project of all time, and has a thriving community of 37.5k+ Discord members and 1M+ downloads.
* What's next for AutoGPT? The initial release required users to know how to build and run a codebase. They recently announced plans for a web/desktop UI and mobile app to enable nontechnical/everyday users to use AutoGPT.
They are also working on an extensible plugin ecosystem called the Abilities Hub, also targeted at nontechnical users.
* Improving Efficacy. AutoGPT has many well-documented cases where it trips up: getting stuck in loops, using placeholders instead of actual content in commands, and making obvious mistakes like execute_code("write a cookbook"). The plan is a new design called Challenge Driven Development. Challenges are goal-oriented tasks or problems that Auto-GPT has difficulty solving or has not yet been able to accomplish. These may include improving specific functionalities, enhancing the model's understanding of specific domains, or even developing new features that the current version of Auto-GPT lacks. (AI Maintainer was born out of one such challenge.) Itamar compared this with Software 1.0 (Test Driven Development) and Software 2.0 (Dataset Driven Development).
* Self-Improvement. Auto-GPT will analyze its own codebase and contribute to its own improvement. AI Safety (aka not-kill-everyone-ists) people like Connor Leahy might freak out at this, but for what it's worth we were pleasantly surprised to learn that Itamar and many other folks on the Auto-GPT team are equally concerned and mindful about x-risk as well.

The overwhelming theme of Auto-GPT's roadmap was accessibility: making AI Agents usable by all instead of the few.

Podcast Timestamps
* [00:00:00] Introductions
* [00:01:30] Itamar's background and previous startups
* [00:03:30] Vision for Codium AI: reaching "zero bugs"
* [00:06:00] Demo of Codium AI and how it works
* [00:15:30] Building on VS Code vs JetBrains
* [00:22:30] Future of software development and the role of developers
* [00:27:00] The vision of integrating natural language, testing, and code
* [00:30:00] Benchmarking AI models and choosing the right models for different tasks
* [00:39:00] Codium AI spec generation and editing
* [00:43:30] Reconciling differences in languages between specs, tests, and code
* [00:52:30] The Israeli tech scene and startup culture
* [01:03:00] Lightning Round

Show Notes
* Codium AI
* Visualead
* AutoGPT
* StarCoder
* TDD (Test-Driven Development)
* AST (Abstract Syntax Tree)
* LangChain
* ICON
* AI21

Transcript

Alessio: [00:00:00] Hey everyone. Welcome to the Latent Space podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host, Swyx, writer and editor of Latent Space.

Swyx: Today we have a special guest, Itamar Friedman, all the way from Tel Aviv, CEO and co-founder of Codium AI. Welcome.

Itamar: Hey, great being here. Thank you for inviting me.

Swyx: You like the studio? It's nice, right?

Itamar: Yeah, they're awesome.

Swyx: So I'm gonna introduce your background a little bit and then we'll learn a bit more about who you are. So you graduated from the Technion, Israel Institute of Technology, which is kind of like the MIT of Israel. You did a BS in CS, and then you also did a Master's in Computer Vision, which is kind of relevant. You had other startups before this, but your sort of claim to fame is Visualead, which you started in 2011 and got acquired by Alibaba Group. You showed me your website, which is the sort of QR codes with different forms of visibility. And in China that's a huge, huge deal. It's starting to become a bigger deal in the West. My favorite anecdote that you told me was something about how much sales use you saved or something. I forget what the number was.

Itamar: Generally speaking, like, there's a lot of peer-to-peer transactions going on, like payments, and, and in China with QR codes.
So basically if, for example, 5% of the scanning does not work, and with our scanner we [00:01:30] reduce it to 4%, that's a lot of money. Could be tens of millions of dollars a day.

Swyx: And at the scale of Alibaba, it serves all of China. It's crazy. You did that for seven years, and you were in Alibaba until 2021, when you took some time off and then hooked up with Dedy, who you've known for 25 years, to start Codium AI, and you just raised your $11 million seed round with TLV Partners and Vine. Congrats. Should we go right into Codium? What is Codium?

Itamar: So we are an AI coding assistant / agent to help developers reach zero bugs. We don't do that today; right now, we help to reduce the amount of bugs. Actually, you can see people commenting on our marketplace page saying that they found bugs with our tool, and that's like our premise. Our vision is, like for Tesla it's zero emission or something like that: for us it's zero bugs.

We started with building an IDE extension, either in VS Code or in JetBrains, and that actually works alongside the main panel where you write your code. I can show later: what we do is analyze the code, whether you started writing it or you completed it. Like, you can go both TDD (Test-Driven Development) or classical coding. And we offer analysis, tests, whether they pass or not; we further self-debug [00:03:00] them and make suggestions, eventually helping to improve the code quality, specifically on code logic testing.

Alessio: How did you get there? Obviously it's a great idea. Like, what was the idea maze? How did you get here?

Itamar: I'll go back long. So, yes, I was two and a half times a CTO, a VC-backed startup CTO, where we talked about the last one that I sold to Alibaba. But basically, it's weird to say, I'm by now 20 years an R&D manager. I'm not like the best programmer, because like you mentioned, I'm coming more from the machine learning / computer vision side, one of the main applications, but a lot of optimization. So I'm not necessarily the best coder, but I am like a 20-year R&D manager. And I found that verifying code logic is a very hard thing, and one of the things that really makes it difficult to increase development velocity.

So you have tools related to checking performance. You have tools for vulnerabilities and security; Israelis are really good at that. But do you have a tool that actually helps you test code logic? I think what we have, like dozens or hundreds, even thousands, help you on the end to end, maybe on the microservice integration system. But when you talk about the code level, there isn't anything. So that was the pain I always had, especially since I did have tools for that for hardware: like, I worked in Mellanox, later sold to Nvidia, as a student, and we had formal tools, et cetera. [00:04:30] So that's one part.

The second thing is that after being sold to Alibaba, the team and I, quite a big team, worked on machine learning, large language models, et cetera, building developer tools with LLMs throughout the golden years of 2017 to 2021, 2022.
And we saw how powerful they became. So basically, if I frame it this way: because we developed it for so many use cases, we saw that if you're able to take a problem and put a framework of a language around it, whether it's analyzing browsing behavior, or DNA, or etc., then LLMs take you really far. And then I thought this problem that I have with code logic testing is basically a combination of a few languages: natural language, specification language, technical language. Even visual language to some extent. And then I quit Alibaba and took a bit of time to maybe wrap things up and rest a bit after 20 years of startup and corporate, and joined with my partner Dedy Kredo, who was my very first employee. And that's how we, like, came to this idea.

Alessio: The idea has obviously been around, and most people have done AST analysis, kinda like an abstract syntax tree, but it's kind of hard to get there with just that. But I think these models now are getting good enough where you can mix that and also traditional logical reasoning.

Itamar: Exactly.

Alessio: Maybe talk a little bit more about the technical implementation of it. You mentioned the agent [00:06:00] part. You mentioned some of the model part. Like, what happens behind the scenes when Codium gets in your code base?

Itamar: First of all, I wanna mention I think you're really accurate. If you try to take, like, a large language model as is and try to ask it: can you, like, analyze, test the code, etc., it'll not work so good. By itself it's not good enough. On the other side, there are all the traditional techniques we already started to invent since the Greek times. You know, logical stuff; you mentioned ASTs, but there's also dynamic code analysis, mutation testing, etc. There are a lot of techniques out there, but they have inefficiencies. And a lot of those inefficiencies actually match up with AI capabilities.

Let me give you one example. Let's say you wanna do fuzzy testing or mutation testing. Mutation testing means that you either mutate the test, like the input of the test, the code of the test, etc., or you mutate the code, in order to check how good your test suite is. For example, if I mutate some equation in the application code and the test finds a bug, and it does that at a really high rate, like out of 100 mutations the [00:07:30] tests find all 100 problems, it's probably a very strong test suite. Now the problem is that there are so many options for what to mutate in the data, in the test. And this is where, for example, AI could help, like pointing out where's the best thing that you can mutate. Actually, I think it's a very good use case. Why? Because even if AI is not 100% accurate, even if it's 80% accurate, it could really take you quite far rather than just randomly selecting things.

So if I wrap up, just to go back high level: I think LLMs by themselves cannot really do the job of verifying code logic, and neither can the traditional techniques, so you need to merge them. But then one more thing, before maybe you tell me where to double click: I think with code logic there's also a philosophy question here. Logic is different from performance or quality. If I did a three-way for loop, like I loop over three things, and I could fold them with some vector like in Python or something like that... we need to get into the mind of the developer. What was the intention? Like, what is the bad code? Not: what is the code logic that doesn't work? It's: it's not according to the specification.
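For readers, here is a minimal, self-contained sketch of the mutation-testing idea Itamar describes, with hypothetical names (our illustration, not Codium's implementation): mutate the application code, then check whether the existing test suite notices.

```python
# Mutation testing in miniature (hypothetical example, not Codium's code).

def deposit(balance: float, amount: float) -> float:
    """Application code under test."""
    return balance + amount

def deposit_mutant(balance: float, amount: float) -> float:
    """One 'mutant': an operator flipped, as a mutation tool might do."""
    return balance - amount

def run_suite(fn) -> bool:
    """Returns True if every test in the (tiny) suite passes for fn."""
    return fn(100.0, 25.0) == 125.0 and fn(0.0, 0.0) == 0.0

assert run_suite(deposit)              # the suite passes on the real code
killed = not run_suite(deposit_mutant) # a strong suite 'kills' the mutant
print(f"mutant killed: {killed}")
# Mutation score = killed mutants / total mutants; near 100/100 suggests a
# strong suite. Choosing WHICH mutations to try is the combinatorially
# expensive part, which is where Itamar suggests AI can prioritize.
```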
So I think, like, one more thing that AI could really help with is matching: like, if there is some natural language description of the code, we can match it. Or if there's missing information in the natural language that needs [00:09:00] to be asked for, the AI could help by asking the user. It's not like a closed solution; rather, it's open, leaving the developer as the lead. Just like moving the developer from being the coder to actually being like a pilot that clicks a button and says: ah, this is what I meant, or this is the fix, rather than actually writing all the code.

Alessio: That makes sense. I think I talked about it on the podcast before, but like the switch from syntax to semantics: developers used to be focused on the syntax and not the meaning of what they're writing. So now you have the models that are really good at the syntax, and you as a human are supposed to be really good at the semantics of what you're trying to build. How does it practically work? So I'm a software developer, I want to use Codium. Like, how do I start, and then how do you make that happen in the background?

Itamar: So, like I said, Codium right now is an IDE extension. For example, I'm showing VS Code. And if you just install it, you'll have a few access points to start Codium AI: whether this sidebar, or above every component or class that we think is very good to check with Codium you'll have this small button. There's another way: you can mark specific code and right-click and run it. But this one is my favorite, because we actually choose above which components we suggest to use Codium. So once I click it, Codium starts analyzing this class. But not only this class: almost everything that is [00:10:30] being used by the call center class, and whatever the call center is calling. And so we do, like, a static code analysis, et cetera, what we talked about. And then Codium provides a code analysis. It's right now static: you can't change it, can't edit it, and maybe later we'll talk about it. This is what we call the specification, and we're going to make it editable so you can add additional behaviors, then create, accordingly, tests that will not pass, and then the code will change accordingly. So that's one entrance point, like via natural language description. That's one of the things that we're working on right now. What I'm showing you, by the way, could be downloaded as is; it's what we have in production.

The second thing that we show here is, like, a full test suite. There are six tests by default, but you can just generate more, almost as many as you want. Every time we'll try to cover something else: like a happy path, an edge case, et cetera. You can talk with specific tests, okay? Like you can suggest: I want this in Spanish, or give a few languages, or I want many more employees. I didn't go over what's a call center, but basically it manages, like, a call center. So you can imagine, I can ask to make it more rigorous, etc., but I don't wanna complicate things, so I'm keeping it as is.

I wanna show you the next one, which is run all tests. First, we verify that you're okay with it: we're gonna run it, and, I don't know, maybe you are connected to the environment that is currently [00:12:00] configured in the IDE; I don't know if it's production for some reason, or I don't know what. So we're making sure that you're aware we're gonna run the code, and then once we run, we show if it passed or failed. I hope that we'll have one fail. But I'm not sure it's that interesting.
So I'll go, like, to another example soon, but just to show you what's going on here: we actually give an example of what the problem is, we give the log of the error, and then you can do whatever you want. You can fix it by yourself, or you can click reflect and fix, and what's going on right now is a bit of a longer process where we do, like, chain of thought, or reflect and fix, and we can suggest a solution. You can run it, and in this case it passes. Just an example; this is a very simple example. Maybe later I'll show you a bug. I think I'll do that: I'll show you a bug and how we recognize that actually it's not a problem in the test, it's a problem in the code, and then suggest you fix that instead of the test. I think you see where I'm getting at.

The other thing is that there are a few code suggestions, and there could be a dozen types: they could be related to performance, modularity, or, as in this case, maintainability. There could also be vulnerability or best practices, or even suggestions for bugs: like if we noticed, if we think one of the tests, for example, is failing because of a bug, it's presented in the code suggestions. You can choose a few, for example, if you like, and then prepare a code change. Like I didn't show you which exactly: we're making a diff now that you can apply on your code. So basically what we're seeing here is that [00:13:30] there are three main tabs: the code, the tests, and the code analysis. Let's call it spec. And then there's a fourth tab, which is code suggestions, if you wanna look at analytics, etc. Mm-hmm. Right now, okay, this is the change, or quite a big change; probably I clicked on something. So that's the basic demo.

Right now, let's be frank: I wanted to show a simple example. So it's a call center. All the inputs to the class are relatively simple. There is no JSON input; like if you're Expedia or whatever, you have a JSON with the hotels, Airbnb, you know, so the tests will be almost too simple or not covering enough. If you don't provide your code with some valuable input, like a JSON with all the information, or YAML or whatever... so you can actually add input data, and the AI or model (it's actually, by the way, a set of models and algorithms) will use that input to create interesting tests. And another thing is that many people have some reference tests that they already made. It could be because they already made them, or because they want, like, a very specific... they have, like, how they imagine the test. So they just write one, and then you add it as a reference, and that will inspire all the rest of the tests. And also you can give, like, hints. [00:15:00] This is, by the way, planned to be, like, dynamic hints: for different types of code we will provide different hints. So we can help you become a bit more knowledgeable about how to test your code. So you can ask for, like, having a given-when-then, or you can have, like: add a funny pirate-like doc string, make a different joke for each test, for example.

Swyx: I'm curious, why did you choose that one? This is the pirate one. Yeah.

Itamar: Interesting choice to put on your products. It could be, like, 11:00 PM, people sitting around, let's choose one funny thing.

Swyx: And yeah. So two serious ones and one funny one. Yeah.
Just for the listening audience, can you read out the other hints that you decided on as well?

Itamar: Yeah, so specifically, like for this case, a relatively very simple class, so there's not much to do, but I'm gonna go to one more thing here in the configuration. But it basically is given-when-then style. It's one of the best practices in tests. So even when I report a bug, for example, I found a bug in someone else's code, usually I wanna say, like: given, use this environment or use it that way; when I run this function, et cetera; then... and it's a very, very full report. And it's very common to use that in, like, unit tests and performance tests.

Swyx: I have never been shown this format.

Itamar: I love that you mentioned that, because if you go to a CS undergrad, you take so many courses in development, but probably none of them in testing, and it's so important. So why would you... and you don't go to Udemy or [00:16:30] whatever and do a testing course, right? Like, it's boring. Like, people either don't do component-level testing because they hate it, or they do it and they hate it. And I think part of it is because they're missing tools to make it fun. Also, usually you don't get yourself educated about it because you wanna write your code. And part of what we're trying to do here is help people get smarter about testing and make it, like, easy. So this is very common. And the idea here is that for different types of code, we'll suggest different types of hints to make you more knowledgeable. We're doing it in an educational way, but we wanna help developers become smarter, more knowledgeable about this field.

And another one is mock. So right now, our model decided that there's no need for mocks here, which is a good decision. But if we would go to a real-world case, like... I'm part of the AutoGPT community, and there's all this tooling going on there, right? And maybe when I want to test, like, a specific component, and it's relatively clear that going to the web and doing some search and coming back... I don't really need to do that. Like, I know what I expect it to do, and so I can mock that part that is used to crawl the web. At a certain percentage of accuracy, like around 90, we will decide this is worth mocking and we will inject it. I can click it now and force our system to mock this. But you'll see, like, a bit of stupid mocking, because it really doesn't make sense. So I chose this pirate stuff, like: add funny pirate-like doc strings and make a different joke for each test. And I forced it to add mocks; [00:18:00] the tests were deleted, and now we're creating six new tests. And you see, here's the "shiver me timbers", the test checks the call is successful, probably there's some joke at the end. So in this case, even if you try to force it to mock, it didn't happen, because there's nothing... but we might find here, like, stuff that it mocked that really doesn't make sense, because there's nothing to mock here.

So that's one thing. I can show a demo where we actually catch a bug. And I really love that... you know how it is when you're building developer tools: the best thing you can see is developers that you don't know giving you five stars and sharing a few things. We have a Discord with thousands of users. But I love to see the individual reports the most. This was one of my favorites: it helped me to find two bugs. I mentioned our vision is to reach zero bugs. Like, if you may say, we want to clean the internet from bugs.

Swyx: So debugging the internet.
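For readers, a minimal illustration of the given/when/then structure and the web-call mocking Itamar describes, with hypothetical names (our sketch, not Codium's generated output):

```python
# A given/when/then-style unit test that mocks out a web crawl (pytest style).
from unittest import mock
import urllib.request

def recommend_cocktail(season: str, city: str) -> str:
    """Toy function under test: normally crawls the web for a suggestion."""
    url = f"https://example.com/cocktails/{city}/{season}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode().splitlines()[0]

def test_recommend_cocktail():
    # Given: a canned web response, so the test never touches the network.
    fake_resp = mock.MagicMock()
    fake_resp.read.return_value = b"Negroni\n...rest of page..."
    fake_resp.__enter__.return_value = fake_resp
    with mock.patch("urllib.request.urlopen", return_value=fake_resp):
        # When: we call the function under test.
        result = recommend_cocktail("summer", "san-francisco")
    # Then: the behavior matches the expectation.
    assert result == "Negroni"
```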
Swyx: I have my podcast title.

Itamar: So, so I think, like, if we move to another example...

Swyx: Yes, yes, please, please. This is great.

Itamar: I'm moving to a different example: it is the bank account. By the way, if you go to ChatGPT... and you can ask me what's the difference between Codium AI and using ChatGPT. Mm-hmm. I'm, I'm like giving you this hard question later. Yeah. So if you ask ChatGPT: give me an example to test a code, it might give you this bank account. It's like the 101 stuff, right? And one of the reasons I chose it is because it's easy to inject bugs here, and it's easy to understand [00:19:30] anyway. And what I'm gonna do right now is, like, this bank account: I'm gonna change the deposit from plus to minus as an example. And then I'm gonna run Codium similarly to how I did before; like, it suggests to do that for the entire class. And then there is the code analysis. Soon, and we'll announce it very soon, partly on this podcast, it's going to have more features here in the code analysis. We're gonna talk about it. Yep. And then there are the tests that I can run. And the question is whether we're gonna catch the bug by running the tests. Because who knows, maybe this implementation is the right one, right? Like, you need to converse with the developer. Maybe in this weird bank, you deposit and the bank takes money from you. And we could talk about how this happens, but actually you can see already here that we are already suggesting a hint that something is wrong here, and here's a suggestion to change it from minus to plus. And we'll try to reflect and fix, and then we will see the model actually telling you: hey, maybe this is not a bug in the test, maybe it's in the code.

Swyx: I wanna stay on this a little bit. First of all, this is very impressive, and I think it's very valuable. What user numbers can you disclose? You launched it, and then it's got fairly organic growth. You told me something off the air, but you know, I just wanted to show people, like, this is being adopted in quite a large amount.

Itamar: [00:21:00] First of all, I'm a relatively transparent person. Like, even as a manager, I think I was, like, top one percentile at being transparent in Alibaba. It wasn't five out of five, which is a good thing, because that's extreme, but it was good. But it also could be bad; some people would claim it's a bad thing. Like, for example, if my CTO in Alibaba would tell me: you did really bad, and it might cut your entire budget by 30% if in half a year you're not gonna do much better, this and that... so I come back to a team and tell 'em what's going on without, like, trying to smooth things out, and we need to solve it together. If not, you're not fitting in this team. So that's my point of view. And the same thing... one of the fun things that I like about building for developers: they kind of want that from you, to be transparent. So we are in the high thousands of weekly active users. Now, if you convert from 50,000 downloads to high thousands of weekly active users, it means, like, a lot of those that actually try us keep using us weekly. I'm not talking about even monthly, like weekly. And that was, like, one of our best expectations, because you don't test your code every day. Right now, you can see it's mostly focused on testing, so you probably test it like once a week. Like, we wanted to make it so smooth with your development methodology and development lifecycle that you use it every day. Like, at the moment we hope it to be used weekly.
And that's what we're getting. And the growth is about, like, every two, three weeks we double the amount of weekly actives and downloads. It's still very early, like seven weeks, so I don't know if it'll keep that way, but we hope so. Well, actually, I hope that it'll be much more than double every two, three weeks, maybe. Thanks to the podcast.

Swyx: Well, yeah, we'll add, you know, a few thousand, hopefully. The reason I ask this is because I think there's a lot of organic growth: people are sharing it with their friends. And also I think you've learned a lot from your earliest days in the private beta test. Like, what have you learned since launching about how people want to use these testing tools?

Itamar: One thing I didn't share with you is, like, when you say virality, there is, like, inter-virality and intra-virality, okay? Like, within the company and outside the company. So which teams are using us? I can't say, but I can tell you that a lot of San Francisco companies are using us. And one of the things, like, I'm really surprised by is that one team... I saw one user two weeks ago, I was so happy, and then I came yesterday and I saw 48 at that company. So what I'm trying to say, to be frank, is that we see more intra-virality right now than inter-virality. I don't see, like, videos being shared all around Twitter: see what's going on here. Yeah. But I do see, like, people share within the company: you need to use it because it's really helpful with productivity. And the [00:24:00] inter-virality is something that we will work on. But to be frank, first I wanna make sure that it's helpful for developers. So I care more about intra-virality, and that we see working really well, because that means that the tool is useful. So I'm telling my colleague... sharing it on Twitter means that I also feel that it will make me look cool, and that's something that maybe we still need, like, to test.

Swyx: You know, I don't... well, you're working on that. We're gonna announce something like that. Yeah. You are generating these tests, you know, based on what I saw there, you're generating these tests basically based on the names of the functions and the doc strings, I guess?

Itamar: So I think, like, if you obfuscate the entire code, our accuracy will drop by 50%. So you're right: we're using a lot of hints that you see there, like, for example, the function name, the doc string, the variable names, et cetera. It doesn't have to be perfect, but it has a lot of hints. By the way, in some cases, in the code suggestions, we will actually suggest renaming some of the stuff in a way that will help us. Like, there are renaming suggestions, for example. Usually in this case, instead of calling this variable "client", you'll see "preferred client", because basically it gets a different commission. So we do suggest it, because if you accept it, it also means it will be easier for our model or system to keep improving.

Swyx: Is that a different model?

Itamar: Okay, that brings us a bit to the topic of model properties. Yeah, I'll share it really quickly, because... yes, it's relevant, even if it might take us off road. I think, [00:25:30] like, different models are better on different properties: for example, how obedient you are to instruction, how good you are at prompt forcing, like format forcing.
I want the results to be in a certain format, or how accurate you are, or how good you are at understanding code. There are so many calls happening here to models, by the way. Just by clicking one thing, hey Codium AI, can you help me with this bank account?, we do a dozen different calls, and each feature you click could be, like, with that reflect and fix... and then, like, we choose the best one. I'm not talking about, like, hundreds of models, but we could use different APIs of OpenAI, for example, and other models, et cetera. So basically, different models are better on different aspects. Going back to what we talked about: all the models will benefit from having those hints in the code, whether in the code itself or in documentation, et cetera. And also in the code analysis: we consider the code analysis to be the ground truth to some extent, and soon we're also going to allow you to edit it, and we will use that as well.

Alessio: Yeah, maybe talk a little bit more about how you actually get all these models to work together. I think there's a lot of people that have only been exposed to Copilot so far, which is one use case: just complete what I'm writing. You're doing a lot more things here. A lot of people listening are engineers themselves, some of them build these tools, so they would love to [00:27:00] hear more about how you orchestrate them, how you decide which model does what, stuff like that.

Itamar: So I'll start with the end, because that is a very deterministic answer: we benchmark different models. Like, every time there's a new model in town... like, recently it's already old news: StarCoder. It's already, like, so old news, like, from a few days ago.

Swyx: No, no, no. Maybe you want to fill in what StarCoder is?

Itamar: I think StarCoder is a new up-and-coming model. We immediately test it on different benchmarks and see if it's better on some properties, et cetera. We're gonna talk about it like a chain of thought: different parts in the chain would benefit from different properties. If I wanna do code analysis and convert it to natural language, maybe one model would be better. If I want to output, like, a result in a certain format, maybe another model is better at forcing a certain format. You probably saw on Twitter, et cetera, people talk about how it's hard to ask a model to output JSON, et cetera. So basically we predefine: for different tasks, we use different models. And I think, like, this is for individuals, for developers, to check: try to think, for the task that you are now working on, what is most important for you to get. You want the semantic understanding, that's most important? Are you asking for a very specific [00:28:30] output? Is it just like a chat, or are you asking it to give an output of code and have only code, no description? Or a description in the top doc string and nothing else? And then we use different models.

We are aiming to have our own models in 2024, being independent of any third party like OpenAI or so. But since our product is very challenging, it has UI/UX challenges, engineering challenges, static and dynamic analysis, and AI. As an entrepreneur, you need to choose your battles. And we thought that it's better for us to focus on everything around the model. And one day, when we think that we have the right UX/UI, engineering, et cetera, we'll focus on model building.
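For readers, a toy sketch of the per-task model routing Itamar describes (hypothetical names and registry, not Codium's architecture): different tasks are mapped to whichever model is strongest on the property that task needs.

```python
# Hypothetical per-task model routing: task -> model id chosen by benchmarks.
TASK_MODEL_ROUTES = {
    "explain_code":   "model-strong-code-understanding",
    "format_json":    "model-strong-format-forcing",
    "generate_tests": "model-strong-instruction-following",
}

def route(task: str) -> str:
    """Pick a model id for a task, falling back to a general-purpose default."""
    return TASK_MODEL_ROUTES.get(task, "general-purpose-model")

assert route("format_json") == "model-strong-format-forcing"
assert route("unknown_task") == "general-purpose-model"
```

In practice the routing table itself would be the output of the benchmarking loop he mentions: re-run the evals when a new model lands, and update the table if it wins on a property.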
This is also, by the way, what we did in Alibaba. Even when I had, like, half a million dollars a month for training one foundational model, I would never start this way. You always try first using the best model you can for your product, then understanding what's the glass ceiling for that model, then fine-tune a foundation model, reach a higher glass ceiling, and then train your own. That's what we're aiming for, and that's what I suggest to other developers: like, don't necessarily take a model and say, oh, it's so easy these days to do RLHF, et cetera. Like, I see, it's like only $600. Yeah, but what are you trying to optimize for? The properties. Don't try, like, certain models first; organize your challenges, understand the [00:30:00] properties you're aiming for, and start playing with that. And only then go to train your own model.

Alessio: Yeah. And when you say benchmark, you know, we did a one-hour-long episode on benchmarks; there are, like, many of them. Are you building some unique evals for your own problems? Like, how are you doing that? And that's also work for your future model building, obviously, having good benchmarks. Yeah.

Itamar: Yeah. That's very interesting. So first of all, with all due respect, I think, like, we're dealing with ML benchmarks for hundreds of years now... I'm kidding. But, like, for tens of years, right? Benchmarking statistical creatures is something that we've been doing for a long time. I think what's new here is the generative part. It's an open challenge to some extent, and therefore maybe we need to rethink some of the ways we benchmark. And one of the notions that I really believe in (I don't have a proof for that) is, like, create a benchmark in levels. Let's say you create a benchmark from level one to ten, and it's a property-based benchmark. Let's say I have a WebGPT: ask something from the internet, and then it should fetch it for me. So challenge level one could be: I'm asking it, and it brings me something. Level number two could be: I'm asking it, and it has a certain structure. Let's say, for example, I want to test AutoGPT, okay? And I'm asking it to summarize what's the best cocktail I could have for this season in San Francisco. So [00:31:30] I would expect, for example, for that model to go (this is what I think) search the internet and do a certain thing. So level number three could be that I want to check that, as part of this request, it uses certain tools. Level five, you can add to that: I expect that it'll bring me back something relevant. And level nine: it actually prints the cocktail for me, I taste it, and it's good. So I think how I see it is, like, we need to have data sets similar to before, and make sure that we're not fine-tuning the model the same way we test it. So we have some challenges that we fine-tune over, right? And a few challenges that we don't. And the new concept, maybe, is having those levels, which are property-based, which is something that we know from software testing and less from ML. And this is where I think these two concepts merge.

Swyx: Maybe Codium can do ML testing in the future as well.

Itamar: Yeah, that's a good idea.

Swyx: Okay. I wanted to cover a little bit more about Codium in the present, and then we'll go into the slides that you have. So you have some UI/UX stuff, and obviously VS Code has the majority market share of IDEs at this point, but you also have IntelliJ, right?

Itamar: JetBrains in general.

Swyx: Yeah. Anything that you learned supporting JetBrains stuff?
Swyx: Okay. I wanted to cover a little bit more about Codium in the present, and then we'll go into the slides that you have. So you have some UI/UX stuff, and obviously VS Code has the majority market share of IDEs at this point, but you also have IntelliJ, right? Itamar: JetBrains in general. Swyx: Yeah. Anything that you learned supporting the JetBrains stuff? You were very passionate about this one user who left you a negative review. What is the challenge there? How do you think about the market? Maybe you should focus on VS Code since it's so popular? Itamar: Yeah. [00:33:00] So currently the VS Code extension is leading over JetBrains, and it was for a long time; and when I tell you a long time, it could be like two or three weeks: we had version 0.5.x in VS Code while JetBrains was on 0.4 or so, and we really saw the difference in how people reacted. So we also knew that 0.5 was much more meaningful, and one of the users, a developer, left three stars on JetBrains, and I really remember that. I loved it. What do you want to get at our stage? "What's wrong?" Yes, you want that indication; the worst thing is getting nothing. Actually, I'm not sure it isn't better to get even the bad indication than only the good ones, to be frank, at least at our stage. We're a nine- or ten-month-old startup. So, generally speaking, we find it easier and more fun to develop the VS Code extension versus JetBrains, although JetBrains has a very nice property: when you develop an extension for one of their IDEs, it usually works well for all the others; it's one extension for PyCharm, et cetera. I think there's even more flexibility in VS Code: for example, this app is a React extension, as opposed to the JetBrains one we're using, which is native. What I learned is that it's basically almost like [00:34:30] developing for Android and iOS, where you want a lot of the best practices: you have one backend, with all the software development best practices that come with it. One backend version, v1, supports both Android and iOS, not different backends, because that's crazy. And then you need all the methodology: what does it mean to move from 1.0 to 1.1 on the backend? What supports what? If you don't know what I'm talking about, you've developed things like this in the past. So it's important. And on Android and iOS you relatively want the experience to be the same, because you don't want one developer on a team working in JetBrains and another in VS Code, and they're talking past each other: "Whoa, that's not what I'm seeing." "What are you talking about?" And in the future we're also going to have a teams offering for collaboration. Right now, if you close the Codium tab, everything is lost except the test code; if I go back to a test suite and do "open as a file," I have a test file with everything, which I can just save, but all the other goodies are lost. One day we're going to have a platform where you can save all that, collaborate with people, have it be part of your PR, like suggestions as part of your PR. And then you want some alignment. So one of the UX/UI challenges is that when you think about a feature, it should, one way or another, fit both platforms. By the way, on iOS and Android you sometimes don't care about parity, but here you're talking about developers that might be on the same [00:36:00] team, so you do care a lot about that.
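[Editor's note: a rough sketch of the "one backend for both IDE extensions" discipline compared above to Android/iOS development; the feature names, versions, and handshake shape are hypothetical.]

```typescript
// One backend serves both clients; an explicit capability check keeps the
// two extensions from drifting apart silently as the backend moves 1.0 -> 1.1.
type Client = "vscode" | "jetbrains";

interface Hello {
  client: Client;
  extensionVersion: string; // e.g. "0.5.3"
}

// Minimum extension version required for each backend feature (made up).
const minVersionForFeature: Record<string, string> = {
  generateTests: "0.4.0",
  reflectAndFix: "0.5.0",
};

function supports(hello: Hello, feature: string): boolean {
  const min = minVersionForFeature[feature];
  if (!min) return false;
  const toNums = (v: string) => v.split(".").map(Number);
  const a = toNums(hello.extensionVersion);
  const b = toNums(min);
  // Naive three-part version compare; fine for a sketch.
  for (let i = 0; i < 3; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return true; // versions are equal
}
```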
Alessio: Obviously this is a completely different way of working for developers. I'm sure this is not everything you want to build, and you have some hints, so maybe take us through what you see the future of software development looking like. Itamar: Well, that's great, and it's also related to our announcement and what we're working on. Part of it you already started seeing in my demo before, but now I'll put it into a framework and be clearer. I think the software development world in 2025 is going to look very different from 2020. Very different. By the way, I think 2020 is different from 2000. I did web development in '95, so I needed to use GeoCities and things like that; today it's much easier to build a web app on one of the clouds. But I think 2025 is going to look very different from 2020 for traditional coding, a paradigm that I don't think has changed too much in the last few years. I'm going to show you how I think the intelligent software development world will look, but through the lens of Codium AI. We are focused on code integrity: we care that, with all this advancement of code generation, et cetera, developers can code fast with confidence, that they have confidence in the generated code and in the AI they're using. That's our focus, so I'm going to apply that lens as I explain. I think traditional development today works like this: you create some spec; for different companies and different development teams that could mean something different, something in Figma, something in Google Docs, something in Jira. Then you usually jump directly to code implementation, and then, if you have the time or patience or will, you do some testing. Some people would say it's better to do TDD, though not everyone: write your spec, write your tests, make sure they're red, that they do not pass yet, then write your implementation until your tests pass. Most people do not practice it, I think for a few reasons; let me mention two. One, it's tedious: I want to write my code before my tests. And the second is that I think we're missing tools to make it practical. What we are advocating, what I'm going to explain, is actually neither, and I want to say it's very important. So here's how we think the future development pipeline, or process, is going to look. I'm going to do it in steps. First, I want to say there are going to be coding assistants and coding agents. An assistant is like Copilot, for example, and an agent is something you give a goal or a task, and it actually chains a few tasks together to complete your goal. Let's keep that in mind. So what's happening right now, what you saw in our demo, what I presented a few minutes ago, is that you start with an implementation and we create a spec for you and tests for you. And that was an agent: you didn't converse with it, you just [00:39:00] clicked a button, and we did a chain of thought to create these; that's why it's an agent. And then we gave you an assistant to change the tests: you can converse with it, et cetera. So that's what I presented today. What we're announcing is a vision that we call DRY, don't repeat yourself. I'm going to get to that when I show you the entire vision, but first I want to show you an intermediate step that we're going to release. So right now you can write your code.
Or part of it, for example just a class abstract or so, with a coding assistant like Copilot, and maybe in the future a Codium AI coding assistant. Then you can create a spec, as I already presented to you. And the next thing is that you're going to have a spec assistant to generate a technical spec, helping you fill it in quickly and stay focused. This is something we're working on, and we're going to release the first feature very soon as part of the announcement. It's going to be very lean, okay? We're a startup going bottom-up: lean features growing into more and more comprehensive ones. And then, once you have the spec and implementation, you can either go from implementation to tests, and then run the tests and fix them like I presented to you, or you can also create tests from the spec, okay? From the spec directly to tests. [00:40:30] So now you have a really interesting thing going on here: you can start from spec, create tests, create code. You can start from tests and create code. You can start from an implementation: from code, create spec and tests. And we actually think the future is a very flexible one. You don't need to choose whether you're practicing traditional TDD or whatever you want to start with. If you already have some spec, created at some point because in one sprint you decided to write a spec to align on it with your team, et cetera, now you can go and create tests and an implementation. Or if you wanted to run ahead and write your code, creating tests and a spec that align with it will be relatively easy. So what I'm talking about is an extreme DRY concept; DRY is "don't repeat yourself." Until today, when we talked about DRY, it was "don't repeat your code." I claim that big parts of the spec, the tests, and the implementation repeat each other, but it's not complete repetition, because if the spec were as detailed as the implementation, it would actually be the implementation. The spec is usually in a different language; it could be natural language and visuals. What we're aiming for, our vision, is enabling the DRY concept to the extreme across all three: you write your tests, and that will help you generate the code and the spec; you write your spec, and that will help you do the tests and implementation. Now, the developer is the driver, okay? There will be a lot of "What do you think about this? Is this what you meant? Yes or no? Do you want to fix the code or the test? Click yes or no." You will still be the driver, but there's going to be extreme automation at the DRY level. So that's what we're announcing as the vision we're aiming for, and what we're providing these days in our product is the middle, what you see in the middle: our code integrity agents working for you right now in your IDE, and soon also as part of your GitHub Actions, et cetera, helping you align all these three.
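[Editor's note: a hypothetical sketch of what an "align spec, tests, and code" loop could look like with the developer as the driver; checkAlignment and regenerate stand in for LLM-backed calls and are not a published Codium AI API.]

```typescript
// Spec, tests, and code are three views of one behavior. An AI checker flags
// inconsistencies; the developer picks which artifact is the one to fix.
interface Artifacts {
  spec: string;
  tests: string;
  code: string;
}

interface Inconsistency {
  description: string;                // e.g. "spec says X, test asserts Y"
  fixTargets: Array<keyof Artifacts>; // artifacts that could be changed
}

// Stand-in for an LLM-backed consistency check.
async function checkAlignment(a: Artifacts): Promise<Inconsistency[]> {
  return []; // placeholder
}

// Stand-in for an LLM call that rewrites one artifact to match the others.
async function regenerate(a: Artifacts, target: keyof Artifacts): Promise<string> {
  return a[target]; // placeholder
}

async function alignLoop(
  a: Artifacts,
  askDeveloper: (issue: Inconsistency) => Promise<keyof Artifacts>
): Promise<Artifacts> {
  for (const issue of await checkAlignment(a)) {
    const target = await askDeveloper(issue); // developer clicks: spec, tests, or code
    a[target] = await regenerate(a, target);
  }
  return a;
}
```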
Alessio: This is great. How do you reconcile the difference in languages? A lot of times the spec is written by a PM or somebody who's more at the product level, while some of the implementation details come from backend developers for one thing and frontend developers for another. How do you help translate the language between the two? And then, in one of the posts on your blog, you mentioned that this is also changing how programming languages themselves work. How do you see that changing in the future? Are people going to start from English? Do you see a lot of them starting from code, and then it figures out the English for them? Itamar: Yeah. So first of all, I want to say that although we're working, as we speak, on more front-end frameworks and languages and their usage, we are currently focused on the backend. So, for example, as the spec, we won't let you input Figma, but don't be surprised if in 2024 the input for the spec could be a Figma file. Actually, you can see [00:43:30] demos of that with the pencil drawing from OpenAI when they revealed GPT-4. So we will have that, actually. I had a blog post, and in it I related to two different posts. In one, a very knowledgeable and respected person claims that English is going to be the new programming language and programming is dead. In another, a person I think equally respected said that English is a horrible programming language. And actually, I think both of them are correct; that's why, when I wrote the post, I related to both, and this is what we're saying here: nothing is really fully redundant, but what's annoying is that to align these three, you always need to work very hard, and that's what we want AI to help with. If there's an inconsistency, it will raise a question: which one is true? And you just click yes or no, test or code, which is what you can see in our product, and we'll fix the right one accordingly. So I think English, and visual language, and code, and the test language, let's call it that for a second, all of them are going to persist, and raising the level of automation in aligning all three is what we're aiming for. Swyx: You told me this before, so I'm actually just watching Alessio's reaction to it for the first time. Itamar: Yeah, yeah. Like, you're absorbing it. Swyx: No, no, this is... I mean, you can put your VC hat on, or compare: what is the most critical or unsolved question presented by this vision? Alessio: A lot of these tools, especially ones we've seen in the past, struggle with the dynamic nature of a lot of this, you know? [00:45:00] Sometimes, as you mentioned, people don't have time to write the tests. Sometimes people don't have time to write the spec. So sometimes you end up with things out of sync, you know? Or the implementation is moving much faster than the spec, and you need some of these agents to make the call sometimes, to be like, no, okay, the spec needs to change, because clearly, if you change the code this way, it needs to be like this in the future. I think my main question, as a software developer myself, is: what is our role in the future? How much should we intervene? Where should we intervene? I've been coding for like 15 years, but if I had been coding for two years, where should I spend the next year? Should I focus on being better at understanding product and explaining it? Should I get better at syntax, you know, so that I can write code? Would love to hear any thoughts. Itamar: Yeah. You know, there's going to be a difference between one to three years, three to six, six to ten, and ten to twenty. Let's think for a second about the idea that programming is solved. Then we're talking about a machine that can actually create any piece of code and start creating; we're talking about the singularity, right? Mm-hmm. If the singularity happens, then we're talking about a new set of problems. Let's put that aside.
Even if it happens, 2041 is my prediction, I'm not sure you should plan what you need to do around whether or not the singularity happens. So I [00:46:30] would aim for thinking about the future of the next five years or so. That's my recommendation, because beyond that it's so crazy anyway. Maybe not the best recommendation; take it with a grain of salt, and please consult with a lawyer. At least in the scope of the next five years, the idea is that the developer is the driver, who actually has amazing team members: agents working for him or her. And because he or she is the driver, you need to understand, especially, what you're trying to achieve, but also be able to review what you get. The better you are at the lower levels of programming in five years, I mean real programming languages, the more you'll be able to develop sophisticated software, and you will work at companies that probably pay more for sophisticated software. And the less skilled you are in the actual programming, the more you would be the programmer of the new era, almost a creator: you'll still maybe look at the code level, testing, et cetera, but what's important for you is being able to convert product requirements, et cetera, working with tools like Codium AI. So I think there will be different types of developers. If you think about it for a second, it's a natural evolution; it's true today as well: if you know Linux or assembly really well, et cetera, you'll probably work on LLVM, NVIDIA, [00:48:00] things like that, right? So I think it'll be the next step. I'm talking about the next five years. Again, 15 years out? I think that's a new episode, if you would like to invite me back. "Oh, you'll be back." Yeah, it's a new episode about how I think the world will look when you really don't need a developer, and we will be there as Codium, as you can see. Mm-hmm. Alessio: Do we want to dive a little bit into AutoGPT? You mentioned you're part of the community. Swyx: Obviously, "Try, Catch, Finally, Repeat" is also part of the company motto. Itamar: Yeah. So it actually really relates to what we're doing, and there's a reason we have a strong relationship and connection with the AutoGPT community and are part of it. As you can see, we've been talking about agents for a few months now, and we are building a designated, specific agent, because we're trying to build a product that works and earns developers' trust. We're talking about code integrity: we need it to work. Even though it will not be 100%, and our product is not 100% by the way, the UX/UI should speak the language of: "Okay, we're not sure here; please take the driver's seat. Do you want this or that?" But even if we're not close to 100%, we still need to work really well; just throwing out a number, 90%. And so we're building really designated agents, like the one that creates tests from code. It can create tests, run them, fix them. It's a few tests.
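[Editor's note: a minimal sketch of the create-run-fix test agent just described; generateTests, runTests, and repairTests are placeholder stubs, not a real API.]

```typescript
// The agent chains a few tasks (that's what makes it an agent rather than an
// assistant): generate tests, run them, ask for a fix, bounded to a few tries.
interface TestRun {
  passed: boolean;
  failureOutput: string;
}

async function generateTests(sourceCode: string): Promise<string> {
  return `// tests for:\n${sourceCode}`; // placeholder for an LLM call
}

async function runTests(testCode: string): Promise<TestRun> {
  return { passed: false, failureOutput: "1 example failed" }; // placeholder for a test runner
}

async function repairTests(testCode: string, failureOutput: string): Promise<string> {
  return testCode; // placeholder for a reflect-and-fix LLM call
}

async function testAgent(sourceCode: string, maxAttempts = 3): Promise<string> {
  let tests = await generateTests(sourceCode);
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const run = await runTests(tests);
    if (run.passed) return tests; // green: hand the suite back to the developer
    tests = await repairTests(tests, run.failureOutput);
  }
  return tests; // still red after maxAttempts: the developer takes the driver's seat
}
```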
So we really believe in that: we're [00:49:30] building a designated agent, while AutoGPT is a swarm of general agents that supposedly you can ask, "Please make me rich," or, "Make me rich by increasing my net worth," and it should be smart and knowledgeable enough to use a lot of agents and tools, et cetera, to make that work. So I think for the AutoGPT community it was less important to be very accurate at the beginning; it was more about showing the promise and starting to build a framework that aims directly at the end game, and improving from there. What we are doing is the other way around: we're building an agent that works and building from there toward the target I explained before. But because of this connection, although we come from different sides of the philosophy of how you need to build these things, we really love the general idea. So we caught it really early, with Toran, the maker of AutoGPT, building it, and I immediately started contributing. Guess what I contributed at the beginning? Tests, right? I started using Codium AI to build tests for AutoGPT, even finding problems that way, et cetera. So I became one of the, let's say, ten contributors, and then part of the core management team; I talk very often with Toran on different aspects. And we're even going to have a workshop... Swyx: A very small [00:49:00] meeting. Itamar: A working meeting, a workshop. And we're going to compete together in hackathons, to show that AutoGPT can be useful while, for example, Codium AI creates the tests for it, et cetera. So I'm part of that community, whether it's my team adding tests to it, or advising, or being in the management team, or helping Toran, really, on small things. He is an amazing leader and visionary and doing really well. Alessio: What do you think is the future of open-source development? Obviously this is a good example, right? You have code generating the tests, and in the future code could actually also implement what the tests want it to do. How do you see that changing? There are obviously not enough open-source contributors, and that's one of the main issues. Do you think these agents are maybe going to help us? Nadia Eghbal has this great book called Working in Public, and there's this type of project called the stadium model, where a lot of people use them and nobody wants to contribute to them. I'm curious whether a lot of noise will be added by these agents if we let them run on any repo that is open source. What are the contributing guidelines for humans versus agents? I don't have any of the answers, but those are some of the questions I've been thinking about. Itamar: Okay, so I want to repeat your question and make sure I understand you: if there are agents, for example, dedicated to improving code, why can't we run them on a full repository to fix it? The situation right now is that I don't think AutoGPT would be able to do that for you. Codium AI might, but it's not open-sourced right now. And as you can see, in a month or two you will be able to; we move really quickly, development-velocity-wise. Our motto is "moving fast with confidence," by the way, so we try to release every day or so, even three times a day on the backend, et cetera.
And we'll develop more features, enabling you, for example, to run over an entire repo, but it's not open source. So, about open source: with AutoGPT or LangChain, I think you can't really ask, "Please improve my repository, make it better." I don't think it will work right now, because, to softly quote Ilya from OpenAI: he said that right now, let's say a certain LLM is 95% accurate; now you're concatenating the results, so the accuracy is decaying. If each of ten chained steps is 95% accurate, for example, the whole chain is only about 0.95^10, roughly 60%, accurate. What you need is more engineering frameworks and work to be done there in order to deal with the inaccuracies, et cetera, and that's what we specialize in at Codium. But I'm not saying AutoGPT won't be able to get there: more tools are going to be added, and [00:52:30] more prompt engineering dedicated to this idea will be added. By the way, I'm talking with Toran about Codium, for example, being one of the agents for AutoGPT. Think about it: AutoGPT is there for any goal, like "increase my net worth," not focused, like us, on fixing or improving code. We might be one of its agents. By the way, we're also working on a plugin for ChatGPT; we're actually almost finished with it. So that's how I think it's going to be done. Again, open-sourcing is not something we're thinking about right now; we want to be really good before we... Swyx: Open-source it. That was all very impressive, and your vision is actually very encouraging as well; I'm very excited to try it out myself. I'm just curious about the Israel side of things. You're visiting San Francisco on a two-week trip for this special program you can tell us about. But also, I think a lot of American developers have heard that Israel has a really good tech scene; mostly it's just security startups, you know: "I was in some special unit in the IDF, and I come out and I'm doing the same thing again, but for enterprises." Maybe just describe for the rest of the world: what is the Israeli tech scene like? What is this program that you're on, and what should people know? Itamar: So I think Israel is the most condensed in startups per capita; I think we're number one, really. Or startups per square meter; I think we're number one there as well. Because of these properties, there is a very strong community: everyone around you is an entrepreneur or working at a startup, and when you go to the bar or the coffee shop, you hear, if it's 2021, people talking about secondaries, and if it's 2023, talking about how amazing GenAI is. Everyone around you is in the scene, and that's a lot of networking and data propagation, somehow similar to the Bay Area and San Francisco here, and it helps, right? So I think that's one of our strong points. You mentioned some others; I'm not saying they don't help. Being in the IDF, the army, at the age of 19, you go and start dealing with very advanced technology, and that helps a lot. And then, going back to the community, this community is all over the world. For example, there is this program called Icon.
It's basically Israelis in the Valley who created a program for Israelis from Israel to come over; it's called Silicon Valley 101, to learn what's going on here. Because, with all due respect to the tech scene in Israel, here it's the real thing, right? So it's a non-profit organization run by Israelis who moved here, and it brings you over and brings in people from a16z or Google or Navon, amazing people from unicorns or up-and-coming startups or accelerators, who give you up-to-date talks and also connect you to relevant people. And that's why I'm here, in addition, you know, to [00:58:30] meeting you and participating in this amazing podcast, et cetera. Swyx: Yeah. Oh, well, I think there's a lot of exciting tech talent in Tel Aviv, and I'm glad that your office is Israeli. Itamar: One thing I wanted to say: yes, of course, as we said, security is a very strong scene, but actually there's water purification, agriculture tech, and an awful lot of other things; usually it comes from necessity. Like, a big part of our country, of our state, is desert. And AI, by the way, is also big in Israel. For example, I think there's an Israeli competitor to OpenAI; I'm not saying it's as big, but it's AI21, which I think is one of the ten most profound research labs. I love their work. Swyx: Yeah, I think we should try to talk to one of them. But yeah, when you and I met, we connected a little bit over Singapore; you know, I was in the Singapore Army, and you were in the Israeli army. We do have a lot of connections between our countries: small countries that don't have a lot of natural resources and have to make do in the world by figuring out some other services. I think the Singapore startup scene has not done as well as the Israeli startup scene, so I'm very interested in how small countries can have a world impact, essentially. Itamar: It's a question we get asked a lot: why? For example, let's go to the soft skills. I don't think failing is treated as a bad thing. Like, sometimes VCs prefer to put money on an entrepreneur who failed in his first startup and then succeeded, because now that person knows what it means to fail and is very hungry to succeed. So generally, there are a few reasons; it's hard to put a finger on it exactly, but we talked about a few of them. One other thing: failing is not stigmatized. This is my fourth company. I did one, it wasn't a startup, it was a company, as a teenager. Then I had my first startup, my second company, which had an amazing run but then a very beautiful collapse. And then my third company, my second startup, eventually exited successfully to Alibaba. So there's a lot of trial and error, which is appreciated, not suppressed. I guess that's one of the reasons. Alessio: Want to jump into the lightning round? Swyx: Yes. I think we sent you the prep, but there are just three questions now; we've actually reduced it quite a bit. Alessio: And we can read them so you can take time to answer; you don't have to answer right away.
First question: what is something that already happened in AI that you thought would take much longer? Itamar: Okay, so I have to... I hope it doesn't sound arrogant,
Clean Code, or "Czysty Kod" in Polish. That's the title of a book we often recommend to young programmers, because one of the stages in developing the programmer's craft is writing code that is simple to understand. This art is not easy, but there are a dozen or so different rules and hints whose application can get you "sufficiently clean code". The only question is which of them to choose and when to apply them. ✅ What is Clean Code? ✅ How do you define Clean Code, and which rules can you apply to it? ✅ Can Clean Code be universal and identical across all of our projects? ✅ Which rules do we apply in our projects, and what do we watch out for? In this episode, we share how we look at Clean Code: when and why we apply certain rules, and why SOLID is not always required.---Key links:- DevEnv Discord server - https://bit.ly/devenv-discord- DevEnv YouTube - https://bit.ly/devenv-yt- Clean Code Mind Map - https://devenv.pl/download/clean-code.pdf---In this episode we talked about:(00:32) Introduction to the episode topic(00:45) The DevEnv Discord server(01:18) Application context matters(02:30) Implementing for the future(03:10) AHA Programming(04:08) Agreeing on the "good enough code" level(06:55) Everyone should understand the requirements for the code(07:20) Clean Code rules you can apply(08:37) Ready-made rule sets for SCA tools(09:02) A shared naming standard(12:00) Standards at many levels(15:05) We avoid comments without justification(16:02) When comments are justified(18:03) The Boy Scout Rule(19:22) Magic Numbers & Strings(21:47) The DRY principle - Don't Repeat Yourself(24:05) The SOLID principles(25:45) Technical debt, rules, and consequences(26:32) "Always Tests" in the Definition of Done(27:15) Learning from mistakes as a way to improve your code(27:55) The right level of satisfaction(29:00) How do you measure Clean Code?(35:17) Wrap-up + the most important DevEnv places---
Welcome to the TOEFL with Andrea podcast, where I help you earn your dream score. Today's lesson, like every Wednesday, focuses on the writing section of the test, and today you'll learn about "Don't Repeat Yourself. Don't Repeat Yourself". As a special thanks for listening, I've got something special for you. It's a free mini-course. That's right, you can get a free video lesson that I made that teaches you the common English writing punctuation mistakes. You can access the course immediately for free at StudyWithAndrea.com/MINI. Happy learning, Andrea. Support the show
Steph and Chris announce Joël Quenneville as the new host of the show!
DRY stands for "Don't Repeat Yourself", but what is hiding behind this principle? That's the topic of today's episode! Episode links: - Original article: https://code-garage.fr/blog/le-principe-dry-c-est-quoi/
Welcome to the TOEFL with Andrea podcast, where I help you earn your dream score. Today's lesson, like every Wednesday, focuses on the writing section of the test, and today you'll learn about "Don't Repeat Yourself". As a special thanks for listening, I've got something special for you. It's a free mini-course. That's right, you can get a free video lesson that I made that teaches you the common English writing punctuation mistakes. You can access the course immediately for free at StudyWithAndrea.com/MINI. Happy learning, Andrea. Support the show
Being pregnant is hard, but this tapas episode is good! Steph discovered and used a #yelling Slack channel and attended a remote magic show. Chris touches on TypeScript design decisions and edge cases. Then they answer a question captured from a client Slack channel regarding a debate about whether I18n should be used in tests and whether tests should break when localized text changes. This episode is brought to you by ScoutAPM (https://scoutapm.com/bikeshed). Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy. Emma Bostian (https://twitter.com/EmmaBostian) Ladybug Podcast (https://www.ladybug.dev/) Gerrit (https://www.gerritcodereview.com/) Gregg Tobo the Magician (https://astonishingproductions.com/) Sean Wang - swyx - better twitter search (https://twitter.com/swyx/status/1328086859356913664) Twemex (https://twemex.app/) GitHub Pull Request File Tree Beta (https://github.blog/changelog/2022-03-16-pull-request-file-tree-beta/) Sam Zimmerman - CEO of Sagewell Financial on Giant Robots (https://www.giantrobots.fm/414) TypeScript 4.1 feature (https://devblogs.microsoft.com/typescript/announcing-typescript-4-1/) The Bike Shed: 269: Things are Knowable (Gary Bernhardt) (https://www.bikeshed.fm/269) TSConfig Reference - Docs on every TSConfig option (https://www.typescriptlang.org/tsconfig#noUncheckedIndexedAccess) Rails I18n (https://guides.rubyonrails.org/i18n.html) This episode is brought to you by Studio 3T (https://studio3t.com/free). Try Studio 3T's full suite of features for 30 days, no payment details needed. Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. There are a couple of new things in my world, so one of them that I wanted to talk about is the fact that being pregnant is hard. I feel like this is probably a known thing, but I feel like I don't hear it talked about as much as I'd really like, especially in sort of a professional context. And so I just wanted to share, for anyone else that may be listening: if you're also pregnant, this is hard. And I also really appreciate my team. Going through the first trimester is typically where you experience a lot of morning sickness and fatigue, and I had all of that. And so I was at the point that most of my days I didn't even start till about noon, and even some days, starting at noon was a struggle. And thankfully, with the thoughtbot client that I'm working with, most of the teams are on West Coast hours, so that worked out pretty well. But I even shared a post internally and was like, "Hey, I'm not doing great in the mornings. And so I really can't facilitate any morning meetings. I can't be part of some of the hiring intros that we do," because we like to have a team lead provide a welcome and then a closing for anyone that's coming in for an interview day. I couldn't do those, and those normally happen around 9:00 a.m. Eastern Time. And everybody was super supportive of it. So I really appreciate all of thoughtbot and my managers and team being so great about this. Also, the client team is wonderful. It turns out growing a little human while working full time is hard, and I'm learning just how hard.
It's an interesting challenge. Oh, and as part of that appreciation because…so there's just not a lot of women that I've worked with. This may be one of those symptoms of being in tech where, one, I haven't worked with tons of women, and then, two, I haven't worked with a woman who is also pregnant and going through that as well. So it's been a little bit isolating in that experience. But there is someone that I follow on Twitter, @EmmaBostian. She's also one of the co-hosts for the Ladybug Podcast. And she has been just sharing some of her, like, "I am two months sleep deprived." She's had her baby now, and she is sharing some of that journey. And I really appreciate people who just share that journey and what they're going through because then it helps normalize it for me in terms of what I'm feeling. I hope this helps normalize it for anybody else that might be listening too. CHRIS: I certainly can't speak to the specifics of being pregnant. But I do think it's wonderful for you to use this space that we have here to try and forward that along and say what your experience is like and share that with folks and hopefully make it a little bit better for everyone else out there. Also, you snuck in a sneaky pro-tip there, which is work on the East Coast and have a West Coast team. That just sounds like the obvious correct way to go about this. STEPH: That has worked out really well and been very helpful for me. I'm already not a great morning person; I've tried. I've really strived at times to be a morning person because I just have this idea in my head that morning people get more stuff done. I don't think that's true, but I just have that idea. And I'm not the world's best morning person, so it has worked out for many reasons but yeah, especially in helping me get through that first trimester and also just supporting family and other things that are going on. Oh, I also learned a pro-tip about Twitter. This is going to seem totally random, but it was relevant when I was searching for stuff on Twitter [laughs] that was related to tech and pregnancy. I wanted to be able to search for something that someone I follow had said, but I couldn't remember who said it. And so I found that in the search bar, I can add filter:follows. So you can have your search term, like if you're looking for cake, or pregnancy, or sleep-deprived, and then add filter:follows, and then that will filter the search results to everybody that you follow. I imagine that probably works for followers too, but I haven't tried it. CHRIS: I like the left turn you took us on there but still keeping it connected. On the topic of Twitter search, they apparently have a very powerful search, but it's also hidden, and you've got to know the specific syntax and whatnot. But there is a wonderful project by Shawn Wang, AKA Swyx on the internet; bettertwitter.netlify.com is the URL for it. I will share a link to his tweet introducing it. But it's a really wonderful tool that just provides a UI for all of these different filters and configurations, and it both makes discoverability that much better and also makes it easy to compose one of these searches and use it. The other thing that I'll recommend is, I think it's a Chrome plugin, I'm guessing that's what I'm working with here, like a browser extension, but it's called Twemex, T-W-E-M-E-X. And there's a sidebar in Twitter now, which just seems wonderful and useful.
So as I'm looking at a Swyx post here, or a tweet as they're called on Twitter because I know that vernacular, there's a sidebar which is specific to Shawn Wang. And there's a search at the top so I can search within it. But it's just finding their most popular tweets and putting that on a sidebar. It's a very useful contextual addition to Twitter that I found just awesome. So that combination of things has made my Twitter experience much better. So yeah, we'll have show notes for both of those as well. STEPH: Nice. I did not know about those. This may cause someone to laugh at me because maybe it's easier than I think. But I can never remember that advanced search that Twitter does offer; I have to search for it every time. I just go to Google, and I'm like, advanced Twitter search, and then it brings up a site for me, and then I use that as the one that Twitter does provide. But yeah, from the normal UI, I don't know how to get there. Maybe I haven't tried hard enough. Maybe it's hidden. CHRIS: It's like they're hiding it. STEPH: Yeah, one of those. [laughs] CHRIS: It's very costly. They have to like MapReduce the entire internet in order to make that search work. So they're like, well, what if we hide it because it's like 50 cents per query? And so maybe we shouldn't promote this too much. STEPH: [laughs] CHRIS: And let's just live in the moment, everybody. Let's just swim in the Twitter stream rather than look back at the history. I make guesses about the universe now. STEPH: [laughs] On a different note, I also discovered at thoughtbot, in our variety of Slack channels, that we have a yelling channel, and I had not used it before. I had not hung out there before. It's a delightful channel. It's a place that you just go, and you type in all caps. You can yell about anything that you would like to. And I specifically needed to yell about Gerrit, which is the replacement or the alternative that we're using for GitHub or GitLab, or Bitbucket, or any of those services. So we're using Gerrit, and I've been working to feel comfortable with the UI and then be able to review CRs and things like that. My vernacular is also changing because my team refers to them as change requests instead of pull requests. So I'm floating back and forth between CRs and PRs. And because I'm in Gerrit world, I missed some of the updates that GitHub made to their pull request review screen. And so then I happened to hop into GitHub one day, and I saw it, and I was like, what is this? So that was novel. But going back to yelling, I needed to yell about Gerrit because I have not found a way to collaborate with someone who has already pushed up changes. I have found ways that I can pull their changes, which took a little while. I found it in a sneaky little tab called Download. I didn't expect it to be there. But then the actual snippet is like, run this in your terminal, and this is how you pull down the changes. And I'm like, okay, so I did that. But I can't push to their existing changes because then I get, well, you're not the owner, so we're going to block you, which is like, cool, cool, cool. Okay, I kind of get that because you don't want me messing up somebody else's content or something that they've done. But I really, really, really want to collaborate with this person, and we're trying to do something together, and you're blocking me. And so I had to go to the yelling channel, and I felt better. And I'm yelling again.
[laughs] Maybe I don't feel that great because I'm getting angry again talking about it. CHRIS: You vented a little into the yelling channel; maybe not everything, though. STEPH: Yeah, I still have more to vent because it's made life hard. Every time I wanted to push up a change or pull down someone else's changes, there are now all these CRs that then I just have to go and abandon, which is the terminology for essentially closing it and ignoring it, so I'm constantly going through. And if I do want to pull in changes or collaborate, then there's a flow where either I abandon mine, or I pull in their changes, but then I have to squash everything, because if you push up multiple commits to Gerrit, it's going to split those commits into different CRs; don't like that. So there have been a couple of pain points. And yeah, so plus-one for yelling channels; let people get it out. CHRIS: Okay, so definitely some feelings that you are working through here. I'm happy to work together as a team to get through some of them. One thing that I want to touch on is you very quickly hinted that GitHub has got a bunch of new things that are cool. I want to talk about those. But I want to touch [laughs] on an anecdote. You talked about pushing something up to someone else's branch. You're like, oh, you know, I made some changes locally, and I'm going to push them up. I had an interesting experience once where I was interacting with another developer. I had done some code review. They weren't quite understanding where I was. They had a lot of questions. And finally, I said, you know what? This will just be easier. Here, I pushed up a commit to your branch, so now you can see what I'm talking about. And I thought of this as a very innocuous act, but it was not interpreted that way. That individual interpreted it in a very aggressive sort of way; it was not taken well. And I think part of that was related to the fact that I think of Git commits as just these little ephemeral things where you're like, throw it out, feel free. This is just the easiest way for me to communicate this change in the context of the work that you're doing. I thought I was doing a nice favor thing here. That was not how it went. We had a good conversation after I got to the heart of where we both were emotionally on this thing. It was interesting. The interaction of emotion and tech is always interesting. But as a result, I'm very, very careful with that now. I do think it's a great way to communicate, as long as I've gotten buy-in from the person beforehand. But I will always spot-check and be like, "Hey, just to confirm, I can just push up a commit to your branch, but are you okay with that? Is that fine with you?" So I've become very cautious with that. STEPH: Yeah, that feels like one of those painful moments that highlights how the people you work with, whom you're accustomed to having a certain level of default trust with, can differ from someone else who doesn't have that, where the cup is half-full in terms of that trust, or that "this person means well" kind of feeling towards a colleague or towards someone that they're working with. So it totally makes sense that it's always good to check and just be like, "Hey, I'd love to push up some changes to your branch. Is that cool?" And then once you've established that, that just makes it easier. But I do remember that happening, and yeah, that was a bit painful and shocking because we didn't see it coming, and we learned from it.
CHRIS: I do think it's an important thing to learn, though, because for me, in that moment, this was a throwaway operation that I thought almost nothing of, but then another individual interpreted it in a very different way. And that can happen; that can happen across tons of different things. And I don't even want to live in the idealized world where it's just tech, we're just pushing around zeros and ones, there's no human to this. No, I actually believe it's a deeply human thing that we're doing here. It's our job to teach the computers to be a little closer to us humans or something like that. And so it was a really pointed clarification of that for me, where it was this thing that I didn't even think once about, much less twice, and yet someone else interpreted it in such a different way. So it was a useful learning situation for me. STEPH: Yeah, I totally agree. I think that's a really wise default to have, to check in with people before assuming that they'll be comfortable with something that we're comfortable with. CHRIS: Indeed. But shifting back to what you mentioned of GitHub, a bunch of new stuff came to GitHub, and you were super excited about it. And then you went on to say other things about another system. [laughs] But let's talk about the great things in GitHub. What are the particular ones that have caught your eye? I've seen some, but I'm intrigued. Let's compare notes. STEPH: So this is one of those where I hadn't seen GitHub in quite a while, and then I hopped in, and I was like, this is different. But one of the things that did stand out to me right away is that on the left-hand side, I can see all of the files that have been changed, and so that's a really nice tree where I then immediately know. Because that was one of the things that I often did when going to a PR: I would see what files are involved in this change, because it was just a nice overview of what part of the application am I walking through? Are there tests for this? Have they altered or added tests? And so I really like that about it. I'm sure there's other stuff. But that is the main thing that stood out to me. How about you? CHRIS: Yeah, that sidebar file tree is very, very nice, which I find surprising because I don't use a file tree in my editor. I only do fuzzy finding to jump to files. But I think there's something about whenever GitHub had the flat file list, these are all the files that are changed, I was like, this is just noise. I can't look at this and get anything out of it. But the file tree has a shape to it that my brain can pattern match on. It's just a much more discoverable way to observe that information. So I've really loved that. That was a wonderful one. The other one that I was surprised by is that GitHub's semantic code analysis stuff has gotten much, much better over time, subtly. I didn't even notice this happening. But I was discussing something with someone today, and we were looking at it on GitHub, and I just happened to click on an identifier, and it popped up a little thing that says, "Oh, do you want to hop to the references or the definition of this?" I was like, that is what I want to do. And so I hopped to the definition, hopped to the definition of another thing, and was just jumping around in the code in a way that I didn't know was available.
But then also, I was in a pull request at one point, and someone was writing a spec, and they had introduced a helper, just stubbing something at the bottom of a given spec file. And it's like, I feel like we have this one already. And I just clicked on the identifier. I think it might have actually been a matcher in RSpec, so it was like have_alert. And I was like, oh, I feel like we have this one, a matcher specific to flash message alerts on the page. And I clicked on it, and GitHub provided me a nice little inline dialog that showed me all of the definitions of have_alert, which I think we were up to four of at that point. So it had been copied and pasted across a couple of different files, which I think is totally fine and a great way to start, but they were very similar implementations. I was like, oh, looks like we actually already have this in a couple of places; maybe we clean it up and extract it to a common spec support thing, and ta-da, I was able to do all of that from the GitHub pull request UI. And I was like, this is awesome. So kudos to the GitHub team for doing some nifty stuff. Also, can I get into the merge queue? Thank you. ... STEPH: [laughs] There it is. That is very cool. I didn't know I could do that from the pull request screen. I've seen it where, if I'm browsing code, then I can see a snippet of where everything's defined and then go there, but I hadn't seen that from the pull request. I did find the changelogs for GitHub that talk about the introduction of having the tree, so we'll be sure to include a link in the show notes for that too. But yes, thank you for letting me use our podcast as a yelling channel. It's been delightful. [laughs] Mid-roll Ad Hi, friends, and now a quick break to hear from today's sponsor, Scout APM. Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat. Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed. CHRIS: Well, speaking of podcasts, actually, there was an interesting thing that happened where the CEO of Sagewell Financial, the company of which I am the CTO, Sam Zimmerman, went on the Giant Robots Podcast with Chad a couple of weeks ago. So that is now available. We'll link to that in the show notes. I'll be honest; it was a very interesting experience for me. I listened to portions of it. If we're being honest, I searched for my name in the transcript, and it showed up, and I was like, okay, that's cool. And it was interesting to hear two different individuals that I've worked with, either in the past or currently, talking about it.
But then also, for anyone that's been interested in what I'm building over at Sagewell Financial and wants to hear it from someone who can probably do a much better job of pitching and describing the problem space that we're working in, and all of the fun challenges that we have, and that we're hopefully living up to and building something very interesting, I think Sam does a really fantastic job of that. That's the reason I'm at the company, frankly. So yeah, if anyone wants to hear a little bit more about that, that is a very interesting episode. It was a little weird for me to listen to personally, but I think everybody else will probably have a normal experience listening to it because they're not the CTO of the company. So that's one thing. But moving on, I feel like today's going to be a grab bag episode or tapas episode, lots of small plates, as we were discussing as we were prepping for this episode. But to share one little thing that happened, I've been a little more removed from the code of late, something that we've talked about on and off in previous episodes. Thankfully, I have a wonderful team that's doing an absolutely fantastic job moving very rapidly through features and bug fixes and all those sorts of things. But also, I'm just not as involved even in code review at this point. And so I saw one that snuck through today that, I'm going to be honest, I had an emotional reaction to. I've talked myself down; we're fine now. But the team collectively made the decision to move from a line length of 80 characters to a line length of 120 characters, and I had some feelings. STEPH: Did you fire everybody? [laughs] CHRIS: No. I immediately said, doesn't really matter. This is the whole conversation around auto-formatting tools: we're just taking the decision away. I personally am a fan of the smaller line length because I like to have multiple files open left to right. That is my reason for it, but that's my reason. A collective of the developers that are frankly working more in the code than I am at this point decided this was meaningful. It's a thing that we can automate, and it's not a thing that we have to manage. So I was like, cool. There we go. The one thing that I did follow up on, I was like, okay, y'all snuck this one in, it's fine, I'm fine with it. I feel fine; everything's fine. But let's add that to the git-blame-ignore-revs file, which is a useful thing to know about. Because otherwise, we have a handful of different changes like this where we upgrade Prettier and suddenly the manner in which it formats the files changes, so we have to reformat everything at once. And this magical file that exists in Git to say, "Hey, ignore this revision because it is not relevant to the semantic history of the app," also takes that decision out of consideration, like, yeah, should we reformat or not? Because then it'll be noisy. That magical file takes that decision away, and so I love that.
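[Editor's note: for anyone who hasn't used the file Chris mentions, here's the shape of it; the commit SHA below is hypothetical. Locally, git blame honors the file via the blame.ignoreRevsFile config, and GitHub's blame view recognizes the .git-blame-ignore-revs filename at the repo root. The line-length knob itself is Prettier's printWidth option.]

```
# .git-blame-ignore-revs
# Reformat entire codebase: 80 -> 120 character lines (hypothetical SHA)
d0d0caca00000000000000000000000000000000

# One-time local setup so `git blame` skips the commits listed above:
#   git config blame.ignoreRevsFile .git-blame-ignore-revs
```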
STEPH: I so love the idea, because you took vacation recently, twice. So I love the idea that there was a little coup, and people are applauding, and they're like, while Chris is on vacation, we're going to merge this change [laughs] that changes the character line. And yeah, that brings me joy. Well, I'm glad you're working through it. Sounds like we're both working through some hard emotional stuff. [laughs] CHRIS: Life's tricky, is all I'm going to say. STEPH: I am curious, what prompted the 80 characters versus 120? This is one of those areas that's like, yeah, I have my default preference, like you said. But I'm more intrigued when people are interested in changing it and what goes with it. So do you remember one of the reasons that 120 just suited their preferences better? CHRIS: Frankly, again, I was not super involved in the discussion or what led them to it. STEPH: [laughs] CHRIS: My guess is 120 is used...I think 80 is a pretty common one. I think 120 is another of the common ones. So I think it's just a thing that exists out there in the mindshare. But also, my guess is they made the switch to 120 and then reformatted a few files that had, like, ah, this is 85 characters, and that's annoying. What does it look like if we bump it up? And so 120 provided a meaningful change: this is a thing that splits into four lines if we have an 80-character limit, or it's one line if it's 120 characters, which is a surprising thing to say, but that's actually the way it plays out in certain cases, because the way Prettier breaks lines isn't just to put stuff on the next line always; it's got to break across multiple lines, actually. All right, now that we're back in the opinion space, I have a strong one. STEPH: This is The Bike Shed. We can live up to that name. [laughs] CHRIS: So I do want an additional configuration in Prettier Ruby. This is the thing I'll say. Maybe I can chase down Kevin Newton and see if he's open to this. But when Prettier breaks a method call with arguments going into it but no parens on that method call, and it breaks out to multiple lines, it does the dangling indent thing, which I do not like. I find it distasteful; I find it noisy, the shape of the code. I'm a big fan of the squint test. I know that from Sandi Metz, I believe, or maybe it's Avdi Grimm; I associate it with both of them in my mind. But it's just a way to look at the code and kind of squint, and you see the shape of it, and it tells you something. And when the lines break in that weird way, and you have these arbitrary dangling indents, the shape of the code is broken up. And I don't feel so strongly. I actually regularly stop myself from commenting on pull requests about this because it's very easy: all you need to do is add explicit parens, and then Prettier will wrap the line in what I believe is a much more aesthetically pleasing, concise, consistent, lots of other good adjectives here that are definitely just my preferences and not facts about the world. So what I want is: Prettier, hey, if you're going to break this line across multiple lines, insert the parens. Parens are no longer optional for breaking across multiple lines; parens are only optional within a given line. I want that configuration, because this is now one of those things where I could comment on it, and if they added the optional parens, then Prettier would reformat it in a different way. And I want my auto-formatter to not give me ways to do stuff. Like, constrain me more, but also within the constraints of the preferences that I have, please, thank you. STEPH: I love all the varying levels there [laughter] of you want a thing, but you know it's also very personal to you, and how you're walking that line and hopping back and forth on each side. I also love the idea. We have the idea of clean code. I really want something that's called distasteful code now [laughs], where you just give examples of distasteful code, yes.
Well, I wish you good luck in your journey [laughs] and how this goes and how you continue to battle. I also appreciate that you mentioned when you're reviewing code how you know it's something that you really want, but you will refrain from commenting on that. I just appreciate when people have that filter to recognize, like, is this valuable? Is it important? Or, like you said, how can we just make this more of the default so then we don't even have to talk about it? And then lean into whatever the default the team goes with. CHRIS: Well, thank you. I very much appreciate that because, frankly, it's been very difficult. STEPH: I do have something I want to yell about but in a very positive way or pranting as we determined or, you know, raving, the actual real term that wonderful listeners pointed out to us. CHRIS: Prant for life. That's my stance. STEPH: We had a magic show at thoughtbot. It was all remote, but the wonderful Gregg Tobo, the magician, performed a magic show for us where we all showed up on Zoom. And it was interactive, and it was delightful, and it was so much fun. And so if you need something fun for your team that you just want to bring folks together, highly recommend. I had no idea I was going to enjoy a magic show this much, but it was a lot of fun. So I'll be sure to include some links in the show notes in case that interests anyone. But yeah, magic. I'm doing jazz hands. People can't see it, but magic. I like how you referred earlier, saying that today is more of like a tapas episode. And I'm realizing that all of my tapas are related to being pregnant, yelling, and magic shows, and I'm okay with that. [laughs] But on that note, what else is on your tapas plate? CHRIS: Actually, a nice positive one that came into the world...I always like when we get those. So this is interesting because I was actually looking back at the history, and I had Gary Bernhardt on The Bike Shed back in Episode 269. We'll include a link in the show notes. But we talked a bunch about various things, including TypeScript. And I was lamenting what I saw as a pretty big edge case in TypeScript. So the goal of TypeScript is like, all right, JavaScript exists, this is true. What can we do on top of that? Let's not fundamentally change it, but let's build a type system on top of it and try and make it so that we can enforce correctness but understand that JavaScript is a highly dynamic language and that we don't want to overconstrain and that we've got to meet it where it is. And so one of the design decisions early on with TypeScript is if you have an array and you say like it's an array of integers, so you have typed that array to be this is an array of int, or it will be an array of number in JavaScript because JavaScript doesn't have integers; they only have numbers. Cool. [laughs] Setting aside other JavaScript variables here, you have an array of numbers. And so if you use element access to say, like, say the name of array is array of nums and then use brackets and you say zero, so get me the first element of that array. TypeScript will infer the type of that to be a number. Of course, it's a number, right? You got an array of numbers, you take a number out of it, of course, you're going to have a number, except you know what's also an array of numbers? An empty array. Well, of course. So there's no way for TypeScript because that's a runtime thing, whether or not the array is full of things or not. Or imagine you get the third element from the array. 
Well, JavaScript will either return you the third element, which indeed is a number, or undefined because there's no third element in this array. So that is an unfortunate but very understandable edge case that TypeScript was like, listen, this is how JavaScript works. So we're not going to…frankly, we don't think the people embracing TypeScript and bringing it into their world would accept this amount of noise because this is everywhere. Anytime you interact with an array, you are going to run into this, this sort of uncertainty of did I actually get the thing? And it's like, yeah, no, I know how many things are in the array that I'm working with. Spoiler, you maybe don't is the answer. And so, we ran into this edge case in our codebase. We were accessing an element, but TypeScript was telling us, "Yes, definitively, you have an object of that type because you just got it out of an array, which is an array of that type." But we did not; we had undefined. And so we had, you know blah is not a method on undefined or whatever that classic JavaScript runtime error is. And I was like, well, that's very sad. But now we get to the fun part of the story, TypeScript, as of version 4.1, which came out like the week that I recorded with Gary Bernhardt, which was interesting to look at the timeline here. TypeScript has added a new configuration. So a new strictness dial that you can configure in your tsconfig called noUncheckedIndexedAccess. So if you have an array and you are getting an element out of it by index, TypeScript will say, "Hey, you got to check if that's undefined," because to be clear, very much could be undefined. And I was so happy to find this. We turned it on in our codebase. It found the error in the place that we actually had an error and then found a few others that I think probably had errored at some times. But it was just one of those for me very nice things to be able to dial up the strictness and enforce correctness within our codebase, and so I was very happy about it. Other folks may say that seems like too much work. And, you know, I get that, I get that take. I'm definitely on the side of I'm willing to go through the effort to have enforced correctness, but you know, that's a choice. STEPH: Yeah, that's thoughtful. I like that, how you said you can dial up the strictness so then as you are introducing TypeScript, then people have that option. There is an argument there in the back of my head that's like, well, if you're introducing types, then you want to start more strict because then you're just creating problems for yourself down the road. But I also understand that that can make things very difficult to then introduce it to teams in existing codebases. So that seems like a really nice addition where then people can say, "Yeah, no, I really want the strictness. This is why I'm here," and then they can turn that on. CHRIS: So TypeScript in the configuration has strict mode, so you say strict true. And that is a moving target with each new version of TypeScript. But it's their sort of [inaudible 28:14] set of things that are part of strict, but apparently, this one's not in it. So now I'm like, wait, can I have a stricter? Can I have a strictest option? Can I have dial it to 11, please? [laughs] Really rough me up and make sure my code is correct. But it is the sort of thing like when we turn any of these on; it will find things in our codebase. Some of them, we have to appease the compiler even though we know the code to be correct. 
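To make both the edge case and the new flag concrete, here is a short sketch, assuming TypeScript 4.1 or newer:

```ts
// tsconfig.json (excerpt):
//   { "compilerOptions": { "noUncheckedIndexedAccess": true } }

const nums: number[] = [];

// Without the flag, TypeScript infers `number` for an indexed read,
// even though this particular array is empty at runtime:
const first = nums[0];

// With noUncheckedIndexedAccess enabled, `first` is `number | undefined`,
// so using it as a number no longer compiles until it is narrowed:
// first.toFixed(2); // error: 'first' is possibly 'undefined'

if (first !== undefined) {
  console.log(first.toFixed(2)); // safe: narrowed to `number`
}
```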
But the code is not provably correct as it sits in our file. So I am, again, happy to make that exchange. And I like that TypeScript as a project gives us configurability. But again, I am on team where's the strictest button? I would like to push that as hard as I can and live that life. STEPH: Yeah, I like that phrasing that you just said about provably correct. That's nice. CHRIS: That's the world I want to live in, everything you own in the box to the left, which is probably correct. STEPH: [laughs] That's how that song goes. CHRIS: Yeah. This is a reference to move errors to the left, which I think I've referenced before. But now that I'm just referencing Beyoncé and not the actual article, it's probably worth referencing the article, but the idea of, like, if a user hits an error, that's not great. So let's move it back to QA, that's a little further to the left in sort of the timeline. But what if we could move it to an automated test in CI? But what if we could move it into your editor? What if we could move it even further to the left? And so, a type system tends to be sort of very far ratcheted up to the left. It's as early as possible that you can catch these. So again, to reference Beyoncé, everything you own in a box to the left. STEPH: [singing] Everything you own in the box to the left. CHRIS: Thank you for doing the needful work there. STEPH: [laughs] Mid-roll Ad And now a quick break to hear from today's sponsor, Studio 3T. When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy. And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase. You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free that's studio3t.com/free. STEPH: I have a question for you that I'd really love to get your opinion on because I myself I'm waffling back and forth where someone brought up some really great points about a concern or just a question they had brought up around testing and i18n specifically. And I agree with the things that they're saying, but yet, there's also a part of me that doesn't, and so I'm Stephanie divided. And so, I'm trying to figure out where I stand on this. So let me dive in and give you some context; I'm going to share the statement/question that they had asked. So here we go. "One of my priorities has been I should be able to review a test without having to reference any other code. References to i18n means that I have to go over to YAML and make sure the right keys have the right values, and that seems error-prone. In some cases, a lack of a hit in the YAML defers to defaults. If the intent is to override the name of model attribute and error messages and it is coded incorrectly, the code fails silently without translating and uses the humanized attribute name, and that would go undetected. 
If libraries change structure, it might also fail silently as well, so to me, the only failsafe way is to be fully explicit in test." So this goes with the idea that if you're writing tests and then you're testing text, but it's on the screen or perhaps an email, that you're actually going to assert against that string that is shown to the user instead of referencing the i18n keys. And then that also backs up this person's idea that you really want to not have to jump around. If you're reading a test, everything you really need to know about that test should live very close by. And I really agree with that initial statement; I want everything that's very close to the test, especially if it's anywhere in that expectation line, I really want it close, so I can understand what's the expectation, what's under test, what are the inputs, what's the expected outcome. So I wholeheartedly support that idea. But yet, I am in the camp that I then will use YAML keys instead of providing that exact string because I do look at i18n as a helpful abstraction, and I want to trust that i18n is doing its job. And so that way, I don't have to provide that string that's there because then we're also choosing, okay, well, which language are we going to always use for our test? So this is the part where I feel divided. So I'm going to walk you through some of the reasons that I really support this idea and other reasons that I still use the i18n keys and then get your take on it. So there is a part of me that when I'm using the i18n YAML keys, it does make me sad because it reduces the readability in tests. Sometimes the keys are really well named where maybe it's a mailer.welcomemessage. And I'm like, okay, I understand the gist. I don't need to go see the actual string. I also think they highlighted a really good use case where if you're overriding behavior and it could default to something else, your test is still going to pass, and you don't actually know. So I could see the use case there where if you are overriding, then you want to be explicit about the string that you expect back. I also think there are some i18n messages that are fairly complex, and where then I really would like to see the string. So if you are formatting a date or a time or you're passing in just a lot of variables, then there's a chance that I do want to see how did that actually get generated for the person who's going to be reading it versus just maybe it's garbage text that came out? And I want to validate that the message that we think we're crafting is actually the one that the user is going to see. The case against actually being explicit, my biggest one is because then I do see i18n as a helpful abstraction. And I want to trust this abstraction that it's doing its job and it's doing it well. Because then if I do use explicit strings, it makes me sad if I change text from like hello to welcome, and now I have a failing test. I don't like that idea either. So I'm torn between these two worlds of it is very nice to have everything that you need in a test to be able to understand what is the expectation, but then I also lean into this abstraction and reference the i18n keys. So, Chris, with all of that, that was a bit of a whirlwind, [laughs] what are your thoughts? How do you test this stuff? CHRIS: Honestly, I'm surprised that you've got that much division in your own answer because for me, this is very obvious there's one...no, I'm kidding. This is obviously complicated. 
Similar to you, I think I'm going to have to give a grab bag of answers because I don't have a singular thought of like it is concretely this or that. I tend to go for explicit strings and tests all the way to...so like the readability of a test, and the conciseness of a test is interesting. I will often see developers extract. Say they're creating a user with a specific email, and then they log in with that email later, and then they expect something else. And so the email is referenced a few times, and they'll extract that into a variable called email. And I personally will tend to not do that. I will inline the literal string like user@example.com, and I'll do it in a few places. And I'm fine with that duplication because I like the readability of any given line that you're reading. So I will make that trade-off within tests. This is the thing I think we've talked about before, but the idea of DRY in tests is like I want to be careful applying that idea, Don't Repeat Yourself, to break apart the acronym. Those abstractions I will use them less than tests. And so I want the explicitness, I want the readability, I want to tell a little story, all of that feels true. That said, to flip it around, one of the things that I'm hearing...so I think I'm hearing a part of this that is around well, we can fail silently because we fail symmetrically in both the implementation and our test. Then an assertion may actually match even though it's matching on a fallback. I think that's a configurable thing. I would actually want my test to raise if I'm referencing an i18n key that is not defined. Now, granted, that's different for languages. And maybe this becomes a more complex story of like in production; in a different locale, it will fail because we don't have 100% parity across all our locale files. But fundamentally, I want to make sure that at least exists in our base, which I think typically would be en-US as the locale. I want to make sure all keys are looked up and found, and it's an error otherwise in our test. So that's a feeling. But am I misunderstanding that part of the story or how that configuration typically works? STEPH: No, I think you've got it. But just to make sure we're on the same page, so if you reference a key that doesn't exist, then it is going to fail. So at least you have your test failure is going to let you know that you've referenced something that doesn't exist. But if you are referencing, like if you want to override the defaults that Rails or i18n has provided for a model and say for an error message, if you reference that, but you want to override it, but then you've forgotten, that does exist. So you're not going to get the failure; you're going to get a different message. So it's probably not a terrible experience for the user. It's not going to crash. They're going to see something, but they're not going to see the custom message that you intended them to see. CHRIS: Gotcha. Okay, well, just to name it, the thing that I was describing, I don't know that that would be the configuration for every system. So I would strongly encourage any system where i18n just has a singular behavior which is we fall back to the key. I want my test to absolutely tell me if that's happening. And that should be a failure of the test. But to the discoverability documentation bit, I do wonder if tooling can actually help answer the question. 
And as I was describing the wonderful experience I had on GitHub the other day, viewing code as just static characters in a file is both true and also, I think increasingly, a limited view of it. We have editors, and we have code hosting tools that can understand semantically our code a little bit better. There's got to be like 20 Different VS Code plugins that, when you hover on an i18n reference, it will do the lookup for you. That feels like a thing that exists, and if it doesn't, well, now I've nerd-sniped myself, and I got a weekend project. JK, I'm definitely not building that this weekend. But that feels like can we use that to solve this? Maybe not. But that's just another thought of where we have these limitations where it's static, like those abstractions can be useful. But if we can very quickly dereference them, then the cost of the abstraction or that separation becomes smaller, and so the pain is reduced. And I wonder if that's a way to sort of offset it. STEPH: If I can poke at that a little bit more, because I think you're touching on something that I haven't expressed or thought through explicitly, but it's the idea of, like, why do I like the abstraction? What is it that's drawing me towards using these keys? And I think it's because most of the cases, I don't care. I don't care what the string is, and so that feels nice. Like, I understand that, yes, we're referencing something. If that key didn't exist, I'm going to see a failure. So I know that there's text there, and that's why I do lean into referencing the keys instead of the text because it feels good to not have to care about that stuff. And if we do make changes to the text, then it suddenly doesn't fail, and then I have to go update a test because we added a period or added a comma. I think that's the path of more sadness for me. And my goal is always a path of least sadness. So I think that's why I lean into it [laughs], I'm guessing. Is that why you lean into it as well? Or what do you like about referencing the keys over the explicit text? CHRIS: No, I think I share your inclination there, and the reason that you're in favor of it, and I think the consistency like if we're going to use i18n, then we should lean in because it's a non-trivial thing to do like porting to i18n projects, and they're tricky. Getting it right from the first step is also tricky. If you're going to do it, then let's lean in, and thus let's use that abstraction overall. But yeah, same ideas as you. STEPH: Cool. I think that helps validate where I'm at in terms of how I rationalize about this where ultimately, I do like leaning into that abstraction. And as you'd mentioned, some of those porting projects, I haven't been on one specifically, but I've seen that they are a lot of work. And so, if we have that in our system, then we want to continue to use it. It does reduce some of the readability. Like you said, maybe there's a VS Code plugin or some way that then we can help people be able to see if they want that full context in the test and not have to jump over to YAML. But yeah, otherwise, unless it's overriding default behavior or complex, then that's what I'm going to go with is with the keys. But I really appreciate this person's very thoughtful question and approach to testing because, normally or typically, I fully agree with I want full context in the test. And this one was one of those outliers that came up for me, and I had to really think through all the feelings and the reasons that I have for those feelings. 
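For reference, here is what the two assertion styles from this discussion can look like in a Rails app with RSpec; the mailer, key, and string are all invented for illustration:

```ruby
# config/locales/en.yml
#   en:
#     mailer:
#       welcome_message: "Welcome aboard!"

RSpec.describe WelcomeMailer do
  let(:mail) { WelcomeMailer.welcome.body.encoded }

  # Key-based: leans on the i18n abstraction. It survives copy tweaks,
  # but the reader has to dereference the key to see the rendered text.
  it "includes the welcome message (via the key)" do
    expect(mail).to include(I18n.t("mailer.welcome_message"))
  end

  # Literal string: everything is visible in the test itself, but any
  # copy change (hello to welcome) breaks it.
  it "includes the welcome message (explicitly)" do
    expect(mail).to include("Welcome aboard!")
  end
end
```

Rails also ships a raise_on_missing_translations setting that speaks to the silent-fallback concern for view and controller lookups, though exactly where that config lives has moved between Rails versions.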
On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Don't Repeat Yourself! In this episode of the SoftwareArchitekTOUR podcast, Eberhard Wolff, Carola Lilienthal, and Stefan Tilkov discuss the DRY principle.
Keep It Simple Stupid, or as I like to remember it, Keep It Super Simple, is the acronym of a principle well known in the worlds of computing and business that will help us improve our perspective on processes and projects, transforming the complex into the simple, even if that sounds like a difficult task. The same goes for the D.R.Y. principle, Don't Repeat Yourself, which will help you avoid duplicating your efforts. Applying these good practices to your podcast will keep the creation of each episode from becoming a burden. Listen to all the #prodcasting episodes here and find more interesting information: cintaenblanco.com
In today's podcast episode we are going to talk about the DRY software principle. In software development, DRY stands for Don't Repeat Yourself. Let's discuss what this means. Resources: https://thevaluable.dev/dry-principle-cost-benefit-example/ https://metova.com/dry-programming-practices/ Kick start your tech career with Amarachi Amaechi's new book Getting Started in Tech: A guide to building a tech career My web development courses ➡️ Learn How to build a JavaScript Tip Calculator ➡️ Learn JavaScript arrays ➡️ Learn PHP arrays ➡️ Learn Python ✉️ Get my weekly newsletter ⏰ My current live coding schedule (Times are BST) Thursdays 20:00 - Live Podcast on YouTube Sundays 14:30 - Live coding on Twitch
Topics discussed: What should you consult to learn DI? Book: Dependency Injection Principles, Practices, and Patterns; Laravel; Pimple; Book: Clean Architecture: A Craftsman's Guide to Software Structure and Design; Why did the GoF design patterns spread so widely? Because they became a gateway to understanding OOP; Misconceptions about programming by difference; Inheritance has plenty of downsides; Book: Introduction to Design Patterns with the Java Language (expanded and revised edition); The OMT method and UML; The Don't Repeat Yourself principle and programming by difference; Programming by difference can be achieved by means other than inheritance; is-a vs. has-a; Program to responsibilities and behavior, not structure; DI containers and the main function; What value do inheritance-based design patterns still have today? Go and Rust have no inheritance in the first place; Factory Method and Abstract Factory have mostly disappeared since DI containers; Template Method is still used today; Are Abstract Factory and DI containers similar? How are refactoring and design patterns related? "I prepared this in case something like this might happen"; YAGNI (You aren't gonna need it); Design patterns as refactoring targets; What happens when you have been programming to behavior and interfaces, but the interface itself changes? The test pyramid; Having only unit tests is not a great situation; Design Patterns, re-examined 15 years later; Null Object; Nullish coalescing; Book: Test-Driven Development; Book: Refactoring, 2nd Edition: Improving the Design of Existing Code. Episode sponsor: Yumemi Inc.
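One claim in that topic list, that programming by difference does not require inheritance, is easy to see in a short sketch. The TypeScript below contrasts a Template Method-style base class with the same variation injected as a dependency; all names are invented:

```ts
// Inheritance-based "programming by difference": a subclass overrides one step.
abstract class ReportBase {
  print(): void {
    console.log(this.header());
    console.log("...body...");
  }
  protected abstract header(): string; // the "difference"
}

class CsvReport extends ReportBase {
  protected header(): string {
    return "id,name,total";
  }
}

// The same difference expressed with composition: inject the varying step.
class Report {
  constructor(private readonly header: () => string) {}
  print(): void {
    console.log(this.header());
    console.log("...body...");
  }
}

new CsvReport().print();
new Report(() => "id,name,total").print(); // no inheritance needed
```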
Robert Blumen is a DevOps Engineer at Salesforce, joined by Ev Haus, Head of Technology at ZenHub. Together, they're going over a critique of several methodologies for writing code as part of a large team. First, there's DRY, which stands for Don't Repeat Yourself. It's the idea that one should avoid copy-pasting or duplicating lines of code, in favor of abstracting as much repeated functionality as possible. Then, there's DAMP, or Don't Abstract Methods Prematurely, which is somewhat in opposition to DRY. It advises teams to not create abstractions unless they are absolutely necessary. Last on the list is WET, or Write Everything Twice. This is the idea to embrace duplication whenever possible. Ev notes that, like many programming absolutes, the success of each strategy depends entirely on the context. DRY, for example, sounds like a really good idea, until it happens everywhere. Suddenly, a chunk of code becomes difficult to reason about, as a developer jumps around various method definitions to piece together a flow. DAMP often makes sense as a counterpart to DRY, because if you abstract too early in your codebase, you may find yourself overloading methods or appending arguments to handle one-off cases. DAMP is also typically best suited for testing environments, where an absolutely reproducible set of explicit steps is often preferable in order to quickly understand what is occurring. No matter the strategy you use, the core tenet is to solve the problem first. Try to accomplish the goal you need to, whether that's adding a feature or squashing a bug. Don't over-optimize until you've finished what you need to, and don't think too far into the future about all the possible edge cases. The rest of the balance comes with experience. Some duplication is bad, but not all of it. Figuring out the absolute perfect solution is unlikely, so you've got to put the code out into the real world to find out what works. After that, bake some flexibility into your processes to adjust hot code paths or refactor them when needed! Links from this episode ZenHub is an agile project management tool for GitHub Wikipedia's definition of DRY "Using DRY, WET & DAMP code" is Ev's article on different coding methodologies Codewars is a website with programming puzzles and challenges The Pragmatic Programmer by Dave Thomas is a popular book highlighting some of these concepts DRY code, DAMP DSLs by Jay Fields and DRY vs DAMP in Unit Tests by Vladimir Khorikov are more write-ups on the subject
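As a concrete illustration of the failure mode Ev describes, here is an invented TypeScript example of a helper kept DRY at all costs, next to its WET/DAMP alternative:

```ts
// DRY taken too far: one "shared" helper accretes flags for every
// one-off case until nobody can follow the flow at the call site.
function formatName(
  first: string,
  last: string,
  opts: { initialOnly?: boolean; lastFirst?: boolean } = {}
): string {
  const f = opts.initialOnly ? `${first[0]}.` : first;
  return opts.lastFirst ? `${last}, ${f}` : `${f} ${last}`;
}

// The WET/DAMP alternative: two small, explicit functions. Some
// duplication, but each one reads straight through.
function fullName(first: string, last: string): string {
  return `${first} ${last}`;
}

function rosterName(first: string, last: string): string {
  return `${last}, ${first[0]}.`;
}

console.log(formatName("Grace", "Hopper", { lastFirst: true, initialOnly: true })); // "Hopper, G."
console.log(rosterName("Grace", "Hopper")); // "Hopper, G."
```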
The TASE Architects go script-less once again in this iteration of the TASE Show. We discuss current events, dream technical vacation, COVID-19 updates and challenges, and many other unscripted topics. Tune in for our raw reactions! Video(s) of the Week: Don't Repeat Yourself and Data Exposed. Produced by Mr. Wentworth. For more information on TASE Labs, please see https://www.taselabs.net/ For more information on Solutions4Networks, please see http://www.s4nets.com/ For legal information, see https://www.taselabs.net/legalterms Campfire by Scandinavianz https://soundcloud.com/scandinavianz Creative Commons — Attribution 3.0 Unported — CC BY 3.0 Free Download / Stream: https://bit.ly/_campfire Music promoted by Audio Library https://youtu.be/9Rfykh-YzCc
I love studying leadership...and I love branding. In my 20 years of running a business, I’ve learned a ton of lessons on both topics. Now, with 2 successful companies and 42 employees, leadership is one of the most important things I do. It’s also one of the hardest. Today’s IMPACT Show episode stems from David Isaacson, who asks, “What are your top leadership tips that high-performance companies, teams, and organizations do to build iconic cultures and brands?” WOW. A great question that will allow me to share some extremely valuable leadership lessons with you. First, I am going to share what leadership is NOT. Second, I will share 10 things great leaders must do to build an iconic company. Your stage of business, walk of life, and desire for growth and leadership will determine just how many of these lessons/tips you use now. Regardless, I truly do believe that all of us must be servant leaders. Whatever your role or title at work, everyone has the ability and opportunity to LEAD. So apply these 10 lessons, be a servant leader, and go out there today and positively IMPACT mankind! Thanks for listening in. As always, I want to hear from you. Snap a picture of the cover of today’s episode, tag me, and share it on your social media. Or share your favorite 2-3 lessons out of the 10 that BEST apply to you today. Get after it today and be a GREAT LEADER. Please tag me @ToddDurkin. TIMESTAMPS: 1:04- Q&A: “What Are Your Top Leadership Tips That High-Performance Teams And Companies Do That Build Iconic Cultures And Brands?” 3:44- First Things First… What Leadership Is NOT. 5:28- #1: Uncomfortable And Difficult Conversations 6:20- #2: Preparing And Running Effective Meetings 8:38- #3: A Never-Ending Team-Building Mode 10:07- #4: Repeat Yourself 11:14- #5: Love Your People 13:09- #6: Be Driven By Vision 15:09- #7: Who Are You During Tough Times? 16:54- #8: Grow Other Leaders 19:28- #9: Making Hard Decisions 22:01- #10: High Expectations 25:00- Are You Willing To Do The Work Of Real Leaders? THANK YOU, David Isaacson (TWITTER: @DavidIsaacsonMS) for sending in your question! We LOVE answering your questions on the IMPACT Show & would love to highlight you also. To ask a question to Todd for the podcast, please go to www.todddurkin.com/podcast and ask a question on anything on your mind (business, leadership, training, high-performance, or mindset). If you enjoyed today’s episode of the Todd Durkin IMPACT Show, can you please do a few things: Give us a 5-star rating on iTunes. Write a great review of what you love about the show. Please SHARE the episode on your Social Media or in your email/newsletter. Thank you in advance! --- Follow Todd... --> Instagram & Twitter at @ToddDurkin --> YouTube: https://www.youtube.com/user/ToddDurkinFQ10 --> FB: @ToddDurkinFQ10 --> If you are a fitness pro or trainer and want to be coached by Todd, get information on his Mastermind at https://todddurkinmastermind.com/ ABOUT: Todd Durkin is one of the leading coaches, trainers, and motivators in the world. It’s no secret why some of the top athletes in the world have trained with him for over a decade. He’s a best-selling author, a motivational speaker, and owns the legendary Fitness Quest 10 in San Diego, CA where he leads an amazing team of 42 teammates. Todd is a coach on the Netflix show “STRONG” that is must-watch TV.
He is a previous Jack LaLanne Award winner, a 2-time Trainer of the Year, and he runs his Todd Durkin Mastermind group of top trainers and fitness pros around the globe, coaching them with business, leadership, marketing, training, and personal growth mentorship. Todd and his wife Melanie head up the Durkin IMPACT Foundation (501-c-3) that has raised over $250,000 since it started in 2013. 100% of all proceeds go back to kids and families in need. --> To learn more about Todd, visit www.ToddDurkin.com and www.FitnessQuest10.com. --> To join his community of fire-breathing dragons and receive regular motivational and inspirational emails, visit www.ToddDurkin.com and opt-in to receive his value-rich content. --- Connect with Todd online in the following places: --> You can listen to Todd’s podcast, The IMPACT Show by going to www.todddurkin.com/podcast. --> Itunes: https://podcasts.apple.com/us/podcast/todd-durkin-impact-show/id1467269399
THE TOP 4 WAYS WE FAIL AT FIGHTING 1.) Yell! Want to Win an Argument? Science Says Stop Doing This 1 Thing. https://www.inc.com/wanda-thibodeaux/want-to-win-an-argument-science-says-stop-doing-this-1-thing.html 2.) Repeat Yourself (+ ignore boundaries) "Doing the same thing over and over and expecting a different result is the definition of crazy." -Einstein Is Your Relationship Dysfunctional? https://www.psychologytoday.com/us/blog/rediscovering-love/201401/is-your-relationship-dysfunctional 3.) Name Calling How Name-Calling Damages Your Relationship https://goodmenproject.com/sex-relationships/how-name-calling-damages-your-relationship-cmtt/ 4.) Interrupt https://workplacepsychology.net/2017/11/05/conversation-killers-interrupting-monopolizing-minimizing-discounting-opposing-arguing-not-paying-attention/
Sponsors Netlify Sentry use the code “devchat” for 2 months free on Sentry’s small plan Triplebyte offers a $1000 signing bonus CacheFly Panel Nader Dabit Justin Bennett Lucas Reis Dave Ceddia Charles Max Wood Joined by Special Guests: Thomas Aylott, Conlin Durbin Episode Summary Conlin Durbin is a front end software engineer for a company called Lessonly and occasionally writes about React. Thomas Aylott is a web guy from the 90’s who was briefly on the React team, and he makes thingsthatdostuff.com and groovytiesquad.com. The panel discusses Conlin’s article Linked Lists in the Wild: React Hooks. They begin by talking about the relationship between linked lists and React hooks. Linked lists are used under the hood to render hooks every time that they’re created and maintain integrity of the hook chain. They discuss the importance of knowing what goes on under the hood and share their methods of learning. They give tips for learning on the job. The panel agrees that one of the best ways to learn is to teach. Conlin shares his experience working for Lessonly, a company that builds lesson-building software. The panel discusses WET (Write Everything Twice) vs DRY (Don’t Repeat Yourself) programming. They talk about when it is beneficial to have abstractions in code and when it is not. It’s also important to think about the humans that are going to be using it, and to write the code so that it’s humane. They praise good error messages that tell you exactly where you went wrong and how to fix it. They talk about the dangers of putting invariants everywhere, and finish by talking about ways to improve. Links Linked list React Fiber Hooks Backbone JavaScript Redux Gatsby Flow Jake Archibald: In The Loop-JS Conf Asia 2018 (video) What the heck is the event loop anyway? (video) Practical OO Design in Ruby, Sandi Metz Stop trying to be so DRY, instead Write Everything Twice (WET) Sebastian Markbage: Minimal API Surface Area – Learning patterns instead of frameworks Someone Is Changing Your Code Conlin Durbin’s username in most places is ‘wuz’; on Twitter it’s @CallMeWuz Follow DevChat on Facebook and Twitter Picks Justin Bennett: The 3 most effective ways to build trust as a leader article Phoenix LiveView Lucas Reis: Pamela Zave Small Functions Considered Harmful article Dave Ceddia: New Redux course Kinesis Advantage 2 Keyboard Charles Max Wood: MicroConf BuzzSprout Thomas Aylott: Noflojs.org The Laws of Human Nature by Robert Greene Conlin Durbin: https://dev.to/ Soft Skills Engineering Conlin’s Discord server
2-Minute Tip: Practice To be successful, there is no substitute for preparation. Practice. Rehearse. Prepare. And then practice some more. When you see speakers who make it look easy -- who effortlessly string together words and phrases and jokes and more -- it's usually not off the cuff. It's because they've practiced. Put in the time and know your stuff cold so that you can come up with it seamlessly when it's time to speak. That's one of the beautiful things about speaking. It's not magic. It's just work. On a related note, check out the documentary Comedian on Netflix. It's about the work and process that Jerry Seinfeld went through to develop a new set. It looks effortless when you see the final set, but it can take a year to get there -- even when you've been doing your trade for decades. Post Tip Discussion: Brandin' buildin' and Boomin' with Joe Apfelbaum Joe Apfelbaum is a high-energy force of nature, and that really comes through in this episode. Joe's professional focus is on B2B or Business to Business marketing, which is something many folks don't think about. He has built a large following and expertise on LinkedIn that you should definitely check out there. Joe also just launched a new course dedicated to Social Selling. You can find it here. Bio Joe Apfelbaum is the CEO of Ajax Union, a B2B digital marketing agency based in Brooklyn, NY. Joe is a business strategist, marketing expert and certified Google trainer. Joe enjoys speaking and writing about marketing, business networking and personal development in his seminars, webinars and articles. Joe is the host of the popular podcast The Breakthrough Maze. Joe is the Author of 3 books including High Energy Secrets his most recent book about how he lost 95 pounds and does flying selfies. He is the producer of GrowTime.tv and has published over 500 Mojovational Street Talk videos on YouTube. Joe is a contributing member of Entrepreneurs Organization in Brooklyn, a group with over 12,000 CEOs and an active member of Executives Association, a premiere business networking organization in NYC. Joe is a selfie master; he takes 1000 selfies a year with entrepreneurs and makes hundreds of introductions to business professionals in his network. Joe is proud of all his accomplishments, but most of all he is proud of his purpose, his beautiful amazing kids. Lessons In this episode, one of the things I'd like you to listen for is Joe's pattern of speech. He uses a lot of repetition and parallel structure to make his points and it just sounds natural and powerful. I talked about these techniques back in Episodes 10 and 35. It takes practice to do this effectively, and the best way to get better at that is to do it. Another lesson besides preparing and rehearsing is that when it comes to speaking you just have to get out and do it. And do it again. And do it some more.
Links Joe Apfelbaum on LinkedIn https://www.linkedin.com/in/joeapfelbaum/ Social Sellin' System (Joe's LinkedIn Course) https://learn.socialsellin.com/ Ajax Union https://www.ajaxunion.com/ Joe's Website http://www.joeapfelbaum.com/ Joe on Twitter https://twitter.com/joeapfelbaum Joe on Facebook https://www.facebook.com/joeapfelbaum Joe on Instagram https://www.instagram.com/joeapfelbaum/ Grow Time TV on YouTube https://www.youtube.com/channel/UC8_nEearUG2JyW-lZbyjAuA Joe's Street Talk Videos https://www.youtube.com/watch?v=t1nXhsbrHRc&list=PL_GUx-xCvZptR2MJKV-O8XCfIBPZFrBUl Joe's Book: High Energy Secrets https://www.amazon.com/High-Energy-Secrets-pounds-higher/dp/1948363003/ref=sr_1_1?keywords=high+energy+secrets&qid=1551570163&s=gateway&sr=8-1 CEO Mojo Podcast http://www.joeapfelbaum.com/category/podcast/ The Breakthrough Maze Podcast http://www.joeapfelbaum.com/category/the-breakthrough-maze/ On Writing by Stephen King https://www.amazon.com/Writing-10th-Anniversary-Memoir-Craft/dp/1439156816/ref=sr_1_1 Comedian with Jerry Seinfeld https://en.wikipedia.org/wiki/Comedian_(film) 2MTT: Episode 010 — Parallel Structure and Tim Garber (Part 1) http://2minutetalktips.com/2017/02/28/episode-010-parallel-structure-and-tim-garber-part-1/ 2MTT: Episode 039 — Skip the Gimmicks and Repeat Yourself http://2minutetalktips.com/2017/12/05/episode-039-skip-the-gimmicks-and-repeat-yourself/ 2MTT: Episode 035 — Let the Audience React and Ancient Rhetoric Today http://2minutetalktips.com/2017/11/07/episode-035-let-the-audience-react-and-ancient-rhetoric-today/ Preview http://2minutetalktips.com/JoePreview Call to Action So connect with Joe via LinkedIn. Are you looking for stuff to share on your social media channels? I created a preview of this episode. If you found today's chat interesting or valuable, you can post the preview in your LinkedIn or Facebook channel by sharing the link: http://2minutetalktips.com/JoePreview. Don't get best…get better.
Aprende ingles con inglespodcast de La Mansión del Inglés-Learn English Free
In this episode we'll be speaking about avoiding repetition. Not saying 'thank you' all the time or repeating expressions like 'How are you?' Más podcasts para mejorar tu ingles en: http://www.inglespodcast.com/ More podcasts to improve your English at: http://www.inglespodcast.com/ Listener Feedback: Francisco from Granada Voice message - good news! Francisco Espínola from Granada passed his FCE exam! Thanks for your comments, Francisco. Wonderful pronunciation and not one mistake! iTunes review thank-yous go to everyone who has taken the time to write a short review for us. It's because of you that we are one of the best podcasts for learning English in iTunes - the most visible. chuspo from Spain Merak.kain from Mexico rrg01 from Mexico Sirihus from Spain ("It's the best podcast I've ever heard and you are a perfect couple, doing that everything flows so perfect and easy") Mcorrea2004 from Spain Alvaroscali from Spain Comment on the website from Rafael: Hello Reza and Craig, very interesting this episode speaking about drugs - Episode 118 http://www.inglespodcast.com/2016/08/28/drugs-and-addiction-airc118/ I really liked everything you discussed. As always, you did it very well. You talk about addictions where chemical substances are taken, but you have overlooked a very powerful addiction: "ludopatía", gambling addiction (whether cards, lottery, or slot machines - fruit machines, one-armed bandits). Many people get hooked without taking any drug. It's curious how the brain's chemistry creates its own substances so that people end up thoroughly hooked; there are people who have lost everything without taking absolutely any chemical substance. I also remembered a John Lennon song called "Cold Turkey"; now I know what it meant, "el mono" (withdrawal). Regards, Rafael. to gamble - apostar, jugar If you're struggling to understand this podcast: Our download store: http://store.mansioningles.net/ Voice message from Elisa from Finland - She hates dependent prepositions! Time flies and the show must go on. Hi, this is Javier from Tolosa. One question, please. Episode 119 - http://www.inglespodcast.com/2016/09/04/getting-dressed-and-undressed-airc119/ What do you wear for work (usually – as a habit) You always say that after preposition goes -ing, then I do not understand "...for work", why it is not "... for working" or "What do your wear TO work" Thanks for helping me. A hug. Javier González Tolosa (Gipuzkoa) PREPOSITION + ___ing VERB But also PREPOSITION + NOUN/PRONOUN eg. What do you wear for/to work. CORRECT. “For” or “to” are prepositions and “work” is a noun. “Work” can be a noun or a verb. Voice message from David Martinez, Alcoy. FCE September. FCE practice: flo-joe.com: http://www.flo-joe.com/fce/students/index.htm Exam English: http://www.examenglish.com/FCE/fce_listening.html Cambridge English TV: https://www.youtube.com/user/cambridgeenglishtv Mansion Ingles 60 hour FCE course: http://www.mansioningles.com/cd_first.htm Level test on the website at mansioningles.com http://www.mansioningles.com/First_cert.htm How Not to Repeat Yourself in English Saying 'Can you repeat that, please?' Alternatives: Sorry? Sorry, I didn't get/catch that. Sorry, what was that (you said)? I'm afraid I don't follow (you) (formal) Come again? (informal) saying 'hello' and 'How are you?' Alternatives: Alright? What's up? How's it going? How are you doing? How are things? 'bout you! (Belfast greeting - 'How about you?) Ey up!
(Greeting in the North of England) Whatcha! (What you) Saying 'Thank You' Alternatives: Thanks Cheers! Much appreciated I owe you one Many thanks Thanks a bunch Saying 'That's very, very good' Alternatives: That's amazing, fantastic, unbelievable, wonderful, awesome, out of this world! Saying 'That's very, very bad' Alternatives: That's terrible, awful, horrible, disgusting Saying 'I'm sorry' Alternatives: I'm really/very/extremely/so sorry I apologise I can't apologise enough Please forgive me It won't happen again! ...and now it's your turn to practise your English. Do you have a question for us or an idea for a future episode? Send us a voice message and tell us what you think. https://www.speakpipe.com/inglespodcast Thank you for all the voice messages you sent during the summer. Please keep sending them. It takes 3 or 4 minutes and we love receiving them. Send us an email with a comment or question to craig@inglespodcast.com or belfastreza@gmail.com. If you would like more detailed shownotes, go to https://www.patreon.com/inglespodcast We need $100 Our lovely sponsors are: Lara Arlem Carlos Garrido Zara Heath Picazo Mamen Juan Leyva Galera Sara Jarabo Corey Fineran from Ivy Envy Podcast Manuel García Betegón Jorge Jiménez Raul Lopez Rafael Daniel Contreras Aladro Manuel Tarazona On next week's episode: Phrasal Verbs with TAKE and GET (request from Ivan Ballester) And now, as promised, let's hear from Mónica Stocker from El Blog Para Aprender Inglés Overcome the INTERMEDIATE barrier and become an ADVANCED speaker. The FITA course, by Mónica Stocker, is a complete English course specially designed for Spanish speakers at intermediate level who want to reach advanced. Sign up now for the FREE 4-day course and get an audiobook as a gift! http://intermediatetoadvanced.com/pages/4-days-free-english-course The music in this podcast is by Pitx. The track is called 'See You Later' Más podcasts para mejorar tu ingles en: http://www.inglespodcast.com/ More podcasts to improve your English at: http://www.inglespodcast.com/
Aprende ingles con inglespodcast de La Mansión del Inglés-Learn English Free
The difference between ALL and EVERYTHING | FIX, MANAGE, MAKE IT and FIGURE OUT - AIRC123 In this episode we speak about the difference between ALL and EVERYTHING | FIX, MANAGE, MAKE IT and FIGURE OUT and your feedback and questions that you sent us during the summer. Más podcasts para mejorar tu ingles en: http://www.inglespodcast.com/ More podcasts to improve your English at: http://www.inglespodcast.com/ We received a Voice message from Hellen Jimenez from Costa Rica. As Hellen said, you can find a free grammar reference at http://www.mansioningles.com/. There is also grammar in our free courses and you can download the grammar pdf from the store: http://store.mansioningles.net/ it costs 1.99 euros. Listener Feedback: Ivan from Cuba Hi guys I'm Ivan and I'm Cuban that’s why my situation here with the internet is kind of complicated but I will always find a way to get your episodes. I wanted to say that you guys are great and I believe truly in what you do. I'd like to ask you about the use of ALL and EVERYTHING. That's all, thank you. ALL and EVERYTHING = 100% of something or of a group ALL All + uncountable/plural countable nouns eg. He ate all the food. (uncountable noun) / These students are all my friends. (plural countable noun) Pronoun + all eg. Craig and I love you all. / We all love holidays. / It all seemed a bit strange, from start to finish. / They all came to see us. / We love you all / We love all of our listeners. All of + object form of pronoun (Compare with Pronoun + all) eg. Craig and I love all of you. We all love holidays / All of us love holidays. It all seemed a bit strange / All of it seemed a bit strange. They all came to see us. / All of them came to see us. All = all of + determiner (the, this, those, my, etc.) “All of” is more common in American Eng. eg. Craig’s eaten all (of) the chocolate. The listeners had heard all (of) my jokes before. BUT COMPARE: Not all podcasts are popular. (Talking about podcasts in general. No “the”; no “of”) Not all (of) the podcasts are popular. (Talking about specific podcasts. eg. Aprender inglés con Reza y Craig podcasts.) All's well with me at the moment. All that matters is that YOU improve your English. (the only thing that matters.....) All (that) I ever wanted was for Berta to love me. All he wants now is to get a divorce. 'All' often goes with 'that'. We say Is everything finished? ~ Yes, everything is finished. (Not XIs all finishedX) EVERYTHING Everything = All + relative clause eg. Reza gave Berta everything, but she still wasn’t satisfied. = Reza gave Berta all (that) he had, but she still wasn’t satisfied. The bad businessman lost everything. = The bad businessman lost all (that) he owned. EVERYTHING is usually used as a pronoun: Everything is OK. / I did some work, but I didn't finish everything. Everything substitutes 'other things', for example, "I had to reply to emails, make some images, record a podcast, phone my co-worker, post on Facebook.......but I didn't have enough time and I didn't do everything. All = Everything/Everybody - dramatic/ poetic/ old-fashioned English eg. I saw you with your new boyfriend last night. Tell me all/everything! Newspaper headline: “Ship sinks. All are dead. No survivors.” All = nothing more/the only thing(s) eg. All (that) I ever wanted was for Berta to love me. All we did was a friendly kiss on the cheek - nothing more. I promise! Hi Craig! I am Karla from Costa Rica... I just wanted to thank you for this excellent tool that allows me to practice and improve my English.
I am going to start a new job having interaction with people from different countries in Europe, so I was concerned about accents and slang words. As any language, I think it is about learning through daily interaction, right? Any advice? Thanks again! Speak to people (Italki, language exchanges) Listen to podcasts and watch TV series in English (Netflix, YouTube) Mamen - Biescas, Huesca Hi guys Thank you so much for keeping working on your podcast so hard during the summer We all appreciate your big effort! This podcast had been so useful 'cause you get (give) me the opportunity to learn and improve every day I wonder if you could help me with some issues that I always have. Please, could you explain the difference between: fix, manage, figure out, make it? I've heard these verbs in so many situations and it's a bit confusing. Thank you so much Hope you could manage or what ever with the hot summer. BIG KISS FIX - a problem/something broken/a time (mend, repair) - arreglar, reparar: “I took my broken watch to the watchmaker to have it fixed.” “This company is losing money and we’d better fix it soon before it’s too late!” “I need to fix our ceiling fan." Fix (attach) 'I'll fix this piece of paper to the wall.” Fix a price - 'We've fixed the price of our First Certificate course download at 17 euros.' (http://store.mansioningles.net/downloads/first-certificate-course/) Fix a time: “We have to fix a time tomorrow for our meeting.” Fix food (make/prepare food) “Can I fix you a sandwich?” / "Say, can I fix you a drink." “Fix your eyes on this.” “The game/election/boxing match was fixed.” (fix=arreglar) MANAGE = direct/be able to (organize) - dirigir, manejar, gestionar: “Henry manages a small family business.” “In the UK, my sister managed a small team of 4 office clerks.” manage (control): “How do public school teachers manage a class of 30 or 40 kids?” manage (get by, survive) - arreglarse: “I don't know how single parents can manage if they're both looking after children.” manage (succeed) - conseguir, lograr: “Can you manage to get there by one o’clock?” / “It's difficult to release a podcast episode every single week, but we manage.” FIGURE OUT - a puzzle/a solution figure out (solve) - resolver, solucionar: “Today’s crossword is too hard to figure out.” / “It's difficult for me to figure out maths problems.” ('work out' is more British English) “They lost their home to the bank and had to figure out what to do next.” figure out (understand) - comprender - 'I finally figured out why my ceiling fan wouldn't stop.' 'I couldn't figure it out.' / 'I couldn't work it out.' MAKE IT = attend/come/arrive/get to the end/survive make it (succeed): llegar a lo más alto, triunfar: “When you win an award for your podcast, you know you've finally made it!” make it (make sure that it is) - asegurar que: "Bring me a cup of tea and make it snappy!" - 'Make it quick.' Make it (arrive on time): “I’m having a party at my house tomorrow. I hope you can make it?” / “I thought I was going to miss the beginning of the film, but I made it.” “We got lost on our way to Peter’s house. We made it as far as the park.” “Listen to me, your Captain, men! This is going to be a hard battle. Not all of you will make it.” (survive) Voice message from Ana from Mexico - the audio isn't clear, but if Ana took the time to record it, we want to play it. "Thank you for our time and the effort to make the podcast, sharing our experience and knowledge. Ana has the feeling that she knows us!
...and now it's your turn to practise your English. Do you have a question for us or an idea for a future episode? Send us a voice message and tell us what you think. https://www.speakpipe.com/inglespodcast Send us an email with a comment or question to craig@inglespodcast.com or belfastreza@gmail.com. Thank you to Carlosgarridot@gmail.com, who is our latest Patron. "I am trying now to get the Cambridge First Certificate, so I was looking for some audios in the internet in order to train my listening skills when I found your podcasts by chance. I´d like to tell you that not only are your podcasts really useful to improve my listening and also grammar skills, but they are also very funny, I have a good time with them. (I really enjoy them) Actually, I usually go running twice or three times in a week and I do that listening to your episodes. Sometimes you guys make me laugh and people who look at me running and laughing. They probably think that I am absolutely crazy. If you would like more detailed shownotes, go to https://www.patreon.com/inglespodcast We need $100 Our lovely sponsors are: Lara Arlem Carlos Garrido Zara Heath Picazo Mamen Juan Leyva Galera Sara Jarabo Corey Fineran from Ivy Envy Podcast Manuel García Betegón Jorge Jiménez Raul Lopez Rafael Daniel Contreras Aladro Manuel Tarazona On next week's episode: How Not to Repeat Yourself in English Más podcasts para mejorar tu ingles en: http://www.inglespodcast.com/ More podcasts to improve your English at: http://www.inglespodcast.com/ The music in this podcast is by Pitx. The track is called 'See You Later'
In part two of our interview with Dave Thomas, we dive into some of his other contributions to the community, including coining the phrase "DRY" (Don't Repeat Yourself), popularizing the code kata, and signing the Manifesto for Agile Software Development. We explore the impact of these contributions, particularly on the CodeNewbie community.

Show Links
- Digital Ocean (sponsor)
- MongoDB (sponsor)
- Heroku (sponsor)
- TwilioQuest (sponsor)
- Code Kata
- DRY
- SOLID
- Manifesto for Agile Software Development
- Agile Is Dead
- Codeland Conf
- Codeland 2019
Dave Thomas has done a lot for the programming community. He coined the phrase "DRY" (Don't Repeat Yourself), popularized the idea of code katas, was one of the signers of the Manifesto for Agile Software Development, and founded the Pragmatic Bookshelf publishing company. But despite all that, he refers to himself as simply a programmer. In this episode, he shares his own coding journey, gives advice to new developers on how to navigate the mountain of information available, and explains how he ended up becoming a publisher.

Show Links
- Digital Ocean (sponsor)
- MongoDB (sponsor)
- Heroku (sponsor)
- TwilioQuest (sponsor)
- Lone Star Ruby
- Elixir
- Tacit knowledge
- Design patterns
- C2.com
- Airbrake
- Avdi Grimm
- CodeNewbie Austin
- Elm
- Functional programming
- Statically Typed vs. Dynamically Typed Languages
- Codeland Conf
- Codeland 2019
In the future, CSS visualizations will dramatically change. How they will change is debatable, but they will enable developers to do a lot more than they may think. We may see custom properties (variables) to further improve DRY (Don't Repeat Yourself) code and allow on-the-fly cascading calculations, and CSS Extensions to create our own custom selector properties, custom functions, and custom selector combinations. Some of these rules are even starting to be implemented in browsers today, like "will-change" to pre-optimize changes in DOM structures, and CSS Shapes, which will further help us define the display, flow, and wrapping of content and its containers. CSS is moving rapidly, and this is just the tip of what is to come for web development in the coming years, or even months in some cases.

It used to be that to create powerful visualizations in a browser you needed Flash or some other non-standard tool to get the performance and consistency you needed for complicated animations. Today we have help in bridging the gaps between today and tomorrow. CSS preprocessors have given us powerful features in our CSS code; some of the more notable ones are loops, conditionals, variables, custom mixins/functions, and heavy-grade math calculations. While these are extremely useful, they currently only help us before we ever see the CSS in the browser. Online tools like Codepen.io help us quickly build and view CSS, HTML, and JavaScript that can be easily shared and updated without the overhead of understanding setup, build processes, or dependency management. Although extremely powerful, this means the tools we have only allow CSS to react to changes in the DOM in a limited capacity.

Looking at today's standard CSS, we now have ways of doing some dynamic calculations and conditions in the browser and on device viewers. Directives like @supports and @media give us powerful conditionals. We have several types of units of measurement, such as viewport units, frequency units, time units, and resolution units. Functions like calc() give us the ability to computationally react to mutations in the DOM tree. Keyframe directives give us robust animation, and the transform property yields great power to set up and animate DOM structures, dynamically changing rotation, skewing, scaling, and translation in both 2D and 3D space, all without needing one line of JavaScript.
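To ground these ideas, here is a minimal CSS sketch, not from the episode itself (the class names and values are our own), combining several of the features mentioned above in the syntax they eventually shipped with: a custom property as a single source of truth, calc() for derived values, @supports and @media as conditionals, and a transform-based keyframe animation with a will-change hint.

```css
/* A single source of truth: change --gutter once and every
   calc() that derives from it updates (DRY). */
:root {
  --gutter: 16px;
}

.card {
  padding: var(--gutter);
  margin-bottom: calc(var(--gutter) / 2); /* on-the-fly calculation */
}

/* Conditional: this block only applies where flexbox is supported. */
@supports (display: flex) {
  .layout {
    display: flex;
    gap: var(--gutter);
  }
}

/* Responsive conditional: widen the gutter on larger viewports;
   every var()/calc() downstream reacts automatically. */
@media (min-width: 48em) {
  :root {
    --gutter: 24px;
  }
}

/* Keyframes + transform: rotation and scaling in 3D space,
   without one line of JavaScript. */
@keyframes flip {
  from { transform: rotateY(0deg) scale(1); }
  to   { transform: rotateY(180deg) scale(1.1); }
}

.badge {
  will-change: transform; /* hint so the browser can pre-optimize */
  animation: flip 2s ease-in-out infinite alternate;
}
```

Note the contrast with preprocessors: a Sass variable is compiled away before the browser ever sees it, whereas a custom property lives in the cascade, which is why the @media block above can retune every derived value at runtime.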
Resources
- http://davidwalsh.name
- http://codepen.io/thebabydino/live/08634ee35593c97bd8cfb2ddd9324c24
- http://davidwalsh.name/css-supports
- http://www.w3.org/TR/css3-values/
- https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Using_CSS_transforms
- http://css-tricks.com/five-questions-with-david-walsh/
- http://codepen.io/collection/wHune/
- http://codepen.io/thebabydino/pen/jgtof
- http://codepen.io/thebabydino/
- http://techblog.rga.com/math-driven-css/
- http://davidwalsh.name/css-flip
- http://css-tricks.com/a-couple-of-use-cases-for-calc/
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math
- https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Using_CSS_animations
- http://stackoverflow.com/users/1397351/ana
- http://davidwalsh.name/svg-animation
- http://stackoverflow.com/help/badges/17/necromancer?userid=1878132
- http://sass-lang.com/
- http://www.myth.io/
- http://dev.w3.org/csswg/css-extensions/
- http://sarasoueidan.com/
- http://shoptalkshow.com/episodes/129-sara-soueidan/
- http://5by5.tv/webahead/81
- http://www.sitepoint.com/css-variables-can-preprocessors-cant/
- http://codepen.io/shankarcabus/pen/jcyDA
- http://daneden.github.io/animate.css/
- http://codepen.io/thebabydino/tag/calc()/
- http://figures.spec.whatwg.org/