Podcasts about railsconf

  • 68 podcasts
  • 419 episodes
  • 42m avg. duration
  • 1 new episode monthly
  • Latest: Jun 24, 2025

Popularity trend (2017-2024)



Latest podcast episodes about railsconf

The Bike Shed
466: All about keynotes with Aji Slater

Jun 24, 2025 · 43:36


As the final RailsConf draws near, Joël and Aji Slater sit down to discuss its varied and interesting history of keynote presentations. The pair reminisce on their previous trips and talks at RailsConf, share some tips on creating the perfect keynote, and discuss the strong community that's rallied behind RailsConf for so many years and how best to connect with others at similar conferences as an audience member. — Don't miss out on the final RailsConf (https://railsconf.org/), which takes place July 8th - July 10th in Philadelphia, PA! Get ready for it by checking out Aji's recommended keynotes from previous years: 2022 (https://www.youtube.com/watch?v=DzyGdOd_6-Y) and 2017 (https://www.youtube.com/watch?v=V4fnzHxHXMI). Thanks to our sponsors for this episode: Judoscale - Autoscale the Right Way (https://judoscale.com/bikeshed) (check the link for your free gift!), and Scout Monitoring (https://www.scoutapm.com/). You can connect with Aji via LinkedIn and GitHub (https://github.com/DoodlingDev), or check out some of the topics he's written about over on his thoughtbot blog (https://thoughtbot.com/blog/authors/aji-slater). Your host for this episode has been Joël Quenneville (https://www.linkedin.com/in/joel-quenneville-96b18b58/). If you would like to support the show, head over to our GitHub page (https://github.com/sponsors/thoughtbot), or check out our website (https://bikeshed.thoughtbot.com). Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm. This has been a thoughtbot (https://thoughtbot.com/) podcast. Stay up to date by following us on social media - YouTube (https://www.youtube.com/@thoughtbot/streams) - LinkedIn (https://www.linkedin.com/company/150727/) - Mastodon (https://thoughtbot.social/@thoughtbot) - BlueSky (https://bsky.app/profile/thoughtbot.com) © 2025 thoughtbot, inc.

Remote Ruby
Unpacking Direct Routes and More

Jun 20, 2025 · 46:34


In this episode of Remote Ruby, Chris and Andrew discuss the recent Google Cloud Platform and Heroku outages, sharing personal experiences of system impacts and recovery strategies. The conversation shifts to technical insights, including a deep dive into Rails 'direct' routes and their routing helper capabilities. They also touch on the latest performance enhancements in Ruby 3.3, such as Embedded TypedData Objects and their impacts. Also, they explore parsing Ruby code with Prism and chat about productivity hacks, upcoming RailsConf plans, parenting chaos, and dreams of launching their own MTV show. Hit the download button now!

Links:
Judoscale - Remote Ruby listener gift
Implementing Embedded TypedData Objects (Rails at Scale)
Supercharging Ruby with Embedded TypedData Objects (Ruby Stack News)
Prism Ruby parser
RailsConf 2025, Philadelphia, PA, July 8-10

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter | Andrew Mason X/Twitter | Jason Charnes X/Twitter
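For readers who have not used the feature Chris and Andrew dig into, Rails ships a `direct` routing DSL (added in Rails 5.1) for defining custom URL helpers in config/routes.rb. A minimal sketch follows; the helper names and the comment/post models are illustrative, not examples taken from the episode.

  # config/routes.rb
  Rails.application.routes.draw do
    # `direct` defines a standalone URL helper (homepage_url) that is not
    # tied to a controller or resource.
    direct :homepage do
      "https://rubyonrails.org"
    end

    # Helpers can take arguments and return anything url_for understands,
    # e.g. a record plus extra URL options.
    direct :commentable do |comment|
      [comment.post, anchor: "comment_#{comment.id}"]
    end
  end

  # In a view or mailer: link_to "Home", homepage_url

Calling homepage_url anywhere helpers are available then returns the fixed URL without adding a matching route entry.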

Code and the Coding Coders who Code it
Episode 52 - Vladimir Dementyev

Jun 17, 2025 · 65:55 · Transcription Available


What happens when you put Rails in a browser? Vladimir Dementyev (Vova) is pushing WebAssembly to its limits by creating an interactive Rails playground that runs entirely client-side. This groundbreaking project aims to eliminate the frustrating installation barriers that often discourage newcomers from trying Ruby on Rails.

"I asked myself the question - can I run Rails on WASM? And that's when you feel yourself like a pilgrim software engineer, experiencing something for the first time that no one ever experienced," Vova shares. The project isn't just a technical curiosity but serves a vital educational purpose - allowing anyone to learn Rails through the official tutorial without wrestling with Ruby version managers or environment setup.

As principal engineer at Evil Martians, Vova balances multiple innovative projects simultaneously. Beyond Rails on WASM, he's organizing the first San Francisco Ruby Conference (coming November 2024), building a custom open-source CFP application, expanding AnyCable to support Laravel, and updating his technical book "Ruby on Rails Applications." His creative problem-solving approach extends to production environments too, where techniques developed for experimental projects help solve real client challenges like making libvips fork-safe for high-performance web servers.

Vova's philosophy on productivity is refreshingly practical: work when inspiration strikes rather than forcing creativity during arbitrary hours. "If I have no desire to sit at my desk and stare at the laptop, I'm not going to do that. I wait for the moment to come, and then I sit and work, and it's really efficient."

Ready to see what Ruby and Rails can do in previously impossible environments? Follow Vova's work, attend his RailsConf talk, or join the growing San Francisco Ruby community to witness how Ruby's flexibility continues to break new ground in unexpected ways.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Autoscaling that actually works. Take control of your cloud hosting.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Remote Ruby
The MMM Episode-Mario Kart, Meta Tags, and One Month Rails

Jun 9, 2025 · 37:16


In this fun episode of Remote Ruby, Chris and Andrew dive into a wide-ranging chat that kicks off with Nintendo's new Switch 2, dentist nightmares, and the chaos of Mario Party. From there, it transitions into some highly practical Rails development discussions, including a slick new implementation for MetaTags in Jumpstart, the headaches and benefits of JSON-LD for SEO, and an overdue breaking change to the Pay gem. There's also a smart e-ink countdown widget built with Ruby called TRMNL and updates on RailsConf and Rocky Mountain Ruby. Hit download now to hear more!

Links:
Schema.org Course
MetaTags
JSON-LD
RailsConf 2025 - Philadelphia, PA, July 8-10
Rocky Mountain Ruby 2025 - Boulder, CO, October 6-7
RailsWorld 2025 - Amsterdam, NL, September 4-5
TRMNL
TRMNL - GitHub
Pay Gem
Rails Assets
How to use the new action_text-trix gem (GoRails YouTube)
Honeybadger
Nintendo Switch 2

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter | Andrew Mason X/Twitter | Jason Charnes X/Twitter
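For a concrete picture of the JSON-LD structured data this episode covers, here is a small Rails-flavored sketch. The `json_ld_tag` helper name and the attributes shown are hypothetical illustrations, not the Jumpstart implementation discussed on the show.

  # app/helpers/seo_helper.rb (hypothetical helper, for illustration only)
  module SeoHelper
    # Renders a <script type="application/ld+json"> block containing
    # schema.org structured data that search engine crawlers can parse.
    def json_ld_tag(data)
      tag.script(data.to_json.html_safe, type: "application/ld+json")
    end
  end

  # In a view:
  # <%= json_ld_tag(
  #       "@context" => "https://schema.org",
  #       "@type"    => "Article",
  #       "headline" => @post.title,
  #       "author"   => { "@type" => "Person", "name" => @post.author_name }
  #     ) %>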

Code and the Coding Coders who Code it
Episode 51 - Chris Oliver

Jun 3, 2025 · 82:24 · Transcription Available


The last RailsConf ever is coming to Philadelphia this summer, and co-chair Chris Oliver joins us to pull back the curtain on what's sure to be a historic gathering for the Rails community.

Chris reveals how the programming committee curated an exceptional lineup from hundreds of submissions, balancing nostalgic looks at Rails' 18-year journey with cutting-edge technical content. You'll hear why Philadelphia's walkable layout, incredible food scene (Reading Terminal Market gets particular praise), and Fourth of July celebrations make it the perfect host city for this final RailsConf hurrah.

Beyond the sessions themselves, Chris and I explore what truly makes tech conferences special—those irreplaceable in-person connections. Whether you're a seasoned Rails veteran or relatively new to the framework, the hallway conversations, shared meals, and spontaneous problem-solving sessions offer exponentially more value than what appears on the official schedule. We both share how these gatherings have accelerated our careers and sparked lasting professional relationships.

The conversation takes an enlightening turn as Chris opens up about his current technical challenges, including the complexities of testing Hotwire applications and designing flexible API wrappers for payment processing systems. His insights on balancing specificity with adaptability when building reusable libraries offer valuable perspective for anyone writing code meant to be shared.

This episode serves both as an enthusiastic invitation to join the Ruby community in Philadelphia and a thoughtful exploration of why in-person events remain vital in our increasingly remote world. Supporting RailsConf isn't just about personal growth—it's about strengthening the Ruby ecosystem that has supported so many developers throughout their careers.

Ready to book your ticket for this historic event? Don't miss our podcast panel at RailsConf—come experience our conversations live and in person!

Links: RailsConf, GoRails, Learn Hotwire, excid3 on BlueSky

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Autoscaling that actually works. Take control of your cloud hosting.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Remote Ruby
Mac Upgrades to Debugging Dilemmas

Mar 21, 2025 · 47:41


In this episode, Andrew and Chris discuss Chris's new hardware upgrade to a Mac Studio, diving into its benefits for video processing and development work. They share stories about troubleshooting a perplexing bug related to WebSockets and Cable Ready, and discuss the conference proposal process, offering insights into writing effective CFPs based on their experiences with RailsConf and Rails World. Additionally, Andrew shares a game update about Cyberpunk and Chris shares the inspiring success story of the game 'Balatro,' highlighting the developer's journey from side project to commercial triumph. Hit the download button now!

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Judoscale: Make your deployments bulletproof with autoscaling that just works.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Chris Oliver X/Twitter | Andrew Mason X/Twitter | Jason Charnes X/Twitter

Code and the Coding Coders who Code it
Episode 46 - David Hill

Feb 25, 2025 · 40:56 · Transcription Available


David Hill, the innovative mind behind "Ode to RailsConf" and a senior engineer at Simplify, invites us to explore his fascinating journey into the world of podcasting. Inspired by the announcement of the final RailsConf, David crafted a platform to celebrate the cherished memories of the event while also providing himself with a bridge to manage social interactions more comfortably. With his love for board games providing a structured approach, David shares how the podcasting framework has transformed him from a hesitant introvert to a comfortable conversationalist.

Our conversation takes an intriguing turn as we delve into the art of podcast guest planning and the intricate process of editing conference videos. From featuring guests from the Scholar Guide program at RailsConf and RubyConf to orchestrating a unique episode with nine guests from a single company, we leave no stone unturned. Engaging discussions with prominent figures like Freedom Dumlao and Sarah May offer listeners a treasure trove of insights, while upcoming episodes with Ruby Central's Rhiannon and Ali Vogel promise to further explore the dynamic world of PR, marketing, and operations.

As we navigate the evolution of podcasting strategies, the conversation shifts to the often-overlooked balance between coding and communication. The journey from a simple chat between friends to a thriving podcasting community has not been without its challenges and surprises. We reflect on the impact of Jason Charnes' departure due to family commitments and celebrate the resilience and growth that comes with embracing new roles. Amidst it all, the spirit of supporting creators, learning new skills, and fostering personal growth shines through, with an optimistic outlook for the show's future.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

The Bike Shed
455: Noisy Animals Kata with Fritz Meissner

Feb 18, 2025 · 46:06


Joël talks with fellow thoughtboter Fritz Meissner about the thinking process behind his latest kata project and the vast world of coding problems. Fritz explains why he developed the noisy animals kata and how it helped him better understand and streamline his code, the best ways to break down conditionals and clean them up efficiently within your workflow, as well as knowing where the limits of improvement are in each project you work on. — Refine your conditional logic technique with a copy of 99 Bottles of OOP (https://sandimetz.com/99bottles) and then test your skills with Fritz's Noisy Animals Kata (https://github.com/thoughtbot/noisy-animals-kata). Compare notes with Joël (https://github.com/JoelQ/noisy-animals-kata) and Fritz (https://github.com/thoughtbot/noisy-animals-kata/blob/fm-refactored-v3/noisy_animal.rb) to see how you stack up once you're done! Listen to Joël's RailsConf talk The Math Every Programmer Needs (https://www.youtube.com/watch?v=wzYYT40T8G8) or check out some previous episodes for a refresher on some of the logic and math topics discussed in this show - Ep 398 (https://bikeshed.thoughtbot.com/398) - Ep 353 (https://bikeshed.thoughtbot.com/353) - Ep 418 (https://bikeshed.thoughtbot.com/418) - Ep 428 (https://bikeshed.thoughtbot.com/428). If you'd like to contact Fritz about his kata or anything else programming-related, he can be found via LinkedIn (https://www.linkedin.com/in/fritz-meissner-057a4a6/). Your host for this episode has been thoughtbot's own Joël Quenneville (https://www.linkedin.com/in/joel-quenneville-96b18b58/). If you would like to support the show, head over to our GitHub page (https://github.com/sponsors/thoughtbot), or check out our website (https://bikeshed.thoughtbot.com). Got a question or comment about the show? Why not write to our hosts: hosts@bikeshed.fm. This has been a thoughtbot (https://thoughtbot.com/) podcast. Stay up to date by following us on social media - YouTube (https://www.youtube.com/@thoughtbot/streams) - LinkedIn (https://www.linkedin.com/company/150727/) - Mastodon (https://thoughtbot.social/@thoughtbot) - Instagram (https://www.instagram.com/thoughtbot/) © 2025 thoughtbot, inc.
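The kata is built around untangling conditionals. As a generic flavor of that kind of refactoring (a sketch only, not the kata's actual animals or Fritz's solution), compare a branchy method with a polymorphic rewrite:

  # Before: every new animal means editing this case statement.
  def sound_for(animal)
    case animal
    when :dog then "woof"
    when :cat then "meow"
    else "..."
    end
  end

  # After: each animal owns its own sound, so adding a new one
  # no longer touches existing code.
  class Dog
    def sound
      "woof"
    end
  end

  class Cat
    def sound
      "meow"
    end
  end

  puts [Dog.new, Cat.new].map(&:sound).inspect # => ["woof", "meow"]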

Code and the Coding Coders who Code it
Episode 45 - Stephen Margheim

Feb 4, 2025 · 38:46 · Transcription Available


Stephen Margheim, a celebrated figure in the Ruby and Rails community, returns to unravel the fascinating intricacies of his latest project—writing a parser for SQLite's SQL dialect in Ruby. He shares his enlightening journey of translating complex SQL syntax, which at first seemed a simple endeavor but soon unfolded into a realm of deep learning and unexpected challenges. Alongside this, Stephen collaborates with Aaron Francis on "High Leverage Rails," a video course designed to spotlight the synergy between Rails and SQLite, offering a treasure trove of insights into developing high-quality applications.

We dive into the nuanced world of SQL parsing, where Stephen candidly recounts the arduous process of porting SQLite's lexer and parser into Ruby. What began as a straightforward task quickly turned into a labyrinth of complex syntax and discrepancies that required astute attention and incremental progress. He reflects on the absence of a fully compatible SQLite parser in any language, emphasizing the significance of open parsers like Postgres in creating a robust ecosystem for tools and libraries.

Stephen's excitement is palpable as he discusses Quickdraw, a groundbreaking testing framework that revolutionizes testing in multi-core environments. This innovation, along with the anticipation for RailsConf 2025 in Philadelphia, paints a bright future for the Rails community. With rich discussions on parsing, testing, and upcoming Rails events, this episode promises to inspire and engage both seasoned developers and newcomers to the Ruby and Rails landscape. Join us for an episode filled with excitement, insight, and a glimpse into the future of Rails development.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Code and the Coding Coders who Code it
Episode 44 - Adam McCrea

Dec 17, 2024 · 35:42 · Transcription Available


What if you could scale your SaaS platforms effortlessly across diverse hosting services? Join us as we welcome Adam McCrea, the brilliant mind behind JudoScale, who takes us through his fascinating evolution from being a Rails developer to creating a cutting-edge autoscaling solution. Adam opens up about the technical challenges he faced while adapting JudoScale for platforms like Render, Fly, and Railway, and how Heroku's unique architecture initially shaped his approach. His journey is one of innovation driven by necessity, as JudoScale originated from a need to optimize costs more efficiently than existing solutions.

Our conversation doesn't shy away from complexity; in fact, it embraces it. Adam shares his experiences of grappling with AWS integration, navigating the intricate maze of ECS, EC2, Fargate, and IAM, all driven by customer demand. We explore the strategic shift from metered billing to flat-tiered pricing and the hurdles faced while setting up a staging environment on Render, ultimately reaffirming Heroku's smoother experience. This episode promises valuable insights into the strategic decisions and architectural reimaginations that keep JudoScale ahead of the game.

Adding a creative flair, we delve into the entertaining world of infomercial production, as Adam recounts his experience crafting a humorous Billy Mays-inspired ad for JudoScale. With the aid of AI tools like ChatGPT and Descript, Adam turned a fun concept into an engaging reality. As we wrap up, Adam shares his excitement for RailsConf in Philadelphia and the significance of fostering connections through digital networking. Whether you're a tech enthusiast or a developer seeking innovative scaling solutions, this episode is brimming with insightful takeaways and creative inspiration.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Code and the Coding Coders who Code it
Episode 43 - Stan Lo

Dec 3, 2024 · 32:45 · Transcription Available


What drives a seasoned developer from Taiwan to London, and how does one translate a passion for Ruby into groundbreaking projects? Hear from Stan Lo of Shopify's RubyDX team as he shares his captivating journey and his significant impact on the Ruby development landscape. From his essential work on the debug gem and IRB to his current efforts with the Sorbet type checker and Prism parser, Stan delves into the technical intricacies of using C++ for performance and memory management. Gain unique insights into the collaborative decision-making process at Shopify that guided his transition from the Ruby LSP to focusing on Sorbet's integration.

We also tackle the hurdles of moving Sorbet's parser to Prism and the challenges of maintaining comprehensive Ruby documentation. Discover the importance of community-driven contributions, and how small acts like fixing typos can have a profound impact on the Ruby ecosystem. Experience Stan's personal anecdotes, from climbing adventures to mastering calisthenics, and explore the innovative shift from VS Code to Cursor, amplifying his development experience through AI capabilities. As we gear up for future events like RailsConf and RubyKaigi, there's an air of excitement for community reunions and ongoing projects. Join us for a blend of technical discussion, personal stories, and a call to action for all Ruby enthusiasts.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

The Bike Shed
446: All about rewrites

Nov 12, 2024 · 35:11


When is it time for a rewrite? How do you justify it? If you're tasked with one, how do you approach it? In today's episode of The Bike Shed, we dive into the tough question of software rewrites, sharing firsthand experiences that reveal why these projects are often more complicated and risky than they first appear. We unpack critical factors that make or break a rewrite, from balancing developer satisfaction with business value to managing stakeholder expectations when costs and timelines stretch unexpectedly. You'll hear about real-world rewrite pitfalls like downtime and reintroducing bugs, as well as strategies for achieving similar improvements through incremental changes or refactoring instead. If you're a developer or team lead considering a rewrite, this conversation offers a pragmatic perspective that could save your team time, effort, and potential setbacks. Tune in to learn how to make the best call for your codebase and find out when a rewrite might actually be necessary! Key Points From This Episode: Accessible selectors versus test IDs: best practices in Capybara and React Testing Library. Balancing test coverage with pragmatism and risk tolerance with Good Enough Testing. Software rewrites and the tough questions around deciding when they're necessary. The importance of prioritizing business value over frustrations with the current codebase. Drawbacks of rewrites, such as downtime, data loss, and reintroducing past bugs. Risks of “grass is greener” thinking and using mocked data in demos. Unrealistic expectations of full feature parity and why an MVP approach is better. How incremental refactoring can achieve similar goals to a complete rewrite. The appeal and hubris of a “fresh start” and why it's much more complex than that. Balancing innovation with practicality: ways to introduce new elements without rewriting. An example that illustrates when a rewrite might actually be necessary. Reasons that early prototypes and test builds are the best candidates for rewrites. Links Mentioned in Today's Episode: Matt Brictson: ‘Simplify your Capybara selectors' (https://mattbrictson.com/blog/simplify-capybara-selectors) React Testing Library Guidelines (https://testing-library.com/docs/queries/about/#priority) Capybara Accessibility Selectors (https://github.com/citizensadvice/capybara_accessible_selectors) Good Enough Testing (https://goodenoughtesting.com/) ‘RailsConf 2023: The Math Every Programmer Needs by Joël Quenneville' (https://youtu.be/fMetBx77vKY) ‘Testing Your Edge Cases' (https://thoughtbot.com/blog/testing-your-edge-cases) 'Working Iteratively' (https://thoughtbot.com/blog/working-iteratively) 'Technical Considerations to Help Scale Your Product' (https://thoughtbot.com/blog/technical-considerations-when-scaling-your-application) Dan McKinley: ‘Choose Boring Technology' (https://mcfunley.com/choose-boring-technology) The Bike Shed (https://bikeshed.thoughtbot.com/) Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) Joël Quenneville on X (https://x.com/joelquen) Support The Bike Shed (https://github.com/sponsors/thoughtbot)

Remote Ruby
RubyGems & Ruby Central with Marty Haught

Nov 1, 2024 · 42:24


In this episode, Jason and Chris welcome back Marty Haught, a long-time leader in the Ruby community, to discuss his history and continued involvement with Ruby Central. Marty shares his journey from joining the Ruby Central board in 2012 to his recent role as interim open source lead. The conversation dives into the origins of RubyGems, the evolution of RailsConf and RubyConf, and the challenges of managing these vital aspects of the Ruby ecosystem. Marty also talks about his plans for sustaining RubyGems' future and the infamous "Marty dinner" tradition at conferences. Hit download now to hear more! Jason Charnes X/Twitter Chris Oliver X/Twitter Andrew Mason X/Twitter

Code and the Coding Coders who Code it
Episode 42 - Cody Norman

Oct 22, 2024 · 28:48 · Transcription Available


Cody Norman, an independent Ruby on Rails consultant and creator of SpotSquid, takes us on a fascinating journey through the intersection of technology and tattoo artistry. Discover how Cody transformed a traditionally paper-based industry into a tech-savvy environment, using customer feedback to tackle the unique challenges faced by tattoo artists and shop owners. With anecdotes from his tech conference experiences and insights into his consulting career, Cody's story is both relatable and inspiring for anyone looking to merge creativity with technology.

In this episode, you'll unlock the secrets to finding your niche and the delicate balance between diverse client projects and passion-driven endeavors. We explore Cody's path to becoming a potential expert in Action Mailbox and email solutions within the Rails community, as well as his strategies for creating impactful educational content. Cody's experiences offer valuable lessons on testing email functionality and the potential of establishing oneself as an authoritative figure in a specialized area.

As Cody shares his journey through various tech conferences like Rocky Mountain Ruby and RailsConf, listeners will be captivated by his engaging presentations and the excitement of future opportunities. We delve into the anticipation of attending events like MicroConf and RailsConf and the potential breakthroughs these gatherings can bring. Wrapping up with Cody's entrepreneurial aspirations, this episode promises insights and inspiration for developers eager to carve their own path in the dynamic world of software development.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Ruby on Rails Podcast
Episode 525: Catching Up With Ruby Central with Marty Haught

Oct 9, 2024 · 29:23


Ruby Central has been a foundational part of the Ruby community since 2003. They organize Ruby Conf and Rails Conf and maintain critical Ruby infrastructure like rubygems.org. With Ruby Conf Chicago just around the corner and new initiatives at Ruby Central, we thought it would be a good time to catch up with our friends at Ruby Central. Marty Haught joins the show to tell us more about Ruby Central's open source initiatives. Show Notes https://rubyconf.org/ https://rubycentral.org/news/

Code and the Coding Coders who Code it
Episode 40 - Jeremy Smith

Sep 24, 2024 · 38:52


What does it take to build a modern, distraction-free forum platform that fosters deep community engagement? Join us as we welcome back Jeremy Smith, a seasoned Rails developer and consultant, who shares his journey of creating Liminal, an innovative platform inspired by conversations at RailsConf. Jeremy's insights offer a unique look into his work under Hybrid Studio, his passion for Ruby and Rails projects, and his latest ventures, including organizing Blue Ridge Ruby and co-hosting the Indie Rails podcast. Don't miss out on his practical advice for developers and creators looking to build meaningful online communities.

Launching a new product is never easy, and Jeremy opens up about the challenges he faced with Liminal. From focusing on core features to attract users to overcoming common roadblocks like gaining traction and effective marketing, Jeremy shares valuable lessons learned through personal anecdotes. He discusses the importance of communication and storytelling in successful product development, reflecting on why his similar project, Fractional, didn't take off while Joe Mazzolotti's RailsDevs.com flourished. Jeremy's journey into building a fractional services platform highlights the critical role of targeting a niche audience and marketing effectively.

Finally, we delve into the future of video content creation with tools like Riverside. Jeremy highlights the efficiency of its AI tools for creating and editing video content, making quick weekly releases a breeze. This episode also explores the joy of building niche events like Blue Ridge and Ruby on Trails, where Jeremy hones his skills in promoting and engaging with the Rubyist community. Tune in for a wealth of practical advice, personal stories, and insightful discussions that will leave you inspired to take your own projects to the next level.

Send us some love.

Honeybadger: Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the show

Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

The Bike Shed
435: Cohesive Code with Jared Norman

Jul 30, 2024 · 28:45


How easy is it for a layperson to understand your systems? Jared Norman is a software consultant, speaker, and host of the Dead Code Podcast who specializes in building e-commerce applications in Ruby on Rails. This episode follows two recent talks at RailsConf and covers a theme that emerged from both of them: coupling and cohesion. Tuning in, you'll gain insights on how to create more cohesive components to allow for change and improve your understanding of value objects, systems, and more. You'll also hear about navigating the complexity of domain-driven design and learn how to gauge if your code is easy to understand through a simple rule of thumb. We discuss what it might look like to improve the cohesion of individual objects, identify your systems' seams to create simplicity, and the liminal space between inheritance and composition and the role of decorators in moving through it. Join us today to hear all this and more!

Key Points From This Episode: Introducing Jared Norman, recent speaker at RailsConf and Ruby on Rails specialist. Jared's interests outside of coding: cycling. Themes that emerged from Jared and Stephanie's talks: coupling and cohesion. A rule of thumb for achieving high cohesion. How value objects tie into the idea of cohesion. Creating more cohesive components in order to have code and systems that are easier to change. The relationships between objects in increasing cohesion and how complex nestings of objects can hinder this. Rearranging systems in order to find seams and create cohesion. Simplifying code in order to facilitate it working independently to support functionality. Improving systems by identifying opportunities for decoupling and other relationships. Inheritance, composition, and decorators and the liminal space between. The complexity of domain-driven design. A rule that indicates when a system is easy to understand.

Links Mentioned in Today's Episode:
Jared Norman
Jared Norman on X
The Most Useful Design Pattern
Dead Code
So Writing Tests Feel Painful. What Now?
Dungeons & Dragons & Rails by Joël Quenneville
Building Reusable Object-Oriented Systems: Composition
Debugging at the Boundaries
Working Effectively with Legacy Code
Growing Object-Oriented Software Guided by Tests
What's in a Name
The Bike Shed
Joel Quenneville on LinkedIn
Support The Bike Shed
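To ground the "value objects" idea from the conversation above, here is a minimal Ruby sketch. It is illustrative only and not code from either talk.

  # A value object: equality and ordering come from its attributes,
  # and instances are immutable.
  class Distance
    include Comparable

    attr_reader :meters

    def initialize(meters)
      @meters = meters
      freeze
    end

    def <=>(other)
      meters <=> other.meters
    end

    def +(other)
      Distance.new(meters + other.meters)
    end
  end

  Distance.new(5) == Distance.new(5)                    # => true (Comparable derives == from <=>)
  (Distance.new(3) + Distance.new(4)) > Distance.new(6) # => true

Because identity lives entirely in the attributes, two instances with the same data are interchangeable, which is part of what makes such objects cohesive and cheap to create, compare, and pass around.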

The Bike Shed
433: Riffing with Kasper Timm Hansen

Jul 16, 2024 · 37:20


Have you ever wondered how improvisation can revolutionize coding? In today's episode, Stephanie sits down with Kasper Timm Hansen to discuss his innovative “riffing” approach to code development. Kasper is a long-time Ruby developer and former member of the Rails core team. He focuses on Ruby and domain modeling, developing various Ruby gems, and providing consulting services in the developer space. He has become renowned for his approach of “riffing” to software development, particularly in the Ruby on Rails framework. In our conversation, we delve into his unique approach to coding, how it differs from traditional methods, and the benefits of improvisation to code development. Discover the “feeling” part of riffing, the steps to uncovering relationships between models, and why it is okay not to know how to do something. Explore how riffing enhances collaboration, improves communication with and between teams, identifies alternative code, why “clever code” does not make for good solutions, and much more! Tune in to learn how to take your coding skills to the next level and uncover the magic of riffing with Kasper Timm Hansen! Key Points From This Episode: Introduction to Kasper, his background in Ruby, and experience as a consultant. An overview of his RailsConf 2024 presentation on domain modeling. His motivation behind his presentation and the overall reception of the concept. Unpack the concept of “riffing” with code as a developer. Insights into his methodology and how it differs from traditional approaches. Examples of “riffing" and how it benefits the development process. How he determines the best code to implement during his process. Kasper shares how he frames problems and builds solutions. Ways riffing highlights gaps in skillsets early in the development process. Hear about the various ways riffing fosters and improves collaboration. Unpack how riffing can help developers communicate more effectively. Balancing the demands of code review with the riffing approach. Final takeaways for listeners and how to contact Kasper to begin riffing! Links Mentioned in Today's Episode: Kasper on Github (https://github.com/kaspth), Mastodon (https://ruby.social/@kaspth), LinkedIn (https://www.linkedin.com/in/kasper-timm-hansen-33b151314/), and X (https://twitter.com/kaspth) Riffing on Rails RailsConf talk (https://www.youtube.com/watch?v=vH-mNygyXs0) and slides (https://speakerdeck.com/kaspth/railsconf-2024-riffing-on-rails-sketch-your-way-to-better-designed-code) Riffing on Spotify's generated mixes (https://www.youtube.com/watch?v=i1MM2EOniPg) with Jeremy Smith Modeling a Kanban board with riffing (https://buttondown.email/kaspth/archive/how-to-approach-modelling-a-kanban-board-in-rails/) Some of Kasper's open source work: * ActiveRecord Associated Object (https://github.com/kaspth/active_record-associated_object) * ActiveJob Performs (https://github.com/kaspth/active_job-performs) * Oaken (https://github.com/kaspth/oaken)

Ruby on Rails Podcast
Episode 518: Live From The Rails Conf Hallway Track!

Jun 26, 2024 · 26:01


While at Rails Conf, I caught up with several attendees and organizers to discuss their experiences. These quick interviews have been collected for this episode.

Our Guests:
Trevor John - https://github.com/trevorrjohn
Joshua Paine - https://github.com/midnightmonster/activerecord-summarize
David Hill - https://github.com/Esmale
Alex Mitchell - @a_mitch on instagram
Patrick O'Grady - https://github.com/heyogrady
Ella Herlihy - https://github.com/ellaherlihy
Steven Ancheta - https://LinkedIn.com/in/stancheta, https://github.com/stancheta
Ollie Atkins - https://github.com/oatkins8

Sponsors
Honeybadger (https://www.honeybadger.io/)
As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!

The Bike Shed
430: Test Suite Pain & Anti-Patterns

Jun 25, 2024 · 40:57


Stephanie and Joël discuss the recent announcement of the call for proposals for RubyConf in November. Joël is working on his proposals and encouraging his colleagues at thoughtbot to participate, while Stephanie is excited about the conference being held in her hometown of Chicago! The conversation shifts to Stephanie's recent work, including completing a significant client project and her upcoming two-week refactoring assignment. She shares her enthusiasm for refactoring code to improve its structure and stability, even when it's not her own. Joël and Stephanie also discuss the everyday challenges of maintaining a test suite, such as slowness, flakiness, and excessive database requests. They discuss strategies to balance the test pyramid and adequately test critical paths. Finally, Joël emphasizes the importance of separating side effects from business logic to enhance testability and reduce complexity, and Stephanie highlights the need to address testing pain points and ensure tests add real value to the codebase. RubyConf CFP (https://sessionize.com/rubyconf-2024/) RubyConf CFP coaching (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform?pli=1) Testing pyramid (https://thoughtbot.com/blog/rails-test-types-and-the-testing-pyramid) Outside-in testing (https://thoughtbot.com/blog/testing-from-the-outsidein) Writing fewer system specs with request specs (https://thoughtbot.com/blog/faster-tests-with-capybara-and-request-specs) Unnecessary factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot) Your Test Suite is Making Too Many Database Calls (https://www.youtube.com/watch?v=LOlG4kqfwcg) Your flaky tests might be time dependent (https://thoughtbot.com/blog/your-flaky-tests-might-be-time-dependent) The Secret Ingredient: How To Understand and Resolve Just About Any Flaky Test (https://www.youtube.com/watch?v=De3-v54jrQo) Separating side effects to improve tests (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects) Functional core, imperative shell (https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell) Thoughtbot testing articles (https://thoughtbot.com/blog/tags/testing) Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Something that's new in my world is that RubyConf just announced their call for proposals for RubyConf in November. They're open for...we're currently recording in June, and it's open through early July, and they're asking people everywhere to submit talk ideas. I have a few of my own that I'm working with. And then, I'm also trying to mobilize a lot of other colleagues at thoughtbot to get excited to submit. STEPHANIE: Yes, I am personally very excited about this year's RubyConf in November because it's in Chicago, where I live, so I have very little of an excuse not to go [laughs]. I feel like so much of my conference experience is traveling to just kind of, like, other cities in the U.S. that I want to spend some time in and, you know, seeing all of my friends from...my long-distance friends. And it definitely does feel like just a bit of an immersive week, right? 
And so, I wonder how weird it will feel to be going to this conference and then going home at the end of the night. Yeah, that's just something that I'm a bit curious about. So, yeah, I mean, I am very excited. I hope everyone comes to Chicago. It's a great city. JOËL: I think the pitch that I'm hearing is submit a proposal to the RubyConf CFP to get a chance to get a free ticket to go to RubyConf, where you get to meet Bike Shed co-host Stephanie Minn. STEPHANIE: Yes. Ruby Central should hire me to market this conference [laughter] and that being the main value add of going [laughs], obviously. Jokes aside, I'm excited for you to be doing this initiative again because it was so successful for RailsConf kind of internally at thoughtbot. I think a lot of people submitted proposals for the first time with some of the programming you put on. Are you thinking about doing things any differently from last time, or any new thoughts about this conference cycle? JOËL: I think I'm iterating on what we did last time but trying to keep more or less the same formula. Among other things, people don't always have ideas immediately of what they want to speak about. And so, I have a brainstorming session where we're just going to get together and brainstorm a bunch of topics that are free for anyone to take. And then, either someone can grab one of those topics and pitch a talk on it, or it can be, like, inspiration where they see that it jogs their mind, and they have an idea that then they go off and write a proposal. And so, that allows, I think, a lot of colleagues as well, who are maybe not interested in speaking but might have a lot of great ideas, to participate and sort of really get a lot of that energy going. And then, from there, people who are excited to speak about something can go on to maybe draft a proposal. And then, I've got a couple of other events where we support people in drafting a proposal and reviewing and submitting, things like that. STEPHANIE: Yes, I really love how you're just involving people with, you know, just different skills and interests to be able to support each other, even if, you know, there's someone on our team who's, like, not interested in speaking at all, but they're, like, an ideas person, right? And they would love to see their idea come to life in a talk from someone else. Like, I think that's really cool, and I certainly appreciate it as a not ideas person [laughs]. JOËL: Also, I want to shout out that Ruby Central is doing CFP coaching sessions on June 24th, June 25th, and June 26th, and those are open to anyone. You can sign up. We'll put a link to the signup form in the show notes. If you've never submitted something before and you'd like some tips on what makes for a good CFP, how can you up your chances of getting accepted, or maybe you've submitted before, you just want to get better at it; I recommend joining one of those slots. So, Stephanie, what's new in your world? STEPHANIE: So, I just successfully delivered a big project on my client work last week. So, I'm kind of riding that wave and getting into the next bit of work that I have been assigned for this team, and I'm really excited to do this. But I also, I don't know, I've been just, like, thinking about it quite a bit. 
Basically, I'm getting to spend two dedicated weeks to just refactoring [laughs] some really, I guess, complicated code that has led to some bugs recently and just needing some love, especially because there's some whiffs of potentially, like, really investing in this area of the product, and people wanting to make sure that the foundation does feel very stable to build on top of for extending and changing that code. And I think I, like, surprised myself by how excited I was to do this because it's not even code I wrote. You know, sometimes when you are the one who wrote code, you're like, oh, like, I would love time to just go back and clean up all these things that I kind of missed the first time around or just couldn't attend to for whatever reason. But yeah, I think I was just a little bit in the peripheries of that code, and I was like, oh, like, just seeing some weird stuff. And now to kind of have the time to be like, oh, this is all I'm going to be doing for two weeks, to, like, really dive into it and get my hands dirty [laughs], I'm very excited. JOËL: I think that refactoring is a thing that can be really fun. And also, when you have a larger chunk of time, like two days, it's easy to sort of get lost in sort of grand visions or projects. How do you kind of balance the, I want to do a lot of refactoring; I want to take on some bigger things while maybe trying to keep some focus or have some prioritization? STEPHANIE: Yeah, that's a great question. I was actually the one who said, like, "I want two weeks on this." And it also helped that, like, there was already some thoughts about, like, where they wanted to go with this area of the codebase and maybe what future features they were thinking about. And there are also a few bugs that I am fixing kind of related to this domain. So, I think that is actually what I started with. And that was really helpful in just kind of orienting myself in, like, the higher impact areas and the places that the pain is felt and exploring there first to, like, get a sense of what is going on here. Because I think that information gathering is really important to be able to kind of start changing the code towards what it wants to be and what other devs want it to be. I actually also started a thread in Slack for my team. I was, like, asking for input on what's the most confusing or, like, hard to reason about files or areas in this particular domain or feature set and got a lot of really good engagement. I was pleasantly surprised [laughs], you know, because sometimes you, like, ask for feedback and just crickets. But I think, for me, it was very affirming that I was, like, exploring something that a lot of people are like, oh, we would love for someone to, you know, have just time to get into this. And they all were really excited for me, too. So, that was pretty cool. JOËL: Interesting. So, it sounds like you sort of budgeted some refactoring time and then, from there, broke it down into a series of a couple of debugging projects and then a couple of, like, more bounded refactoring projects, where, like, specifically, I want to restructure the way this object works or something like that. STEPHANIE: Yeah. I think there was that feeling of wanting to clean up this area of the codebase, but you kind of caught on to that bit of, you know, it can go so many different ways. And, like, how do you balance your grand visions [laughs] of things with, I guess, a little bit of pragmatism? 
So, it was very much like, here's all these bugs that are causing our customers problems that are kind of, like, hard for the devs to troubleshoot. You know, that kind of prompts the question, like, why? And so, if there can be, you know, the fixing of the bugs, and then the learning of, like, how that part of the system works, and then, hopefully, some improvements along the way, yeah, that just felt like a dream [laughs] for me. And two weeks felt about the right amount of time. I don't know if anyone kind of hears that and feels like it's too long or too little. I would be really curious. But I feel like it is complex enough that, like, context switching would, I think, make this work harder, and you kind of do have to just sit with it for a little bit to get your bearings. JOËL: A scenario that we encounter on a pretty regular basis is a customer coming to us and telling us that they're feeling a lot of test pain and asking what are the ways that we can help them to make things better and that test pain can come under a lot of forms. It might be a test suite that's really slow and that's hurting the team in terms of their velocity. It might be a test suite that is really flaky. It might be one that is really difficult to add to, or maybe one that has very low coverage, or one that is just really brittle. Anytime you try to make a change to it, a bunch of things break, and it takes forever to ship anything. So, there's a lot of different aspects of challenging test suites that clients come to us with. I'm curious, Stephanie, what are some of the ones that you've encountered most frequently? STEPHANIE: I definitely think that a slow test suite and a flaky test suite end up going hand in hand a lot, or just a brittle one, right? That is slowing down development and, like you said, causing a lot of pain. I think even if that's not something that a client is coming to us directly about, it maybe gets, like, surfaced a little bit, you know, sometime into the engagement as something that I like to keep an eye on as a consultant. And I actually think, yeah, that's one of kind of the coolest things, I think, about our consulting work is just getting to see so many different test suites [laughs]. I don't know. I'm a testing nerd, so I love that kind of stuff. And then, I think you were also kind of touching on this idea of, like, maintaining a test suite and, yeah, making testing just a better experience. I have a theory [laughs], and I'd be curious to get your thoughts on it. But one thing that I really struggle with in the industry is when people talk about writing tests as if it's, like, the morally superior thing to do. And I struggle with this because I don't think that it is a very good strategy for helping people feel better or more confident and, like, upskill at writing tests. I think it kind of shames people a little bit who maybe either just haven't gotten that experience or, you know, just like, yeah, like, for whatever reason, are still learning how to do this thing. And then, I think that mindset leads to bad tests [laughs] or tests that aren't really serving the purpose that you hope they would because people are doing it more out of obligation rather than because they truly, like, feel like it adds something to their work. Okay, I kind of just dropped that on you [laughs]. Do you have any reactions? JOËL: Yeah, I guess the idea that you're just checking a box with your test rather than writing code that adds value to the codebase. 
They're two very different perspectives that, in the end, will generate more lines of code if you're just doing a checkbox but may or may not add a whole lot of value. So, maybe before even looking at actual, like, test practices, it's worth stepping back and asking more of a mindset question: Why does your team test? What is the value that your team feels they get out of testing? STEPHANIE: Yeah. Yeah. I like that because I was about to say they go hand in hand. But I do think that maybe there is some, you know, question asking [laughs] to be done because I do think people like to kind of talk about the testing practices before they've really considered that. And I am, like, pretty certain from just kind of, at least what I've seen, and what I've heard, and what I've experienced on embedding into client teams, that if your team can't answer that question of, like, "What value does testing bring?" then they probably aren't following good testing practices [laughs]. Because I do think you kind of need to approach it from a perspective of like, okay, like, I want to do this because it helps me, and it helps my team, rather than, like you said, getting the check mark. JOËL: So, once we've sort of established maybe a bit of a mindset or we've had a conversation with the team around what value they think they're getting out of tests, or maybe even you might need to sell the team a little bit on like, "Hey, here's, like, all these different ways that testing can bring value into your life, make your life as developers easier," but once you've done that sort of pre-work and you can start looking at what's actually the problem with a test suite, a common complaint from developers is that the test suite is too slow. How do you like to approach a slow test suite? STEPHANIE: That's a good question. I actually...I think there's a lot of ways to answer that. But to kind of stay on the theme of stepping back a little bit, I wonder if assessing how well your test suite aligns with the testing pyramid would be a good place to start; at least, that could be where I start if I'm coming into a client team for the first time, right, and being asked to start assessing or just poking around. Because I think the slowness a lot of the time comes from a lot of quote, unquote, "integration tests" or, like, unit tests masquerading as integration tests, where you end up having, like, a lot of duplication of things that are being tested in ways that are integrating with some slow parts of the system like the database. And yeah, I think even before getting into some of the more discreet reasons why you might be writing slow tests, just looking at the structure of your test suite and what kinds of things you're testing, and, again, even going back to your team and asking, like, "What kinds of things do you test?" Or like, "Do you try to test or wish to be testing more of, less of?" Like looking at the structure, I have found to be a good place to start. JOËL: And for those who are not familiar, you used the term testing pyramid. This is a concept which says that you probably want to have a lot of small, fast unit tests, a medium amount of integration tests that test a few different components together, and then a few end-to-end tests. Because as you go up that pyramid, tests become more expensive. They take a lot longer to run, whereas the little unit tests are super cheap. You can create thousands of them, and they will barely impact your run time. Adding a dozen end-to-end tests is going to be noticeable. 
So, you want to balance sort of the coverage that you get from end to end with the sort of cheapness and ubiquity of the little unit tests, and then split the difference for tests that are in between. STEPHANIE: And I think that is challenging, even, you know, you're talking about how you want the peak of your pyramid to be end-to-end tests. So, you don't want a lot of them, but you do want some of them to really ensure that things are totally plumbed and working correctly. But that does require, I think, really looking at your application and kind of identifying what features are the most critical to it. And I think that doesn't get paid enough attention, at least from a lot of my client experiences. Like, sometimes teams just end up with a lot of feature bloat and can't say like, you know, they say, "Everything's important [chuckles]," but everything can't be equally important, you know? JOËL: Right. I often like to develop using a sort of outside-in approach, where you start by writing an end-to-end test that describes the behavior that your new feature ticket is asking for and use that to drive the work that I'm doing. And that might lead to some lower-level unit tests as I'm building out different components, but the sort of high-level behavior that we're adding is driven by adding an end-to-end spec. Do you feel that having one new end-to-end spec for every new feature ticket that you work on is a reasonable thing to do, or do you kind of pick and choose? Do you write some, but maybe start, like, coalescing or culling them, or something like that? How do you manage that idea that maybe you would or would not want one end-to-end spec for each feature ticket? STEPHANIE: Yeah, it's a good question. Actually, as you were saying that, I was about to ask you, do you delete some afterwards [laughs]? Because I think that might be what I do sometimes, especially if I'm testing, you know, edge cases or writing, like, the end-to-end test for error states. Sometimes, not all of them make it into my, like, final, you know, commit. But they, you know, had their value, right? And at least it prompted me to make sure I thought about them and make sure that they were good error states, right? Like things that had visible UI to the user about what was going on in case of an error. So, I would say I will go back and kind of coalesce some of them, but they at least give me a place to start. Does that match your experience? JOËL: Yeah, I tend to mostly write end-to-end tests for happy paths and then write kind of mid-level things to cover some of my edge cases, maybe a couple of end-to-end tests for particularly critical paths. But, at some point, there's just too many paths through the app that you can't have end-to-end coverage for every single branch on every single path that can happen. STEPHANIE: Yeah, I like that because if you find yourself having a lot of different conditions that you need to test in an end-to-end situation, maybe there's room for that to, like, be better encapsulated in that, like, more, like, middle layer or, I don't know, even starting to ask questions about, like, does this make sense with the product? Like, having all of these different things going on, does that line up with kind of the vision of what this feature is trying to be or should be? Because I do think the complexity can start at that high of a level. JOËL: How do you feel about the idea that adding more end-to-end tests, at some point, has diminishing returns? STEPHANIE: I'm not quite sure I'm following [laughs]. 
JOËL: So, let's say you have an end-to-end test for the happy path for every core feature of the app. And you decide, you know what, I want to add maybe some, like, side features in, or maybe I want to have more error states. And you start, like, filling in more end-to-end tests for those. Is it fair to say that adding some of those is a bit of a diminishing return? Like, you're not getting as much value as you would from the original specs. And maybe as you keep finding more and more rare edge cases, you get less and less value for your test. STEPHANIE: Oh, yeah, I see. And there's more of a cost, too, right? The cost of the time to run, maintain, whatever. JOËL: Right. Let's say they're roughly all equally expensive in terms of cost to run. But as you stray further and further off of that happy path, you're getting less and less value from that integration test or that end-to-end test. STEPHANIE: I'm actually a little conflicted about this because that sounds right in theory, but then in practice, I feel like I've seen error states not get enough love [laughs] that it's...I don't even want to say, like, you make any kind of claim [laughs] about it. But, you know, if you're going to start somewhere, if you have, like, a limited amount of time and you're like, okay, I'm only going to write a handful of end-to-end tests, yeah, like, write tests for your happy paths [laughs]. JOËL: I guess it's probably fair to say that error states just don't get as much love as they should throughout the entire testing stack: at the unit level, at the integration level, all the way up to end to end. STEPHANIE: I'm curious if you were trying to get at some kind of conclusion, though, with the idea of diminishing returns. JOËL: I guess I'm wondering if, from there, we can talk about maybe a breakdown of a particular testing pyramid for a particular test suite is being top heavy, and whether there's value in maybe pushing some of these tests, some of these edge cases, some of these maybe less important features down from that, like, top end-to-end layer into maybe more of an integration layer. So, in a Rails context, that might be moving system specs down to something like a request spec. STEPHANIE: Yeah, I think that is what I tend to do. I'm trying to think of how I get there, and I'm not quite sure that I can explain it quite yet. Yeah, I don't know. Do you think you can help me out here? Like, how do you know it's time to start writing more tests for your unhappy paths lower on the pyramid? JOËL: Ideally, I think a lot of your code should be unit-tested. And when you are unit testing it, those pieces all need coverage of the happy and unhappy paths. I think the way it may often happen naturally is if you're pushing logic out of your controllers because it's a little bit challenging sometimes to test Rails controllers. And so, if you're moving things into domain objects, even service objects, depending on how you implement them, just doing that and then making sure you unit test them can give you a lot more coverage of all the different edge cases that can happen. Where things sometimes fall apart is getting out of that business layer into the web layer and saying, "Hey, if something raises an error or if the save fails or something like that, does the user get a good experience, or do we just crash and give them a 500 page?" 
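As a rough sketch of the shape Joël describes here, pushing a business rule out of the controller into a plain object so both the happy and unhappy paths get cheap unit coverage, the example below is hypothetical: the DiscountCalculator class, its codes, and its rules are invented for illustration rather than taken from any codebase mentioned in the episode.

```ruby
# A plain Ruby object holding the business rule, with no Rails, database, or
# HTTP dependency, so both paths can be unit tested entirely in memory.
class DiscountCalculator
  class InvalidCode < StandardError; end

  KNOWN_CODES = { "SPRING10" => 0.10, "VIP25" => 0.25 }.freeze

  def initialize(code)
    @code = code
  end

  # Returns the discounted total, raising InvalidCode for unknown codes so the
  # web layer can decide how to present the failure to the user.
  def apply(total)
    rate = KNOWN_CODES.fetch(@code) { raise InvalidCode, "unknown code: #{@code}" }
    (total * (1 - rate)).round(2)
  end
end

# Run with the rspec CLI; no database or factories involved.
RSpec.describe DiscountCalculator do
  it "applies a known discount code" do
    expect(DiscountCalculator.new("SPRING10").apply(100.0)).to eq(90.0)
  end

  it "raises for an unknown code" do
    expect { DiscountCalculator.new("BOGUS").apply(100.0) }
      .to raise_error(DiscountCalculator::InvalidCode)
  end
end
```

The web layer then only has to cover how that raised error is turned into a redirect or an error page, which is what the request spec idea below gets at.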
STEPHANIE: Yeah, that matches with a lot of what I've seen, where if you then spend too much time in that business layer and only handling errors there, you don't really think too much about how it bubbles up. And, you know, then you are digging through, like, your error monitoring [laughs] service, trying to find out what happened so that you can tell, you know, your customer support team [laughs] to help them resolve, like, a bug report they got. But I actually think...and you were talking about outside in, but, in some ways, in my experience, I also get feedback from the bottom up sometimes that then ends up helping me adjust some of those integration or end-to-end tests about kind of what errors are possible, like, down in the depths of the code [laughs], and then finding ways to, you know, abstract that or, like, kind of be like, "Oh, like, here are all these possible, like, exceptions that might be raised." Like, what HTTP status code do I want to be returned to capture all of these things? And what do I want to say to the user? So, yeah, I'm [laughs] kind of a little lost myself, but this idea that going both, you know, outside in and then maybe even going back up a little bit has served me well before. JOËL: I think there can be a lot of value in sort of dropping down a level in the pyramid, and maybe instead of doing sort of end-to-end tests where you, like, trigger a scenario where something fails, you can just write a request spec against the controller and say, "Hey, if I go to this controller and something raises an error, expect that you get redirected to this other location." And that's really cheap to run compared to an end-to-end test. And so, I think that, for me, is often the right compromise is handling error states at sort of the next lowest level and also in slightly more atomic pieces. So, more like, if you hit this endpoint and things go wrong, here's how things happen. And I use endpoint not so much in an API sense, although it could be, but just your, you know, maybe you've got a flow that's multiple steps where, you know, you can do a bunch of things. But I might have a test just for one controller action to say, "Hey, if things go wrong, it redirects you here, or it shows you this error page." Whereas the end-to-end test might say, "Oh, you're going to go through the entire flow that hits multiple different controllers, and the happy path is this nice chain." But each of the exit points where things fail would be covered by a more scoped request spec on that controller. STEPHANIE: Yeah. Yeah. That makes sense. I like that. 
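A minimal sketch of that kind of scoped request spec follows; the route, the Order model, and the flash copy are hypothetical stand-ins, not the actual app under discussion.

```ruby
# spec/requests/orders_spec.rb
require "rails_helper"

RSpec.describe "Placing an order", type: :request do
  it "redirects to the cart with an error message when the save fails" do
    # Force the unhappy path directly instead of walking the whole multi-step
    # flow the way an end-to-end test would.
    allow_any_instance_of(Order).to receive(:save).and_return(false)

    post "/orders", params: { order: { quantity: 0 } }

    expect(response).to redirect_to("/cart")
    expect(flash[:alert]).to match(/could not be placed/i)
  end
end
```

Each exit point off the happy path can get one of these without paying the cost of a full browser-driven test.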
A couple of years ago, I gave a talk at RailsConf titled "Your Test Suite is Making Too Many Database Requests" that went over a bunch of different ways that you can be doing a lot of expensive database requests you didn't plan on making and how that slows down your test suite. So, that is also another hot spot that I like to look at when dealing with a slow test suite. STEPHANIE: Yeah, I mentioned earlier the idea of unit tests really masquerading as integration tests [laughs]. And I think that happens especially if you're starting with a class that may already be a little bit bigger than it should be or have more responsibilities than it should be. And then, you are, like, either just, like, starting with using the create build, like, strategy with factories, or you find yourself, like, not being able to fully run the code path you're trying to test without having stuff persisted. Those are all, I think, like, test smells that, you know, are signaling a little bit of a testing anti-pattern that, yeah, like, is there a way to write, like, true unit tests for this stuff where you are only using objects in memory? And does that require breaking out some responsibilities? That is a lot of what I am kind of going through right now, actually, with my little refactoring project [laughs] is backfilling some tests, finding that I have to create a lot of records. And you know what? Like, the first step will probably be to write those tests and commit them, and just have them live there for a little while while I figure out, you know, the right places to start breaking things up, and that's okay. But yeah, I did want to, like, just mention that if you are having to create a lot of records and then also noticing, like, your test is running kind of slow [laughs], that could be a good indicator to just give a good, hard look at what kind of style of test you think you're writing [laughs]. 
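For a concrete picture of the create-versus-build distinction being discussed, here is a minimal sketch assuming FactoryBot and a hypothetical User model with first_name and last_name columns.

```ruby
FactoryBot.define do
  factory :user do
    first_name { "Ada" }
    last_name  { "Lovelace" }
  end
end

RSpec.describe User do
  it "stays in memory when the test only reads attributes" do
    # build_stubbed never touches the database; it assigns attributes and a
    # fake id, and persisted? returns true, which is enough for plain logic.
    user = FactoryBot.build_stubbed(:user)

    expect(user.first_name).to eq("Ada")
  end

  it "pays for a database write only when persistence is what's being tested" do
    # create issues an INSERT (plus one per created association), so reserve
    # it for examples that actually query or persist through Active Record.
    user = FactoryBot.create(:user)

    expect(user).to be_persisted
  end
end
```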
That's, like, not how it should be, and we don't have to accept that [laughs]. So, I really like what you said about, oh, you can change it. And, you know, that is a bit of a callback to the whole mindset of testing that we mentioned earlier at the beginning. JOËL: Speaking of test suites, something we have not covered yet is parallelizing them. That could probably be its own Bike Shed episode entirely, on parallelizing a test suite. We've done entire engagements where our job was to come in and help parallelize a test suite, make it faster. And there's a lot of, like, pros and cons. So, I think maybe we can save that for a different episode. And, instead, I'd like to quickly jump in a little bit to some other common pain points of test suites, and I would say probably top of that list is test flakiness. How do you tend to approach flakiness in a client project? STEPHANIE: I am, like, laughing to myself a little bit because I know that I was dealing with test flakiness on my last client engagement, and that was, like, such a huge part of my day-to-day is, like, hitting that retry button. And now that I am on a project with, like, relatively low flakiness, I just haven't thought about it at all [laughs], which is such a privilege, I think [laughs]. But one of the first things to do is just start, like, capturing metrics around it. If you, you know, are hearing about flakiness or seeing that, like, start to plague your test suite or just, you know, cropping up in different ways, I have found it really useful to start, like, I don't know, just, like, maybe putting some of that information in a dashboard and seeing how, just to, like, even make sure that you are making improvements, that things are changing, and seeing if there's any, like, patterns around what's causing the flakiness because there are so many different causes of it. And I think it is pretty important to figure out, like, what kind of code you're writing or just trying to wrangle. That's, you know, maybe more likely to crop up as flakiness for your particular domain or application. Yeah, I'm going to stop there and see, like, because I know you have a lot of thoughts about flakiness [laughs]. JOËL: I mean, you mentioned that there's a lot of different causes for flakiness. And I think, in my experience, they often sort of group into, let's say, like, three different buckets. Anytime you're testing code that's doing things that are non-deterministic, that's easy for tests to be flaky. And so, you might think, oh, well, you know, you have something that makes a call to random, and then you're going to assert on a particular outcome of that. Well, clearly, that's going to not be the same every time, and that might be flaky. But there are, like, more subtle versions of that, so maybe you're relying on the system clock in some way. And, you know, depending on the time you run that test, it might give you a different value than you expect, and that might cause it to fail. And it doesn't have to be you're asserting on, like, oh, specifically a given millisecond. You might be doing math around, like, number of days, and when you get near to, let's say, the daylight savings boundary, all of a sudden, no, you're off by an hour, and your number of days...calculation breaks because relying on the clock is something that is inherently non-deterministic. Non-determinism is a bucket. 
Leaky tests is another bucket of failures that I see, things where one test might impact another that gets run after the fact, oftentimes by mutating some sort of global state. So, maybe you're both relying on some sort of, like, external file that you're both writing to or maybe a cache that one is writing to that the other one is reading from, something like that. It could even just be writing records into the database in a way that's not wrapped in a transaction, such that there's more data in the database when the next test runs than it expects. And then, finally, if you are doing any form of parallelization, that can improve your test suite speed, but it also potentially leads to race conditions, where if your resources aren't entirely isolated between parallel test runners, maybe you're sharing a database, maybe you're sharing Redis instance or whatever, then you can run into situations where you're both kind of fighting over the same resources or overriding each other's data, or things like that, in a way that can cause tests to fail intermittently. And I think having a framework like that of categorization can then help you think about potential solutions because debugging approaches and then solutions tend to be a little bit different for each of these buckets. STEPHANIE: Yeah, the buckets of different causes of flaky tests you were talking about, I think, also reminded me that, you know, some flakiness is caused by, like, your testing environment and your infrastructure. And other kinds of flakiness are maybe caused more from just the way that you've decided how your code should work, especially that, like, non-deterministic bucket. So, yeah, I don't know, that was just, like, something that I noticed as you were going through the different categories. And yeah, like, certainly, the solutions for approaching each kind are very different. JOËL: I would like to pitch a talk from RubyConf last year called "The Secret Ingredient: How To Resolve And Understand Just About Any Flaky Test" by Alan Ridlehoover. Just really excellent walkthrough of these different buckets and common debugging and solving approaches to each of them. And I think having that framework in mind is just a great way to approach different types of flaky tests. STEPHANIE: Yes, I'll plus one that talk, lots of great pictures of delicious croissants as well. JOËL: Very flaky pastry. STEPHANIE: [laughs] Joël, do you have any last testing anti-pattern guidances for our audience who might be feeling some test pain out there? JOËL: A quick list, I'm going to say tight coupling that has then led to having a lot of stubbing in your tests often leads to tests that are very brittle, so tests that maybe don't fail when they should when you've actually broken things, or maybe, alternatively, tests that are constantly failing for the wrong reasons. And so, that is a thing that you can fix by making your code less coupled. Tests that also require stubbing a lot of things because you do a lot of side effects. If you are making a lot of HTTP calls or things like that, that can both make a test more complex because it has to be aware of that. But also, it can make it more non-deterministic, more flaky, and it can just make it harder to change. And so, I have found that separating side effects from sort of business logic is often a great way to make your test suite much easier to work with. I have a blog post on that that I'll link in the show notes. 
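A small sketch of the separation Joël is pointing at, with the side effect pushed to the edge so the interesting logic needs no stubbing at all; the weather API, URL, and class names below are invented for illustration, not from any real project.

```ruby
require "net/http"
require "json"
require "uri"

# Functional core: pure logic, unit testable with plain values and no stubs.
module FeelsLike
  def self.description(temp_c:, wind_kph:)
    return "bitter" if temp_c < 0 && wind_kph > 20
    return "chilly" if temp_c < 10

    "mild"
  end
end

# Imperative shell: the HTTP side effect lives here, isolated in one thin
# wrapper. Only this class needs stubbing (or a tool like WebMock) in tests.
class WeatherReport
  def initialize(city)
    @city = city
  end

  def summary
    data = fetch
    FeelsLike.description(temp_c: data["temp_c"], wind_kph: data["wind_kph"])
  end

  private

  def fetch
    JSON.parse(Net::HTTP.get(URI("https://weather.example.com/#{@city}")))
  end
end
```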
And I think this maybe also approaches the idea of a functional core and an imperative shell, which I believe was an idea pitched by Gary Bernhardt, like, over ten years ago. There's a famous video on that that we'll also link in the show notes. But that architecture for building an app can lead to a much nicer test to write. I guess the general idea being that testing code that does side effects is complicated and painful. Testing code that is more functional tends to be much more pleasant. And so, by not intermingling the two, you tend to get nicer tests that are easier to maintain. STEPHANIE: That's really interesting. I've not heard that guidance before, but now I am intrigued. That reminded me of another thing that I had a conversation with someone about. Because after the RailsConf talk I gave, which was about testing pain, there was some stubbing involved in the examples that I was showing because I just see a lot of that stuff. And, you know, this audience member kind of had that question of, like, "How do you know that things are working correctly if you have to stub all this stuff out?" And, you know, sometimes you just have to for the time being [chuckles]. And I wanted to just kind of call back to that idea of having those end-to-end tests testing your critical paths to at least make sure that those things work together in the happy way. Because I have seen, especially with apps that have a lot of service objects, for some reason, those being kind of the highest-level test sometimes. But oftentimes, they end up not being composed well, being quite coupled with other service objects. So, you end up with a lot of stubbing of those in your test for them. And I think that's kind of where you can see things start to break down. JOËL: Yep. And when the RailsConf videos come out, I recommend seeing Stephanie's talk, some great gems in there for building a more maintainable test suite. Stephanie and I and, you know, most of us here at thoughtbot, we're testing nerds. We think about this a lot. We've also written a lot about this. There are a lot of resources in the show notes for this episode. Check them out. Also, just generally, check out the testing tag on the thoughtbot blog. There is a ton of content there that is worth looking into if you want to dig further into this topic. STEPHANIE: Yeah, and if you are wanting some, like, dedicated, customized testing training, thoughtbot offers an RSpec workshop that's tailored to your team. And if you kind of are interested in the things we're sharing, we can definitely bring that to your company as well. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. 
Or you can email us at: referrals@thoughtbot.com with any questions.

Remote Ruby
Auditing in Rails & Andrew's Cursed Idea

Remote Ruby

Play Episode Listen Later Jun 21, 2024 46:26


In this episode, Chris and Andrew dive into the intricacies of tracking changes in Rails models using gems like Paper Trail and Audited. They discuss challenges faced in bulk actions like 'update all' and 'destroy all' that don't trigger Active Record callbacks. The conversation explores potential solutions, including overriding methods and using wrappers to ensure changes are logged efficiently without significant performance hits. They also touch upon mentorship and the importance of learning fundamental Ruby skills to master Rails development. The discussion also extends to experiences at RailsConf, the impact of community interactions, and reflections on career growth through continuous learning and mentorship. Press download now to hear more! HoneybadgerHoneybadger is an application health monitoring tool built by developers for developers.Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
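As a quick illustration of the bulk-update gotcha the episode covers, here is a hedged sketch assuming a Rails app with a hypothetical Post model and author variable, audited via the paper_trail gem.

```ruby
# update_all issues a single UPDATE and skips Active Record callbacks and
# validations, so callback-based auditing (such as paper_trail versions)
# records nothing for these rows.
Post.where(author_id: author.id).update_all(published: false)

# One workaround: load and save each record so callbacks still run. Correct,
# but it trades the single UPDATE for one query per record.
Post.where(author_id: author.id).find_each do |post|
  post.update!(published: false)
end
```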

The Bike Shed
429: Transforming Experience Into Growth

The Bike Shed

Play Episode Listen Later Jun 18, 2024 43:38


Stephanie has a newfound interest in urban foraging for serviceberries in Chicago. Joël discusses how he uses AI tools like ChatGPT to generate creative Dungeons & Dragons character concepts and backstories, which sparks a broader conversation with Stephanie about AI's role in enhancing the creative process. Together, the hosts delve into professional growth and experience, specifically how to leverage everyday work to foster growth as a software developer. They discuss the importance of self-reflection, note-taking, and synthesizing information to enhance learning and professional development. Stephanie shares her strategies for capturing weekly learnings, while Joël talks about his experiences using tools like Obsidian's mind maps to process and synthesize new information. This leads to a broader conversation on the value of active learning and how structured reflection can turn routine work experiences into meaningful professional growth. Obsidian (https://obsidian.md/) Zettelkasten (https://en.wikipedia.org/wiki/Zettelkasten) Mindmaps in Mermaid.js (https://mermaid.js.org/syntax/mindmap.html) Module Docs episode (https://bikeshed.thoughtbot.com/417) Writing Quality Method docs blog post (https://thoughtbot.com/blog/writing-quality-method-docs) Notetaking for Developers episode (https://bikeshed.thoughtbot.com/357) Learning by Helping blog post (https://thoughtbot.com/blog/learning-by-helping) Transcript:  JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, as of today, while we record this, it's early June, and I have started foraging a little bit for what's called serviceberries, which is a type of tree/shrub that is native to North America. And I feel like it's just one of those, like, things that more people should know about because it makes these little, tiny, you know, delicious fruit that you can just pick off of the tree and have a little snack. And what's really cool about this tree is that, like I said, it's native, at least to where I'm from, and it's a pretty common, like, landscaping tree. So, it has, like, really pretty white flowers in the spring and really beautiful, like, orange kind of foliage in the fall. So, they're everywhere, like, you can, at least where I'm at in Chicago, I see them a lot just out on the sidewalks. And whenever I'm taking a walk, I can just, yeah, like, grab a little fruit and have a little snack on them. It's such a delight. They are a really cool tree. They're great for birds. Birds love to eat the berries, too. And yeah, a lot of people ask my partner, who's an arborist, like, if they're kind of thinking about doing something new with the landscaping at their house, they're like, "Oh, like, what are some things that I should plant?" And serviceberry is his recommendation. And now I'm sharing it with all of our Bike Shed listeners. If you've ever wondered about [laughs] a cool and environmentally beneficial tree [laughs] to add to your front yard, highly recommend, yeah, looking out for them, looking up what they look like, and maybe you also can enjoy some June foraging. JOËL: That's interesting because it sounds like you're foraging in an urban environment, which is typically not what I associate with the idea of foraging. 
STEPHANIE: Yeah, that's a great point because I live in a city. I don't know, I take what I can get [laughs]. And I forget that you can actually forage for real out in, you know, nature and where there's not raccoons and garbage [laughs]. But yeah, I think I should have prefaced by kind of sharing that this is a way if you do live in a city, to practice some urban foraging, but I'm sure that these trees are also out in the world, but yeah, have proved useful in an urban environment as well. JOËL: It's really fun that you don't have to, like, go out into the countryside to do this activity. It's a thing you can do in the environment that you live in. STEPHANIE: Yeah, that was one of the really cool things that I got into the past couple of years is seeing, even though I live in a city, there's little pieces of nature around me that I can engage with and picking fruit off of people's [inaudible 03:18] [laughs], like, not people's, but, like, parkway trees. Yeah, the serviceberry is also a pretty popular one here that's planted in the Chicago parks. So, yeah, it's just been like, I don't know, a little added delight to my days [laughs], especially, you know, just when you're least expecting it and you stumble upon it. It's very fun. JOËL: That is really fun. It's great to have a, I guess, a snack available wherever you go. STEPHANIE: Anyway, Joël, what is new in your world? JOËL: I've been intersecting two, I guess, hobbies of mine: D&D and AI. I've been playing a lot of one-shot games with friends, and that means that I need to constantly come up with new characters. And I've been exploring what AI can do to help me develop more interesting or compelling character concepts and backstories. And I've been pretty satisfied with the result. STEPHANIE: Cool. Yeah. I mean, if you're playing a lot and having to generate a lot of new ideas, it can be hard if you're, you know, just feeling a little empty [laughs] in terms of, you know, coming up with a whole character. And that reminds me of a conversation that you and I had in person, like, last month as we were talking about just how you've been, you know, experimenting with AI because you had used it to generate images for your RailsConf talk. And I think I connected it to the idea of, like, randomness [laughs] and how just injecting some of that can help spark some more, I think, creativity, or just help you think of things in a new way, especially if you're just, like, having a hard time coming up with stuff on your own. And even if you don't, like, take exactly what's kind of provided to you in a generative AI, it at least, I don't know, kind of presents you with something that you didn't see before, or yeah, it's just something to react to. JOËL: Yeah, it's a great tool for getting unstuck from that kind of writer's block or that, like, blank page feeling. And oftentimes, it'll give you a thing, and you're like, that's not really exactly what I wanted. But it sparks another idea, which is what I actually want. Or sometimes you can be like, "Hey, here's an idea I have. I'm not sure what direction to take it in. Give me a few options." And then, you see that, and you're like, "Oh, that's actually pretty interesting." One thing that I think is interesting is once I've come up with a little bit of the character concept, or maybe even, like, a backstory element...so, I'm using ChatGPT, and it has that concept of memory. And so, throughout the conversation, it keeps bringing it back. 
So, if I tell it, "Look, this is an element that's going to be core to the character," and then later on, I'm like, "Okay, help me brainstorm some potential character flaws for this character," it'll actually find things that connect back to my, like, core concept, or maybe an element of the backstory. And it'll give me like, you know, 5 or 10 different ideas, and some of them can be actually really good. So, I've really enjoyed doing that. It's not so much to just generate me a character so much as it is like a conversation back and forth of like, "Okay, help me come up with a vibe for it. Okay, now that I have a vibe or a backstory element or, like, a concept, help me workshop this thing. And what about that?" And if I want to say, "It's going to be this character class, what are maybe some ways I could develop it that are unusual?" and just sort of step by step kind of choose your own adventure. And it kind of walking me through the process has been really fun. STEPHANIE: Nice. Yeah, the way you're talking about it makes a lot of sense to me how asking it to help you, not necessarily do all of it, like, you know, kind of just spit out something that you're like, okay, like, that's what I'm going to use, approaching it as a tool, and yeah, that's really fun. Have you had good experiences then playing with those characters [chuckles]? JOËL: I have. I think it's also really great for sort of padding out some of the content. So, I had a character I played who was a washed-up politician. And at one point, I knew that I was going to have to make a campaign speech. And I asked ChatGPT, "Can you help me, like...here are the themes I want to hit. Give me a, like, classic, very politician-sounding speech that sounds inspiring but also says nothing at the same time." And it did a really good job of that. And you can tell it, "Oh, that's too long. That's too short. I want three sentences. I want five sentences." And that was great. So, I saved that, brought it to the table, and read out my campaign speech, and it was a hit. STEPHANIE: Amazing. That's really fun. I like that because, yeah, I don't think...I am so poor at just improvising things like that, even though, like, I want to really embody the character. So, that's cool that you found a way to help you be able to do that because that just feels like kind of what playing D&D can be about. JOËL: I've never DM'd, but I could imagine a situation where, because the DMs have to improv so much, and you know what the players do, I could imagine having a tool like that available behind the DM screen being really helpful. So, all of a sudden, someone's just like, "Oh, I went to a place," and, like, all of a sudden, you have to, like, sort of generate a village and, like, ten characters on the spot for people that you didn't expect, or an organization or something like that. I could imagine having a tool like that, especially if it's already primed with elements from your world that you've created, being something really helpful. That being said, I've never DM'd myself, so I have no idea what it actually is like to be on the other side of that screen. STEPHANIE: Cool. I mean, if you ever do try that or have a DM experience and you're like, hmm, I wonder kind of how I might be able to help me here, I bet that would be a very cool experience to share on the show. JOËL: I definitely have to report back here. 
Something that I've been thinking about a lot recently is the difference between sort of professional growth and experience, so the time that you put into doing work. Particularly maybe because, you know, we spend part of our week doing client work, and then we have part of the week that's dedicated to maybe more directly professional growth: our investment day. How do we grow from that, like, four days a week where we're doing client work? Because not all experience is created equal. Just because I put in the hours doesn't mean that I'm going to grow. And maybe I'm going to feel like I'm in a rut. So, how do I take those four days a week that I'm doing code and transform that into some sort of growth or expansion of my knowledge as a developer? Do you have any sort of tactics that you like to use or ways you try to be a little bit more mindful of that? STEPHANIE: Yeah, this is a fun question for me, and kind of reminds me of something we've talked a little bit about before. I can't remember if it was, like, on air or just separately, but, you know, we talk a lot about, like, different learning strategies on the show, I think, because that's just something you and I are very into. And we often, like, lean on, you know, our investment day, so our Fridays that we get to not do client work and kind of dedicate to professional development. But you and I also try to remember that, like, most people don't have that. And most people kind of are needing to maybe find ways to just grow from the day-to-day work that they do, and that is totally possible, I think. And some of the strategies that I have are, I guess, like, it is really...it can be really challenging to, like, you know, be like, okay, I spent 40 hours doing this, and like, what did I learn [chuckles]? Feeling like you have to have something to show for it or something to point to. And one thing that I've been really liking is these automated check-ins we have at the end of the week. And, you know, I suspect that this is not that uncommon for just, like, a workplace to be like, "Hey, like, how did your week go? Like, what are some ways that it was successful? Like, what are your challenges? Like, where do you need support or help?" And I think I've now started using that as both, like, space for giving an update on just, like, business-y things. Like, "Here's the status of this project," or, like, "Here's, you know, a roadblock that we faced that took some extra time," or whatever. Then also being like, oh, this is a great time to make this space for myself, especially because...I don't know about you, but whenever I have, like, performance review time and I have to write, like, a self-review, I'm just like, did I do anything in the last six months [laughs], or how have I grown in the last six months? It feels like such a big question, kind of like you were talking about that blank page syndrome a little bit. But if I have kind of just put in the 10 minutes during my Friday to be like, is there something that was kind of just for me that I can say in my check-in? I can go back and, yeah, just kind of start to see just, like, you know, pick out or just pay attention to how, like, my 40 hours is kind of serving me in growing in the ways that I want to and not just to deliver code [laughs]. JOËL: What you're describing there, that sort of weekly check-in and taking notes, reminds me of the practice of journaling. Is that something that you've ever tried to do in your, like, regular life? STEPHANIE: Oh yeah, very much so. 
But I'm not nearly as, like, routine about it in my personal life. But I suspect that the routine is helpful in more of a, like, workplace setting, at least for me, because I do have, like, more clear pathways of growth that I'm interested in or just, like, something that, I don't know, not that it's, like, expected of everyone, but if that is part of your goals or, like, part of your company's culture, I feel like I benefit from that structure. And yeah, I mean, I guess maybe that's kind of my way of integrating something that I already do in my personal life to an environment where, like I said, maybe there is, like, that is just part of the work and part of your career progression. JOËL: I'm curious about the frequency. You mentioned that you sort of do this once a week, sort of a check-in at the end of the week. Do you find that once a week is about the right frequency versus maybe something like daily? I know a lot of these sort of more modern note-taking systems, Roam Research, or Obsidian, or whatever, have this concept of, like, a daily note that's supposed to encourage something that's kind of like journaling. Have you ever tried something more on a daily basis, or do you feel like a week is about...or once a week is about the right cadence for you? STEPHANIE: Listen, I have, like, complicated feelings about this because I think the daily note is so aspirational for me [laughs] and just not how I work. And I have finally begrudgingly come to accept this no matter how much, like, I don't know, like, bullet journal inspirational content I consume on the internet [laughs]. I have tried and failed many a time to have more frequency in that way. But, I don't know, I think it almost just, like, sets me up for failure [laughs] because I have these expectations. And that's, like, the other thing. It's like, you can't force learning necessarily. I don't know if this is, like, a strategy, but I think there is some amount of, like, making sure that I'm in the right headspace for it and, you know, like, my environment, too, kind of is conducive to it. Like, I have, like, the time, right? If I'm trying to squeeze in, I don't know, maybe, like, in between meetings, 20 minutes to be like, what did I learn from this experience? Nothing's coming out [laughs]. That was another thing that I was kind of mulling over when he had this topic proposed is this idea of, like, mindset and environment being really important because you know when you are saying, like, not all time is created equal, and I suspect that if, you know, either you or, like, the people around you and the environment you're in is not also facilitating growth, and, like, how much can you really expect for it to be happening? JOËL: I mean, that's really interesting, right? The impact of sort of a broader company culture. And I think that definitely can act as a catalyst for growth, either to kind of propel you forward or to pull you back. I want to dig into a little bit something you were saying about being in the right headspace to capture ideas. And I think that there's sort of almost, like, two distinct phases. There's the, like, capturing data, and information, and experiences, and then, there's synthesizing it, turning information into learning. STEPHANIE: Yes. JOËL: And it sounds like you're making a distinction between those two things, specifically that synthesis step is something that has to happen separately. 
STEPHANIE: Ooh, I don't even...I don't know if I would necessarily say that I'm only talking about synthesis, but I do like that you kind of separated those categories because I do think that they are really important. And they kind of remind me a lot about the scientific method a little bit where, you know, you have the gathering data and, like, observations, and you have, you know, maybe some...whatever is precipitating learning that you're doing maybe differently or new. And that also takes time, I think, or intention at least, to be like, oh, do I have what I need to, like, get information about how this is going? And then, yeah, that synthesis step that I think I was talking about a little bit more. But I don't think either is just automatic. There is, I think, quite a bit of intention involved. JOËL: I think maybe the way I think about this is colored by reading some material on the Zettelkasten method of note-taking, which splits up the idea of fleeting notes and literature notes, which are sort of just, like, jotting down ideas, or things you've seen, things that you've learned, maybe a thought you had when you read a particular paragraph in a blog post, something like that. And then, the permanent notes, which are more, like, fully formed thoughts that arise out of the more fleeting ones. And so, the idea is that the fleeting ones maybe you're taking those in a notebook if you're doing it pen and paper. You could be doing it in some sort of, like, daily note, or something like that. And then, those are temporary. They were there to just capture information. Later on, you process that, and then you can throw them out if you need to. STEPHANIE: Yeah, that makes a lot of sense. This has actually been a shift for me, where I used to rely a lot more on memory and perhaps, like, didn't have a great system for taking things like fleeting notes and, like, documenting kind of [inaudible 18:28] what I was saying earlier about how do I make sure that the information is recorded, you know, for me to synthesize later? And I have found a lot more success lately in that fleeting note style of operating. And thanks to Obsidian honestly, now it's so easy to be like, oh, I'm just going to open a quick new file. And I need as little friction as possible to, like, put stuff somewhere [laughs]. And, actually, I'm excited to talk a little bit more about this with you because I think you're a little bit different where you somehow find the time [laughs] and care to create your diagrams. I'm like, if I can, for some reason, even get an Obsidian file open, I'll tab to Slack. And I send myself a lot of notes in my just own personal DM space. In fact, it's actually kind of embarrassing because I use the Command+K shortcut to navigate to my own personal DMs, which you can get to by typing me, like, M-E. And sometimes I've accidentally just entered that into a channel chat [laughs], and then I have to delete it really quick later when I realize what I've done. So, yeah, like, I meant to navigate to my personal notes, and I just put in our team chat, "Me [laughs]." And, I don't know, I have no idea how that comes up [laughs], what people think is going on. But if anyone's listening to this podcast from thoughtbot and has seen that of me, that's what happened. JOËL: You may not be the only one who's done that. STEPHANIE: Thank you. Yeah [laughs], that's good to know. 
JOËL: I want to step back a little bit because we've been talking about, like, introspection, and synthesis, and finding moments to capture information. And I think we've sort of...there's an unspoken assumption here that a way to kind of turbocharge learning from day-to-day experience is some form of synthesis or self-reflection. Would you agree with that statement? STEPHANIE: Okay. This is another thing that I am perhaps, like, still trying to figure out, and we can figure it out together, which is separating, like, self-driven learning and, like, circumstance-driven learning. Because it's so much easier to want to reflect on something and find time to be, like, oh, like, how does this kind of help my goals or, like, what I want to be doing with my work? Versus when you are just asked to do something, and it could still be learning, right? It could still be new, and you need to go do some research or, you know, play around with a new tool. But there's less of that internal motivation or, like, kind of drive to integrate it. Like, do you have this distinction? JOËL: I've definitely noticed that when there is motivation, I get more out of every hour of work that I put in in terms of learning new things. The more interest, the more motivation, the more value I get per unit of effort I put in. STEPHANIE: Yeah. I think, for me, the other difference is, like, generative learning versus just kind of absorbing information that's already out there that someone else's...that is kind of, yeah, just absorbing rather than, like, creating something new from, like, those connections. JOËL: Ooh. STEPHANIE: Does that [chuckles] spark something for you? JOËL: The gears are turning in my head because I'm almost hearing that as, like, a passive versus active learning thing. But just sort of like, I'm going to let things happen to me, and I will come out of that with some experience, and something is going to happen. Versus an active, I am going to, like, try to move in a direction and learn from that and things like that. And I think this maybe connects back to the original question. Maybe this sort of, like, checking in at the end of the week, taking notes is a way to convert something that's a bit more of a passive experience, spending four days a week doing a project for a client, into something that's a little bit of a more active learning, where you say, "Okay, I did four weeks of this particular type of Rails work. What do I get out of it? What have I learned? What is something new that I've seen? What are some opinions I have formed, patterns I like or dislike?" STEPHANIE: Yeah, I like that distinction because, you know, a few weeks ago, we were at RailsConf. We had kind of recapped it in a previous episode. And I think we had talked about like, oh, do we, like, to sit in talks or participate in workshops? And I think that's also another example of, like, passive versus active, right? Because I 100%, like, don't have the same type of learning by just, you know, listening to a talk that I do with maybe then going to look up, like, other things this person has put out in the world, finding them to talk to them about it, like, doing something with the content, right? Otherwise, it's just like, oh yeah, I heard this talk. Maybe one day I'll remember it when the need arises [laughs]. I, like, have a pointer to it in my brain. But until then, it probably just kind of, like, sits there, and nothing's really happened with it. 
JOËL: I think maybe another thing that's interesting in that passive versus active distinction is that synthesis is inherently an act of creation. You are now creating new ideas of your own rather than just capturing information that is being thrown at you, either by sitting in a talk or by shipping tickets. The act of synthesizing and particularly, I think, making connections between ideas, either because something that, let's say you're in a talk, a speaker said that sparks an idea for yourself, or because you can connect something that speaker said with another idea that you already have or an idea that you've seen elsewhere. So, you're like, oh, the thing this person is saying connects to this thing I read in a book or something another speaker said in an earlier session, or something like that. All of a sudden, now you're creating these new bits of knowledge, new perspectives, maybe even new mental models. We talked about mental models last week. And so, knowledge is not just the facts that you absorb or memorize. A lot of it is building the connections between those facts. And those are things that are not always given to you. You have to create them yourself. STEPHANIE: Yeah, I am nodding my head a lot because that's resonating with, like, an experience that I'm having kind of coaching and mentoring a client developer on my team who is earlier in her career. And one thing that I've been really, like, working on with her is asking like, "Oh, like, what do you think of this?" Or like, "Have you seen this before? What are your reactions to this code, or, like this comment?" or whatever. And I get the sense that, like, not a lot of people have prompted her to, like, come up with answers for those kinds of questions. And I'm really, really hopeful that, like, that kind of will help her achieve some of the goals that she's, like, hoping for in terms of her technical growth, especially where she's felt like she's stagnated a little bit. And I think that calls back really well to what you said at the beginning of, like, you can spend years, right? Just kind of plugging away. But that's not the same as that really active growth. And, again, like, that's fine if that's where you're at or want to be at for a little while. But I suspect if anyone is kind of, like, wondering, like, where did that time go [laughs]...even for me, too, like, once someone started asking me those questions, I was like, oh, there's still so much to figure out or explore. And I think you're actually really good at doing that, asking questions of yourself. And then, another thing that I've picked up from you is you ask questions about, like, what are questions other people would have? And that's a skill that I feel like I still have yet to figure out. I'm [chuckles] curious what you think about that. JOËL: That's interesting because that kind of goes to another level. I often think of the questions other people would have from a more, like, pedagogical sense. So, I write a lot of blog posts. I write a lot of talks that I give. So, oftentimes when I'm creating that kind of material, there's a bit of an inner critic who's trying to, you know, sitting in the audience listening to myself speak, and who's going to maybe roll their eyes at certain points, or just get lost, or maybe raise their hand with a question. And that's who I try to address those things so that then when I go through it the next time, that inner critic is actually feeling engaged and paying attention. 
STEPHANIE: Do you find that you're able to do that because you've seen that happen enough times where you're like, oh, I can kind of predict maybe what someone might feel confused about? I'm curious, like, how you got from being, like, well, I know what I would be confused about to what would someone else be unsure or, like, want more information about. JOËL: Part of the answer there is that I'm a very harsh critic myself. STEPHANIE: [laughs] Yes. JOËL: So, I'm sitting in somebody else's talk, and there are probably parts where I'm rolling my eyes or being like, wait a minute, how did you get from this idea to this other thing? That doesn't follow. And so, I try to turn that back towards myself and use that as fuel to make my own work better. STEPHANIE: Yeah, that's cool. I like that. Even if it's just framed as, like, a missed opportunity for people to have better or more comprehensive understanding. I know that's something that you're, like, very motivated to help kind of spread more of [laughs]. Understanding and learning is just important to you and to me. So, I think that's really cool that you're able to find ways to do that. JOËL: Well, you definitely want to, I think, to keep a sort of beginner's mindset for a lot of these things, and one of the best ways to do that is to work with beginners. So, I spent a lot of time, back in the day, for example, in the Elm language chat room, just helping people answer basic questions, looking up documentation, explaining sort of basic concepts. And that, I think, helped me get a sense of like, where were newcomers to the language getting stuck? And what were the explanations of those concepts that really connected? Which I could then translate into my work. And I think that that made me a better developer and helped me build this, like, really deep understanding of the underlying concepts in a way that I wouldn't have had just writing code on my own. STEPHANIE: Wow, forum question answering hero. I have never thought to do that or felt compelled to do that. But I remember my friend was telling me, she was like, "Yeah, sometimes I just want to feel good about myself. And I remember that I know things that other people, like, are wanting to find out," and she just will answer some easy questions on Stack Overflow, you know, about, like, basic Rails stuff or something. And she is like, "Yeah, and that's doing my good deed [laughs]." And yeah, I think that it also, you know, has the same benefits that you were just saying earlier about...because you want to be helpful, you figure out how to actually be helpful, right? JOËL: There's maybe a sense as well that helping others, once more, forces you into more of an active mindset for growth in the same way that interrogating yourself does, except now it's a beginner who's interrogating you. And so, it forces you to think a little bit more about those whys or those places where people get stuck. And you've just sort of assumed it's a certain way, but now you have to, like, explain it and really get into some of the concepts. STEPHANIE: So, on the show, we've talked a lot about the fun things you share in the dev channel in our Slack workspace. But I recently discovered that someone (Was it you?) created an Obsidian MD channel for our favorite note-taking software. And in it, you shared a really cool tool that is available in Obsidian called mind maps. JOËL: Yeah, so mind maps are a type of diagram. They're effectively a tree structure, but they don't really look like that when you draw them out. 
You start with a sort of topic in the center, and then you just keep drawing branches off of that, going every direction. And then, maybe branches off branches and keep going as you add more content. Turns out that Mermaid.js supports mind maps as a graph type, and Obsidian embeds Mermaid diagrams. So, you can use Mermaid's little language to express a mind map. And now, all of a sudden, you have mind mapping as a tool available for you within Obsidian. STEPHANIE: And how have you been using that to kind of process and experience or maybe, like, end up with some artifacts from, like, something that you're just doing in regular day-to-day work? JOËL: So, kind of like you, I think I have the aspiration of doing some kind of, like, daily note journaling thing and turning that into bigger ideas. In practice, I do not do that. Maybe that's the thing that I will eventually incorporate into my practice, but that's not something that I'm currently doing. Instead, a thing that I've done is a little bit more like you, but it's a little bit more thematically chunked. So, for example, recently, I did several weeks of work that involved doing a lot of documentation for module-level documentation. You know, I'd invested a lot of time learning about YARD, which is Ruby's documentation system, and trying to figure out, like, what exactly are docs that are going to be helpful for people? And I wanted that to not just be a thing I did once and then I kind of, like, move on and forget it. I wanted to figure out how can I sort of grow from that experience maximally? And so, the approach I took is to say, let's take some time after I've completed that experience and actually sort of almost interrogate it, ask myself a bunch of questions about that experience, which will then turn into more broad ideas. And so, what I ended up doing is taking a mind-mapping approach. So, I start that center circle is just a circle that says, "My experience writing docs," and then I kind of ring it with a series of questions. So, what are questions that might be interesting to ask someone who just recently had experience writing documentation? And so, I come up with 4,5,6 questions that could be interesting to ask of someone who had experience. And here I'm trying to step away from myself a little bit. And then, maybe I can start answering those questions, or maybe there are sub-questions that branch off of that. And maybe there are answers, or maybe there are answers that are interesting but that then trigger follow-up questions. And so I'm almost having a conversation with myself and using the mind map as a tool to facilitate that. But the first step is putting that experience in the center and then ringing it with questions, and then kind of seeing where those lead. STEPHANIE: Cool. Yeah, I am, like, surprised that you're still following that thread because the module docs experience was quite a little bit a while ago now. We even, you know, had an episode on it that I'll link in the show notes. How do you manage, like, learning new things all the time and knowing what to, like, invest energy and attention into and what to kind of maybe, like, consider just like, oh, like, I don't know, that was just an experience that I had, and I might not get around to doing anything with it? JOËL: I don't know that I have a great system. 
I think sometimes when I do, especially a more prolonged chunk of time doing a thing, I find it really worthwhile to say, hey, I don't want that to sort of just be a thing that was in my memory, and then it moves out. I'd like to pull out some more maybe practical or long-term ideas from it. Part of that is capture, but some of that is also synthesis. I just spent two weeks or I just spent a month using a particular technology or doing a new kind of task. What do I have to show for it? Are there any, like, bigger ideas that I have here? Does this connect with any other technologies I've done or any other ideas or theories? Did I come up with any opinions? Did I like this technology? Did I not? Are there elements that were inspirational? And then capturing some of that eventually with the idea of...so I do a sort of Zettelkasten-style permanent note collection, the idea to create at least a few of those based off of the experience that I can then connect to other things. And maybe it eventually turns into other content. Maybe it's something I hold onto for a while. In the case of the module docs, it turned into a Bike Shed episode. It also turned into a blog post that was published this past week. And so, it does have a way of coming back. STEPHANIE: Yeah. Yeah. One thing that sparked for me was that, you know, you and I spend a lot of time thinking about, like, the practice of writing software, you know, in the work we do as consultants, too. But I find that, like, you can also apply this to the actual just your work that you are getting paid for [laughs]. This was, I think, a nascent thought in the talk that I had given. But there's something to the idea of, like, you know, if you are working in some code, especially legacy code, for a long time, and you learn so much about it, and then what do you have to show for it [chuckles], you know? I have really struggled with feeling like all of that work and learning was useful if it just, like, remains in my memory and not necessarily shared with the team or, I don't know, just, like, knowing that if I leave, especially since I am a contractor, like, just recognizing that there's value in being like, oh, I spent an hour or, like, half a day sifting through this complex legacy code just to make, like, a small change. But that small change is not the full value of all of the work that I did. And I suspect that, like, just the mind mapping stuff would be really interesting to apply to more. It's not, like, just practical work, but, like, more mundane, I don't know, like, labor [laughs], if you will. JOËL: I can think of, like, sort of two types of knowledge that you can take out of something like that. Some of it is just understanding how this legacy system works, saying, oh, well, they have this user model that's connected to this old persona table, which is kind of unused, but we sometimes rely for in this legacy case. And you've got to have this permission flag turned on and, like, all those things that you had to just discover by reading the code and exploring. And that's going to be useful to you as long as you work in that legacy codebase, as long as you work through that path. But when you move on to another project, that knowledge probably doesn't serve you a whole lot. There are things that you did throughout that journey, though, that you can probably pull out that are going to be useful to you on other projects. 
And that might be maybe you came up with a new way of navigating the code or a new way of, like, finding how different pieces were connected. Maybe it was a diagramming tool; maybe it was some sort of gem. Maybe it was just a, oh, a heuristic, like, when I see a model, I like to follow the associations first. And I always go for the has_manys over the belongs_tos because those generally lead me in the right direction. Like, that's really interesting insight, and that's something that might serve you on a following project. You can also pull out bigger things like, are there refactoring techniques that you experimented with or that you learned on this project that you would use again elsewhere? Are there ways of maybe quarantining scary code on a legacy project that are a thing that you would want to make more consistent part of your practice? Those are all great things to pull out of, just a like, oh yeah, I did some work on a, like, old legacy part of an app. And what do I have to show for it? I think you can actually have a lot to show for it. STEPHANIE: Yeah, that's really cool. That sounds like a sure way of multiplying the learning. And I think I didn't really consider that when I was first talking about it, too. But yeah, there are, like, both of those things kind of available to you to, like, learn from. Yeah, it's like, that time is never just kind of, like, purely wasted. Oh, I don't know, sometimes it really feels like that [laughs] when you are debugging something really silly. But yeah, like, I would be interested in kind of thinking about it from both of those lenses because I think there's value in what you learn about that particular system in that moment of time, even if it might not translate to just future works or future projects. And, like, that's something that I think we would do better at kind of capturing, and also, there's so much stuff, too, kind of to that higher level growth that you were speaking to.
So, building in an amount of, like, self-reflection into the loop all of a sudden catalyzes that learning and helps you grow at a rate that's much more than if you're just kind of mindlessly putting time into it. So, I would go so far as to say that self-reflection, synthesis—those are all things that are probably going to catalyze growth in most areas of your life if you're being a little bit more self-aware. But I've found that it's been particularly useful for me when it comes to trying to get better at the job that I do every week. STEPHANIE: Yeah, I think, for me, it's like, yeah, getting better at being a developer rather than being, you know, a software developer at X company. Like, not necessarily just getting better at working at that company but getting better at the skill itself. JOËL: And those two things have a way of sort of, like, folding back into themselves, right? If you're a better software developer in general, you will probably be a better developer at that company. Yes, you want domain knowledge and, like, a deep understanding of how the system works is going to make you a better developer at that company. But also, if you're able to find more generic approaches to onboard onto new things, or to debug more effectively, or to better read or understand unknown code of high complexity, those are all going to make you much better at being a developer at that company as well. And they're transferable skills, so they're all really good things to have. STEPHANIE: On that note. Shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Code and the Coding Coders who Code it
Episode 36 - Live (at the time) from RailsConf 2024

Code and the Coding Coders who Code it

Play Episode Listen Later Jun 18, 2024 29:45 Transcription Available


In this special crossover episode recorded live from RailsConf 2024 in Detroit, join us for a unique gathering of prominent Ruby podcasters. Drew teams up with Elise from the 'Ruby on Rails' podcast, Jason from 'Code with Jason,' Joël from 'The Bike Shed,' and Julie from 'Ruby for All.' The group discusses their experiences at RailsConf, including workshops, talks about Test Driven Development (TDD), and building dynamic applications with Turbo. They delve into the implications of RailsConf being discontinued after 2025, the thriving local Ruby conference scene, and share candid moments about their interactions with the community. Additionally, they touch upon diverse topics such as Detroit-style pizza, hot dog eating capacities, and food opinions, blending technical insights with light-hearted banter. The episode concludes with gratitude for the well-coordinated event and excitement for future Ruby gatherings. Enjoy!
Panelists: Julie J., Elise Shaffer, Jason Swett, Drew Bragg, Joël Quenneville
Links: Julie J. Twitter, Julie J. Website, Ruby for All Podcast, Jason Swett Twitter, Code with Jason Website, Joël Quenneville Twitter, Joël Quenneville Website, The Bike Shed Podcast, Elise Shaffer Website, The Ruby on Rails Podcast, RailsConf 2024
Send us some love.
Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the Show.
Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Ruby on Rails Podcast
Episode 517: Rails Conf 2024 Crossover: Off The Rails

Ruby on Rails Podcast

Play Episode Listen Later Jun 12, 2024 29:32


At Rails Conf 2024, I gathered several of our favorite Ruby podcasters for a giant crossover episode! In this episode we have:
* Drew Bragg from Code and the Coding Coders Who Code It - https://podcast.drbragg.dev/
* Julie J from Ruby For All - https://www.rubyforall.com/
* Jason Swett from Code with Jason - https://www.codewithjason.com/
* Joël Quenneville from The Bike Shed - https://bikeshed.thoughtbot.com/
Sponsors
Honeybadger (https://www.honeybadger.io/) As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!

The Bike Shed
428: Ruminating on Ruby Enumerators

The Bike Shed

Play Episode Listen Later Jun 11, 2024 35:44


Joël explains his note-taking system, which he uses to capture his beliefs and thoughts about software development. Stephanie recalls feedback from her recent RailsConf talk, where her confidence stemmed from deeply believing in her material despite limited rehearsal. This leads to a conversation about the value of mental models in building a comprehensive understanding of a topic, which can foster confidence and adaptability during presentations and discussions. The episode then shifts focus to the practical application of enumerators in Ruby, exploring various mental models to understand their functionality better. Joël introduces several metaphors, such as enumerators as cursors, lazy collections, and sequence generators, which help demystify their use cases. Episode on note-taking (https://bikeshed.thoughtbot.com/357) What we believe about software (https://bikeshed.thoughtbot.com/172) Ruby Enumerators (https://ruby-doc.org/3.3.1/Enumerator.html) Enumerator Lazy (https://ruby-doc.org/3.3.1/Enumerator/Lazy.html) Modeling a Paginated API as a lazy stream (https://thoughtbot.com/blog/modeling-a-paginated-api-as-a-lazy-stream) Solving a memory performance issue with enumerator (https://thoughtbot.com/blog/how-we-used-a-custom-enumerator-to-fix-a-production-problem) Find in batches (https://api.rubyonrails.org/classes/ActiveRecord/Batches.html#method-i-find_in_batches) Binary tree implementation with different traversals (https://gist.github.com/JoelQ/02f3ef9f61bebc7c8e5ea67d10ed92c6) Teaching Ruby to Count (https://www.youtube.com/watch?v=PHMOsTK1jSE) Transcript:  STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So, what's new in my world isn't exactly a new thing. I've talked about it on the podcast here before, and it's my note-taking system. I have a system where I try to capture notes that are things I believe about software or things I think are probably true about software. They're chunked up in really small pieces, such that every note is effectively one small thesis statement and a paragraph of text, and maybe a diagram or a code snippet to support that. And then, it's highly hyperlinked to other notes. So, I sort of build out some thoughts on software that way. A thing that I've done recently that's been pretty exciting with that is introducing a sort of separate set of notes that connect to my sort of opinion notes. So, I create individual notes for public works that I've done, things like blog posts or conference talks. Because a lot of those are built on top of ideas that have been sitting in my note system for a while. Readers and listeners get to sort of see the final product, but often sort of built up over several months or even a couple of years as I added different notes that kind of circled a topic and then eventually got to a thing. What I did, though, was actually making those connections explicit. And so I use Obsidian. Obsidian has this cool graph view where it just sort of shows all of the notes, and it circles them with, like, connections between them where the notes connect. 
So, I can now see in a visual format how my thoughts cluster in different topics, but then also which clusters have talks and blog posts hanging off of them and also which ones don't, which ones are like, oh, I have a lot of thoughts on this topic, and I've not yet written about it in a public forum; maybe that would be a thing to explore. So, seeing that visual got me really excited. I was having a good time. STEPHANIE: Yes, I have several thoughts coming to mind in response, which is, I know you love a visual. I really like the system of, even if you have created content for it, like, you have a space for, like, thoughts about it to evolve. Because you said, like, sometimes content comes out of notes that you've been...or, like, thoughts you've been having over years, but it's like, even afterwards, I'm sure there will still be new thoughts about it, too. I always have a hard time finding a place for that thing kind of once I, I don't know, it's like some of that stuff is never really considered done, right? So, that is really cool. And I also was just thinking about an old episode of The Bike Shed back when Chris Toomey and Steph Viccari hosted the podcast called "What We Believe About Software," I think, is the title. And I was just thinking about how, like, if only we could just dump all of your notes [laughs] into some, you know, stream [laughs], and that would be really cool. If we ever do, like, an episode like that, that would be really fun. And I'm sure, you know, you already have this, like, huge bank of ideas [laughs]. JOËL: Yes. It is really fun because I build up...the thoughts are often sort of interconnected, and so they might have a topic, but they are very focused. So, I might have, like, three or four things I believe about a particular topic that cluster together. So, we could...and, actually, I have used, in the past, some of those clusters as initial food for thought for a Bikeshed episode. STEPHANIE: Yeah, that's really neat. I like this idea of a kind of just, like, a repository for putting down what you believe about software as kind of, like, guiding principles for yourself as a developer a little bit. I remember a piece of feedback I got about my RailsConf talk that I gave a few weeks ago, and someone said like, "Oh, you sounded really confident in what you were talking about." And that surprised me because I, like, didn't practice rehearsing giving the talk all that much [laughs]. It's because they had asked like, "Oh, like, did you practice a lot?" or something like that. And I think I realized that I, like, really believed in what I was sharing and kind of that, I think, was perhaps what they were picking up on. And even though, like, maybe the rehearsal of the presentation itself was not where I had spent a lot of time on, I had spent a lot of time thinking about what I wanted to share and just building up my confidence around that. So, I thought that was an interesting connection. JOËL: Yeah, you fully developed the idea. You kind of explored all the side trails, maybe a little bit on your own as well. You're on very familiar terrain. And so, that is a way of building confidence separate from just sort of memorizing a talk. STEPHANIE: Yeah, yeah. Exactly. JOËL: In a sense, I almost feel like that's a better sense of confidence because then you can sort of...you can roll with the punches. You know, if a slide is out of order or something, sure, it maybe messes up a little bit of the narrative that you're trying to say. 
But you're not like, "Oh no, what is this content?" You're like, "Oh yeah, this thing," and you can dive right into it. Somebody asks you a question, and you're not like, "Oh no, that was not in the script," because, again, you've sort of mastered your topic. You know the area as a whole, even sort of the blurry edges beyond the talk, and can react in a way that is pretty confident. STEPHANIE: Yeah. I still definitely fear the open Q&A. I've never done it before, but maybe one day I will be able to because I just, you know, know my topic so well inside and out [laughs] that I can roll with the punches, as you say. JOËL: Open Q&A is just...it's a roll of the dice. Sometimes, you get some really good conversation topics there, and sometimes, it's just a waste of everyone's time. STEPHANIE: I like that take [laughs]. JOËL: Maybe that should go into the things I believe about software. So, other than receiving feedback about your RailsConf talk, what is new in your world? STEPHANIE: Yeah, so I am wrapping up a pretty large project on my client work that we're hoping to release soon. And, in fact, it's actually being released along with a big announcement from the client company to their customers. Essentially, at a conference, they're going to say like, "Hey, like, we now have this new feature." And so, I think there's some hype generated around it. And this past week, we've been doing a lot of internal testing of the feature because there are a lot of employees of my client company who are, like, pretty big users of the product, which is cool because I think we're getting, you know, we have easy access to people who can give us good feedback. But I am having a hard time with being on the receiving end of the feedback and figuring out, like, what is stuff I need to attend to now before, you know, this big release? And what is stuff that is just kind of, like, general feedback like, "Oh, like, I wish it did this," but, you know, it turns out that that's not really what we were building? And how do I just kind of, like, accept that? You know, it's coming from a good place, but I can't really help them there, at least right now. And that's hard for me because I like helping people, right? And so, if someone says something like, "Oh, like, I wish it did this," or like, "Oh, that's kind of weird," I'm like, "Oh, I want to just, like, fix that for you right now [laughs]." And I suspect that a lot of other devs can relate to this, especially if, like, you know, you've been working on something for a little bit, and it feels...I'm just going to say it: it feels a little precious to me. So, what I'm trying to do today, actually, is not look at any of the feedback at all [laughs] and come at it tomorrow with a bit of a calmer vibe and be able to separate out, like, you know, I think all feedback is informative, but not all of it is useful for you at any given moment. Like, if there are bugs, then those will be my immediate priority. If there's maybe some small tweaks that we can make the feature just a little bit more polished, then I also think those are good. But then we are discovering a few things, too, about, like, what this feature is or could be. And I think those are the things that, you know, need to be brought into a conversation with a broader group and think about, like, is this the direction we want to go? So, that's kind of how I'm bucketing that feedback right now. JOËL: How do you feel about receiving direct feedback versus having something filtered through something like a product team? 
STEPHANIE: Ooh, that's an interesting question. Because right now we're doing, I think, a mix of both that I'm not sure that I really like. On one hand, when it's filtered, it's hard to get to the root of what someone is asking for. And oftentimes, like, it may not even include enough information after the fact to be able to come at it from a dev perspective. But then direct feedback, I think, is just a little bit overwhelming sometimes. And it can be hard to figure out what to pay attention to if you don't have that, like, input from a product team about, like, what the roadmap is looking like or where, you know, strategically their heads are at. So, one thing that kind of has emerged from this is like, oh, I was getting, you know, notifications for the feedback coming in. And what we did was set up a meeting [laughs] so that we can...maybe all of us can, like, scan it together ahead of time and then come at it with a little bit of context about what's come in but then maybe coalesce around the things that we feel are important. JOËL: Well, you'll have to keep us updated on how that plays out, and we can kind of hear what is the balance that ends up working well for you. STEPHANIE: Yeah, I hope so. I think this is actually maybe something that's a bit underexplored from the dev perspective, you know, that in-between stage of you're not totally done because it's not shipped to the world yet, but, you know, you're starting to get a little bit of that input. And what you do with that? Because I think there is some value in being engaged in that process. JOËL: So, we were talking earlier about this note-taking system that I use and sort of a renewed excitement that I have about it. And one thing that I did when I was going through and finding clusters of things that hadn't been written about was I found that I had a cluster of notes on different mental models that I had for understanding Ruby enumerators, not the enumerable module, but the enumerator object. And I decided, you know what? This would probably make for a good blog post. So, I drafted a blog post, and I've been thinking about this a little bit more recently. So, I've been really hyped about digging into enumerators because of that experience. STEPHANIE: Yeah, that's very cool. I have to say that I feel like I did not know a lot about enumerators and the API for them kind of before you brought this topic up, and I did a bit of a deep dive in preparation for us to discuss it. I feel like most devs, you know, work with enumerators via methods on enumerable without totally knowing that they are. So, I think that this would be a really interesting episode for people to be like, oh, like, I've been using this stuff, you know, the whole time, and now I can have a different perspective or just more insight on what they can do. JOËL: Before we dig into individual mental models, though, I want to think a little bit about the concept of mental models as a whole. Years ago, someone gave me advice to sort of pay attention to mental models, ways I think about the world or different code structures, different code approaches, and that really stuck with me. So, I've since been, like, kind of, like, collecting mental models. And, in a way, they're like a, for me, a bit more of a concrete way to look at a particular topic. So, I can say I'm looking at this particular topic through the lens of a particular mental model that helps me build more clarity around it. 
And if I have three or four, then I can kind of look at it from three or four different perspectives. And now, all of a sudden, I feel like I'm seeing in three dimensions. STEPHANIE: Whoa, the Matrix even [laughs]. That's cool. Yeah, I really like that advice. I think I'm going to steal it and start kind of suggesting it to other people because I think, in a way, on this show, that has come through a lot. And talking about things on the podcast has helped me develop a lot of my mental models. And I think we've done a few, like, episodes in the past about various ones we have for just our work because it's like, that's infinite [laughs]. But what I really have been appreciating is that mental models just need to work for you. As long as you're able to understand something, then it's valuable. And that has really helped me also, like, just get on the same understanding with others because the goal is not necessarily to, like, explain it the way that I would think of it, but figure out what would help them kind of develop their own mental model for understanding something, and, you know, kind of as long as we both feel like we have that shared understanding, no matter what lens it's through. And, you know, sometimes it's even more effective when we are able to share it. But I feel like, you know, you can still find ways to collaborate on something with a diversity of mental models. JOËL: Yeah, they're a great way to build self-understanding. They're a great way to sort of build understanding between two people. So, I'm a huge fan of the concept. And part of what I've been doing with my note-taking system is trying to capture those as much as possible. If I'm ever, like, trying to understand a complex topic and I'm like, oh, I think I've got a breakthrough here; I understand it; it's kind of like this, or you can imagine it in this perspective, it's like, write that down. That's gold. STEPHANIE: Very cool. So, Joël, would you be able to share some of your mental models for enumerator? JOËL: So, one way that I look at it is the idea that an enumerator is effectively a cursor over a collection. So, you have an array and a regular array; you're either in the middle of iterating through it using something like each, or you're not. You just have a collection of items. Enumerator introduces the idea that you're actually sort of at a position in the array. So, you're sort of focused on, let's say, the third item or the fourth item. You have a cursor there, and you can move that cursor forward as you sort of step through. But the really cool thing is you can also kind of pause and just pass that cursor on to someone else, and someone else can move the cursor a few steps further down the collection, pause, pass it on to someone else. And it's totally fine. Nobody has to, like, go through an entire, like, each iteration. STEPHANIE: Yeah. So, when you were talking about cursors, that got me thinking a little bit because I actually have struggled with that concept, especially when it comes to, you know, things code-related. Like, when I've had to work with database things and stuff, like, the idea of a cursor was a little, like, difficult for me to wrap my head around. And I was looking at the methods on enumerator, like the instance methods on enumerator. And one of them actually is what helped me develop this mental model. And I'm excited to see what you think. But there is a rewind method that basically rewinds the sequence back to its beginning, right? 
And what that triggered for me was a VHS tape [laughs] and just those, like, car-shaped rewinders for tapes back in the '90s. I don't know if you ever had one in your house, but I did. And I just thought that was such a cool method name because it was very, I don't know, it was just like a word that we use in the English language, right? So, the idea of, like, tapes, you know, like, cassette tapes or VHS tape kind of also it sounds like it matches well with what you were sharing, too, where it's like, I could pass, I don't know, maybe I, like, listen to a few songs on my cassette tape, and then I give it to someone else, and they can pick up where I left off. And yeah, that was really helpful in understanding, like, a marker of a position a little more than cursor was able to for me. JOËL: That's really interesting because now I wonder, like, how far we could push that metaphor. So, musical data is encoded on magnetic tape. Cassette tapes typically there are sort of two spools. You start off with all of the tape wound up around one spool, and then as it sort of moves across the read head, it gets wound up on sort of the, I don't know, destination spool. I guess you can call them origin and destination. And because of that, you can sort of be in a, like, partly read state where, you know, half the tape is on the destination spool, half of it is on the origin spool, and you have that read head that's in the middle, and you're just kind of paused there. And you can kind of jump forward in that. So, I imagine something like that in your metaphor is like an enumerator. Contrast that to imagine just a single spool, which is just we have musical data encoded on magnetic tape, and we wrapped it up on a spool. I feel like that's almost more like a regular array because you don't have that concept of, like, position, or being able to read parts of it or anything like that. It's just, here's some data. STEPHANIE: Yeah. While you were talking about the two spools, I was thinking about, like, part of what is nice about enumerator is that you can go forward or backwards, right? And that feels a little more possible with that two-spool metaphor [laughs], rather than just unraveling something, where you are kind of discarding what has already been read. JOËL: The one caveat there is that enumerators can move forward one item at a time. They can only move backwards by jumping back to the beginning. So, you can't step forward or step back. STEPHANIE: Yeah, that's fair. JOËL: You step forward, or you, like, rewind to the beginning. I think, in my mind, I was thinking a little bit more about this metaphor. And I think it's also just a metaphor for what's called the External Iterator Pattern. It's one of the classic Gang of Four Patterns, which is what enumerator, the object in Ruby, is an implementation of. I feel like I always see that in the documentation, like, oh, enumerator is an implementation of the External Iterator Pattern. And I just kind of go, what? STEPHANIE: [laughs] JOËL: Or maybe I kind of understand the idea of, like, okay, it's a way to, like, be able to step through a collection. But thinking in terms of a cursor or even your model as a cassette tape, I think that gives me a model, not just for enumerators, but then for better understanding that external iterator pattern. 
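For anyone who wants to see the cursor idea in actual Ruby, here is a minimal sketch; the variable and method names are just illustrative, not something from the episode. Calling each on an array with no block hands back an Enumerator whose position you can advance with next, hand off to other code mid-iteration, and reset with rewind.

```ruby
letters = %w[a b c d].each   # calling each with no block returns an Enumerator

letters.next                 # => "a"
letters.next                 # => "b"

# Whoever holds the cursor can keep advancing it from where it paused.
def read_one_more(cursor)
  cursor.next
end

read_one_more(letters)       # => "c"

# rewind can only jump all the way back to the beginning,
# not step backwards one item at a time.
letters.rewind
letters.next                 # => "a"
```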
Like, I'm now not going to think of if I'm ever reading through the Gang Of Four book, or some other language says we're doing an External Iterator Pattern, and I'll immediately be like, oh, that's a cursor, or that's a cassette tape. STEPHANIE: Yeah, very cool. I like it. JOËL: Another mental model that I have is thinking of enumerator in terms of a lazy collection. This is something that you tend to see more in functional programming languages, so the idea that you have a collection of potentially infinite length, or it could even be unknown length. But each element only sort of comes into being as you attempt to read it. So, it's kind of, like, a potentially infinite chain of Schrödinger's boxes. And you've got to open each of them to find out what's inside. STEPHANIE: Do you know what this reminded me of? Like elementary school math questions that were like, "What comes next in this pattern?" And it has, like, you know, the first, like, four or five values in a sequence or something. And then, you have to figure out, like, what the next value is. But then, in some ways, you know, I think it can depend on whether your enumerator is using the previous value to determine the next one. But yeah, it's like, you can't just jump ahead to figure out what the 10th, you know, value in this pattern is without kind of knowing what's come before it. JOËL: And sort of that needing to step through the entire collection, sort of one element at a time. STEPHANIE: Yeah, exactly. JOËL: I think a way that that concept is interesting, to me, is situations where a collection might be expensive, and you don't necessarily need all of it. So, you might have a bunch of calculations, but you can stop when you've hit the first one that succeeds or that matches a certain criteria. And so, it's not worth it to calculate the entire array of calculations if you're going to stop at the third one. And you could do that with some sort of, like, loop or something like that. But having it as a collection means you get to just treat it like an array, and you can call detect on it and do all the nice things that you're used to. It just happens to be a little bit more efficient in terms of not creating more data than you need to. STEPHANIE: Yeah. And I think there's some really cool stuff you can do when you start chaining enumerators with this concept of it being lazy evaluated. So, one of the things I learned in my deep dive is that when you are using the lazy method, you're able to chain enumerators. And they work a bit differently, where the default functionality is, like, everything in the collection gets evaluated through the first method, and then it gets iterated over in the second method. Whereas if you use lazy, I believe how it works is that, like, the first value gets kind of processed by all of the methods. And then, you get, you know, the output before moving on to the second, like, the next value. Does that sound right? JOËL: Yes. And I think that's where there's often a lot of confusion because there's sort of plain enumerator, and then there's a lazy enumerator that Ruby provides. A plain enumerator is a lazy list in the sense that items don't get evaluated unless you try to reach for them. So, if you have an enumerator and you say, "Just give me the first five items," it will do that. And even if the collection was 200 items long, the next 195 don't get evaluated. So, that's very efficient there. Where you would get into trouble is that plain enumerators are not lazy when it comes to traversals.
So, any method that would traverse the entire collection, so something like a map or a select, is not going to be lazy because it's going to traverse the entire collection, therefore forcing us to evaluate each of the items in there. Whereas something like enumerable lazy will not actually traverse the collection when you do your map or you're selecting. It will wait for you to say, "Give me the first item," or "Give me the first ten items," or something like that. But you don't always need lazy. You really only need lazy when you're doing a traversal method. STEPHANIE: Okay. Cool, cool, cool. That makes a lot of sense. JOËL: I think a sort of spinoff metaphor that I have there is this idea of a lazy list. Another concept that, in my mind, is very adjacent to lazy lists is the concept of streams. And streams I typically think of them in terms of, like, files or networking, things like that. But a thing that you can do let's say you're working on data that's in a very large file, so big that you can't fit it into memory, a common solution there is streaming it. So, you don't load the entire file into memory and then operate on it. Instead, little chunks of it are loaded into memory. You operate on them, and then you release that memory and load the next chunk. So, you sort of work through that file in chunks, but you'd only have, you know, 1 line or ten lines or however big your chunk is in memory at a time. An enumerator allows you to do that with things that are not files. So, this could be a situation where, let's say, you're reading a lot of data from the database. You just have too many rows. You can't load them all into memory at once. But you do want to traverse through them. You could chunk that using enumerator so that every, you know, it loads 100 rows at a time or 1,000 rows at a time, or something like that. And your enumerator allows you to treat that as though it's a single array, even though, in the background, it's being chunked into pieces so that you never have more than a thousand rows at a time in memory. So, it allows you to do some, like, really nice sort of memory performance things. STEPHANIE: When would you want to use this over kind of something like batching queries? JOËL: So, I think ActiveRecord find_in_batches does something like this under the hood. STEPHANIE: Oh, cool. JOËL: I don't know if they use Ruby's enumerator or if they sort of build their own custom extension to it, but it's built on this idea. STEPHANIE: Okay, that's really neat. I have another mental model that I wanted to get your thoughts on. JOËL: Yeah! STEPHANIE: One of the ways that I looked up that you can construct an enumerator, an infinite enumerator like we were talking about a little bit earlier, was with the produce class method. And that actually got me thinking about a production line and this idea that, you know, you have this mechanism for, you know, producing some kind of material or, like, good or something like that. And it's just there and waiting and ready [laughs] for you to, like, kind of ask for it, like, what it needs to do. And you can do that, like, sometimes in batches, right? If you are asking for like, "Okay, I want a thousand units," and then the production line goes to work [laughs]. But yeah, that was another one of those things where I'm like, wow, they really, I think, came up with a cool method name that evoked, like, an image in my head. JOËL: That's the power of naming, right?
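A small sketch of the two ideas above, assuming nothing beyond Ruby's standard library: Enumerator.produce (available since Ruby 2.7) builds the kind of infinite "production line" Stephanie describes, and chaining through lazy keeps traversal methods like map and select from trying to walk the whole sequence up front.

```ruby
# An infinite sequence: each value is derived from the previous one,
# and nothing is calculated until someone asks for it.
doubles = Enumerator.produce(1) { |n| n * 2 }

doubles.first(5)   # => [1, 2, 4, 8, 16]

# Without .lazy, map would try to traverse the entire (infinite) sequence.
# With .lazy, each value flows through the whole chain before the next one
# is produced, so first(3) stops the work early.
doubles.lazy.map { |n| n + 1 }.select(&:odd?).first(3)   # => [3, 5, 9]
```

The same pattern covers the chunked database reads mentioned here: wrap the batched fetching inside a custom enumerator, and callers can treat it like one long collection without ever holding more than a batch in memory.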
And I think it's interesting you've mentioned twice how going through the method names on enumerator and finding different method names all of a sudden, like, turned on a light bulb in your mind. So, if you're naming things well, it can be incredibly useful for users of your library to pick up on what you're trying to do. So, I want to circle back to something that you mentioned earlier, the idea of elementary school quizzes where you have to, like, figure out the next item in the sequence. Because that, for me, is very similar to my mental model: the idea that an enumerator is a sequence generator. So, instead of thinking of it as, oh, it's like an array or it's some kind of collection, instead, think of it as a robot that I can just ask it, hey, give me a value, and it will give me a value. And then, it will, like, keep doing that as long as I keep asking it for it. And those values, you know, they could be totally random. You can build one of those. But you can also have it so that the values sort of come from a sequence. It's not like an array where you're like, oh, I'm going to, like, predefine an array of, I don't know, the Fibonacci sequence, and when someone asks me for the third value, I'll just go and read that third value from the array. Instead, it knows the algorithm, and it just says, "Oh, you want the next value in the Fibonacci sequence? Let me calculate it. Here it is. Oh, you want the next value? Here it is." And so, thinking from that perspective helped me really come to terms with the concept that values really do get calculated just in time. It's not really a collection. It's an object that can give you new values if you ask it. STEPHANIE: Yeah, okay. That is making a lot more sense kind of in conjunction with the lazy list model that you shared earlier, and even a little bit with the production line that I was kind of sharing where it's like, you know, in this case, kind of, it's, like, the potential for a value, right? JOËL: Right, exactly. And, you know, these are all mental models that converge on the same ideas because they're all just slightly different perspectives on what the same object does. And so, there is going to be some overlap, some converging between all of them. I have another fun one. Can I throw it at you? STEPHANIE: Please. JOËL: This one's a little bit different, and it's the idea that enumerators are a tool to bring your own iteration to a collection. So, imagine a situation where you're building your own, let's say, binary tree implementation. And there are multiple ways to traverse through a binary tree. In particular, let's say you're doing depth-first search. There are sort of three classic ways to traverse that are called pre-order, post-order, and in-order traversals. And it really is just sort of what order do you visit all the children in your tree? Now, the point of a collection, oftentimes, is you need a way to iterate through it. And a classic solution would be to include enumerable, the module. In order to do that, you have to define a way to iterate through your collection. You call that each. And then, enumerable just gives you all the other nice things for free. The question is, though, for something like a tree where there are multiple valid ways to traverse, which one do you pick to make it the each that gets sort of all the enumerable goodies, and then the others are just, like, random methods you've defined? 
Because if you define, let's say, pre-order traversal as each, now your detect and select and all those are going to work in pre-order, but the others are not going to get that. So, if you map over a tree, you're forced to map over in pre-order because that's what the library author chose. But what if you want to map over a tree in post-order or in-order? STEPHANIE: Yeah, well, I'm guessing that here's where enumerator comes in handy [laughs]. JOËL: Yes. The approach here is instead of designating sort of one of those traversals as the sort of blessed traversal that gets to have enumerable; you build three of these, one for each of these traversals. And then, what's really nice is that because enumerators are themselves enumerable, they have map and select and all of these things built in. Now you can do something like mytree dot preorder dot map or mytree dot postorder dot map. And you get all the goodies for free, but the users of your library get to basically choose which traversal they want to have. As a library author, you're not forced to pick ahead of time and sort of choose; this is the one I'm going to have. You sort of bring your own traversal by providing an enumerator, and then everything else just kind of falls into place. STEPHANIE: Bring Your Own Traversal (BYOT) [laughter]. I like it. Yeah, that's cool. I can see how that would be really handy. I have not yet encountered a situation where I needed to get that deep into how my iteration is traversed, but that's really interesting. And, I mean, I can start even imagining, like, having an each method defined in these different ways, and then all of that being able to be composed with some of the other...just other methods. And now you have, like, so many different ways to perhaps, like, help, you know, different performance use cases. JOËL: Yeah, it can be performance. I often tend to think of enumerator as a performance thing because of its sort of lazy properties because; it allows you to sort of stream or chunk data that you're working with. But in the case of this mental model of the Bring Your Own Traversal, it actually is more about flexibility and having sort of the beauty of Ruby without having to compromise on, oh, I have to pick a single way to traverse a collection. STEPHANIE: But I really appreciate kind of this discussion about enumerator because this was previously, like, I don't think I have really ever used the class itself to solve a problem, but now I feel a lot more equipped to do so with a couple of the different kind of perspectives. And I think what they helped me do is just prime myself. If I see a problem that might benefit from something being iterated in a lazy way, like, being like, oh, I remember this thing, this mental model. Now I can go kind of look at the documentation for how to use it. And yeah, like, I don't know how I would have stumbled across, like, reaching for it otherwise. JOËL: That's a really interesting thing to notice because we've been talking a lot about how mental models can be a tool for understanding. But once you build an understanding, even though it's somewhat fuzzy, they're also a great tool for sort of recall. So, not only are you thinking, okay, well, this mental model says enumerators are kind of like this, or they function in this way. On the flip side of it, you can say, "Well, lazy evaluation problems are often enumerator problems. Like, streaming or chunked data problems are often enumerator problems. Multiple traversals are enumerator problems." 
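As a rough illustration of this "bring your own traversal" idea (the Node class and method names here are hypothetical, not from a real library), each traversal returns its own Enumerator, and because enumerators are themselves enumerable, every traversal gets map, select, detect, and friends for free:

```ruby
class Node
  attr_reader :value, :left, :right

  def initialize(value, left: nil, right: nil)
    @value = value
    @left = left
    @right = right
  end

  # Pre-order: yield the node, then its left and right subtrees.
  def preorder
    Enumerator.new do |yielder|
      walk_preorder(self, yielder)
    end
  end

  # Post-order: yield both subtrees, then the node itself.
  def postorder
    Enumerator.new do |yielder|
      walk_postorder(self, yielder)
    end
  end

  private

  def walk_preorder(node, yielder)
    return if node.nil?

    yielder << node.value
    walk_preorder(node.left, yielder)
    walk_preorder(node.right, yielder)
  end

  def walk_postorder(node, yielder)
    return if node.nil?

    walk_postorder(node.left, yielder)
    walk_postorder(node.right, yielder)
    yielder << node.value
  end
end

tree = Node.new(1, left: Node.new(2), right: Node.new(3))

# Callers pick the traversal; no single ordering gets blessed as each.
tree.preorder.to_a                 # => [1, 2, 3]
tree.postorder.map { |v| v * 10 }  # => [20, 30, 10]
```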
So, now, even though you don't, like, fully understand it in your mind, you've got that recall where you can enter it, where you can come across that problem, and immediately you're like, oh, I'm dealing with multiple traversals here. I don't remember exactly how, but somehow, in my mind, I've got a connection that says, "Enumerators are a solution for this. Let me dig into that." STEPHANIE: Yeah, especially as an alternative to where I would normally reach for something...a more kind of common enumerable method. Because I definitely know that feeling of like, oh, like, I wish it could just, like, do this a little bit differently, you know. And it turns out that, you know, something like that probably exists already. I just needed to know what it was [laughs]. JOËL: On that theme of I wish that I could have something that behaved just a little bit more...like, I'm doing something slightly weird, and I wish they would behave more, like, just plain Ruby does normally with my, like, collections I'm familiar with. I'm going to pitch a talk that I gave at RubyConf Mini called "Teaching Ruby to Count." Some of these mental models actually showed up there. But the whole idea is like, oh, if you're bringing in sort of more custom objects and all of that, how can you just tweak them a little bit so that they're just as joyful to use and interact with as arrays, and numbers, and ranges? And they just sort of fit into that beauty of Ruby that we get out of the box. STEPHANIE: Awesome. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Remote Ruby
Railsconf and Tech Debt

Remote Ruby

Play Episode Listen Later Jun 7, 2024 50:05 Transcription Available


In this episode, Jason and Chris chat about their experiences at various RailsConfs and RubyConfs. Then, they have deeper discussions on topics like transitioning from Single Table Inheritance (STI) to delegated types in coding, addressing technical debt in product development, and the challenges and strategies of implementing subscription and one-time payment models. Additionally, there's a mention of the 2024 Ruby on Rails Community Survey at Planet Argon that you can check out now. Hit download now to hear more!
Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Ruby for All
RailsConf 2024 Crossover — Ruby Podcasters Live in Detroit

Ruby for All

Play Episode Listen Later Jun 6, 2024 31:23


In this special crossover episode recorded live from RailsConf 2024 in Detroit, join us for a unique gathering of prominent Ruby podcasters. Julie teams up with Elise from the 'Ruby on Rails' podcast, Jason from 'Code with Jason,' Joël from 'The Bike Shed,' and Drew from 'Code and the Coding Coders Who Code It.' The group discusses their experiences at RailsConf, including workshops, talks about Test Driven Development (TDD), and building dynamic applications with Turbo. They delve into the implications of RailsConf being discontinued after 2025, the thriving local Ruby conference scene, and share candid moments about their interactions with the community. Additionally, they touch upon diverse topics such as Detroit-style pizza, hot dog eating capacities, and food opinions, blending technical insights with light-hearted banter. The episode concludes with gratitude for the well-coordinated event and excitement for future Ruby gatherings. Enjoy!
[00:00:30] Live from RailsConf Detroit
[00:01:04] Meet the Hosts
[00:01:38] Conference Highlights and Workshops
[00:07:21] The Future of RailsConf
[00:12:10] Community Interactions and Podcasting
[00:14:21] Exploring Detroit
[00:17:30] Exploring Unique Pizza Toppings
[00:18:02] Pittsburgh's Pierogi Pizza
[00:18:20] The Versatility of Pizza
[00:19:17] Controversial Pizza Opinions
[00:20:43] Coney Island Hot Dogs in Detroit
[00:21:19] Hot Dog Eating Contest
[00:21:39] Food Preferences and Eating Habits
[00:26:06] Snail Mail Programming Newsletter
[00:27:24] Conference Highlights and Expectations
[00:30:50] Wrapping Up the Podcast
Panelists: Julie J., Elise Shaffer, Jason Swett, Drew Bragg, Joël Quenneville
Sponsors: Honeybadger, GoRails
Links: Julie J. Twitter, Julie J. Website, Drew Bragg Twitter, Code and the Coding Coders who Code it Podcast with Drew Bragg, Jason Swett Twitter, Code with Jason Website, Joël Quenneville Twitter, Joël Quenneville Website, The Bike Shed Podcast, Elise Shaffer Website, The Ruby on Rails Podcast, RailsConf 2024

Code and the Coding Coders who Code it
Episode 35 - Josh Wood

Code and the Coding Coders who Code it

Play Episode Listen Later Jun 4, 2024 38:00


Ever wondered how a Ruby developer transitions into a successful entrepreneur? Join us as Josh Wood, co-founder of HoneyBadger.io, shares his fascinating journey. From creating notable projects like the Heya gem and Exceptional Creatures to spearheading the launch of HoneyBadger Insights, Josh offers an insider's view of the Ruby community and his experiences at RailsConf. Listen to his unique perspective on balancing developer relations, marketing, and the dynamic energy that fuels his passion for Ruby development.
Curious about the latest trends in Ruby conferences? This episode dives into the vibrant world of Ruby events across the U.S. and Europe. We discuss the allure of gatherings like Blue Ridge and Rocky Mountain Ruby, along with smaller, community-driven meetups such as Ruby on Trails and Rawhide Ruby. Learn about the shift in Ruby Central's focus from organizing RailsConf to empowering regional conferences and ponder the potential rise of new events in cities like Philadelphia. Josh and his co-founder Ben Curtis's collaborative dynamics and quarterly strategy meetings are also on the agenda.
What does it take to reposition a well-known error tracker into a comprehensive application performance monitoring tool? Josh reveals the strategic shifts at HoneyBadger, unpacking the challenges of marketing and design from the vantage point of a technical founder. Hear about the creative process behind naming projects, the nostalgic joy of using older technologies, and the interplay between personal hobbies and professional life. This episode promises a rich blend of insights and stories that will captivate anyone interested in the intersection of technology, entrepreneurship, and community culture.
Send us some love.
Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Support the Show.
Ready to start your own podcast? This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Remote Ruby
RailsConf 2024 Recap with the GoRails crew

Remote Ruby

Play Episode Listen Later May 31, 2024 45:45


In this episode of Remote Ruby, host Chris is joined by guests Kent Crutchfield and Collin Jilbert, sharing their experiences and reflections from the recent RailsConf in Detroit, MI. They discuss various aspects of the conference, including the engaging talks, the announcement of RailsConf's impending conclusion in favor of focusing on RubyConf and regional events, and their personal interactions with other attendees. The episode highlights how RailsConf facilitated meaningful community interactions, supported professional growth through the Scholars and Guides Program, and provided insights into the practical applications and potential of Ruby on Rails technology, along with acknowledgements of the hard work behind RailsConf's organization and a call to continue supporting Ruby Central. Press download to hear much more!
Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

The Bike Shed
427: RailsConf Recap and Conversing About Coupling

The Bike Shed

Play Episode Listen Later May 28, 2024 37:03


Joël and Stephanie talk RailsConf! (https://railsconf.org/). Joël shares how he performed as a D&D character, Glittersense the gnome, to make his talk on Turbo features entertaining and interactive. Stephanie's talk focused on addressing test pain by connecting it to code coupling, offering practical insights and solutions. They agree on the importance of continuous improvement as speakers and developers and trying new approaches in talks and code design, and recommend Jared Norman's RailsConf talk on design patterns, too! That One Thing: Reduce Coupling for More Scalable and Sustainable Software (https://www.informit.com/articles/article.aspx?p=2222816) Connascence.io (https://connascence.io/) Connascence as a vocabulary to discuss coupling (https://thoughtbot.com/blog/connascence-as-a-vocabulary-to-discuss-coupling) The value of specialized vocabulary (https://bikeshed.thoughtbot.com/356?t=0) Transcript: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I think I can speak for both of us and say what's new in our world is that you and I just came back from RailsConf in Detroit. JOËL: Yeah, we were there for, I guess, it's a three-day conference. Both of us were giving talks. STEPHANIE: Yeah. I don't think we've both spoken at a conference for at least a little over a year, so that was really fun kind of to catch up in person. And there was a whole crew of thoughtboters who were there. Yeah, I feel like we were hanging out, like, a lot [chuckles] all of last week, just seeing each other, talking about, you know, rehearsing our talks and spending time together on...there was, like, a hack day, and we were sitting at the table together. So, I feel like I'm totally caught up on everything that's new in your world, and that's it. That's the end of the show [laughs]. JOËL: On that note, shall we wrap up? STEPHANIE: [laughs] That would not be very fair to our listeners. [laughter] JOËL: Yeah. So, how was the conference speaking experience for you? STEPHANIE: Ooh, it was really great this year. I have not spoken at a RailsConf before, so this was actually, I think, a bigger stage than I had experienced before, and I had a great time. I met Ruby friends, new and old, and, yeah, I left feeling very gooeyed, and very energized, and just so grateful for the Rails community [laughs]. Yeah, I had a very lovely time, kind of being a little bit outside my normal life for a few days. And I think my favorite part about these things is just like, anywhere you go, you can kind of just have a shared interest with someone, and you can start a conversation with them. JOËL: That's really interesting. 
Do you find yourself just reaching out to strangers at conferences like this? Or do you tend to just hang out with the people that you know? STEPHANIE: Oh, I think a little bit of both. I like to get meals with people I know. But if I'm just hanging out in, like, the lobby or if I happen to get a seat for a talk and I'm sitting next to someone that I don't know, I find it quite easy to just be like, "Hi, like, I'm Stephanie. Are you excited for this talk?" Or, like, "What good talks have you seen recently?" There's an aspect of, like, the social butterfly that comes out of me when I'm at these things. Because I just don't get to have, like, easy access to, I don't know, people with, like, that shared interest or people who are willing to just have a conversation with you normally, I think. JOËL: Yeah, would you describe yourself more as an introvert or an extrovert? STEPHANIE: I am an extroverted introvert [laughter]. I feel like maybe that might be interpreted as a non-answer, but I think I lean more on the introvert side. But you know when you're with a group of people, and there's not, like, a very clear extrovert in that conversation, and then you're like, oh, I have to do the heavy [chuckles] lifting of the social lubrication [laughs] in this conversation, I can step into that role, reluctantly [laughs]. JOËL: Okay. I like the label that you used, the extrovert introvert, in that I enjoy social situations. I do well in social situations. But they also consume a lot of energy for me. I don't necessarily get sort of recharged by doing social events. So, people will be surprised when they find out that I tend to talk about myself as an introvert because, like, "Oh, but you're, like, you know, you're not awkward. You engage very well in different group situations." STEPHANIE: You have a podcast [laughs]. JOËL: And the truth is I enjoy those things, right? I really like social interaction, but it does, after a while, wear me out. STEPHANIE: Yeah, that makes sense. I did want to spend a little bit of time talking about the talk you gave at RailsConf this year: "Dungeons & Dragons & Rails." JOËL: I got to have a lot of fun with the theme. The actual content was introducing people to Turbo by building an interactive Dungeons & Dragons character sheet using vanilla Rails and a little bit of Turbo. So, we're not even writing any JavaScript. We're just using the Turbo helpers, a little bit of Action Cable to mimic something a little bit like...people who are in the know might be familiar with the site D&D Beyond, which is kind of the official D&D online character sheet website. Of course, it wasn't anywhere near as fancy because it's a 30-minute talk and showcasing different features, but that's what we were aiming for. STEPHANIE: Yeah, you know, you've talked a bit about giving talks on the show before, but I wanted to get into what made this one different because I think it could be fun for our listeners. [laughter] JOËL: The way I structured this talk so it has a theme. It's about Dungeons & Dragons, and we're building a character sheet. The way I wrote the talk was it's broken up into chapters. Each chapter is teaching a new feature in Turbo that I want to show off. In order to motivate learning each of these features...because I don't like to just say, "Oh, here's a thing that technology can do. Oh, here's a thing that technology can do." That's boring. You need a reason to learn that. So, I needed a reason to say, "We need to add this to a character sheet." 
So, every sort of chapter of the talk opens up with a little narrative portion. We're following this character, Glittersense, the gnome, and he's on adventures. And at different points in the adventures, he's going to do different types of rolls or need different stats and things. And so, when we reach the point in the adventure where we need that, we sort of freeze frame and then say, "Okay, let's add that as a feature to the character sheet." And then, oh no, it turns out that this feature is a little bit more complicated. We're going to have to learn a new Turbo feature to do that. Who would have guessed? And then, we learn a new Turbo feature together. And then, we go back to the narrative portion. The adventures of Glittersense continue. And then, oh no, we're going to need to add another feature to the character sheet. And that's sort of how the talk is structured. STEPHANIE: Yeah. And you did a really cool thing with the narrative portions, which was you basically performed as Glittersense, the gnome, voice and posture, and a lot of really great acting from you [laughs], in my opinion. JOËL: That is something that came out pretty late in the talk preparation. So, I knew I wanted this kind of alternating story and code structure. Then, like, the weekend before RailsConf, I'm running through my slide deck, and I realized, you know what? What if instead of narrating Glittersense's adventures, what if I went first person for those sections? Glittersense tells his own story. And then, from there, it wasn't a big jump to say, you know what? This is D&D. If I'm going first person and narrating, I really should do a voice. And this is a conversation I had with a couple of people at the speaker dinner. And, of course, everyone's like, "You should 100% do the voice." And I was really not feeling confident in my ability to pull it off. So, for the next two nights, because I was speaking on the third day, the next two nights at the conference, in the evenings, I'm in the hotel room in front of the mirror just practicing my gnome voice to try to get something that got the persona of Glittersense, the gnome, across to the audience. STEPHANIE: How would you describe the persona? JOËL: Very extra. STEPHANIE: [laughs] JOËL: Very high energy. STEPHANIE: Yes. The name Glittersense is very extra, after all. JOËL: [laughs]. I punctuated a lot of the things that he says with just high-pitched laughter. He's also...so, the framing device for all of this is that you're in a tavern listening to him tell his adventures. I wanted a little bit of the sense that Glittersense is maybe embellishing a little bit. I think it may be too much to say he's full of himself, but he's definitely making himself to be the hero of the story, and maybe making himself to be slightly cooler than he really was. STEPHANIE: Yeah. I definitely got, like, a little bit of eccentricity, too, from the persona. And you know when you just, I don't know, meet an older person who has, like, a lot of life experience, and they want to tell you about it [laughter], but you do kind of maybe have a little bit of suspicion around how much they're exaggerating [laughs]. But it was really fun. Everyone I talked to afterwards, like, loved it. And I got to share the little nugget that, like, oh yeah, and Joël only, like, started doing the voice, like, decided that he was going to do it two days ago. And they were just all really, like, blown away because it seemed so well practiced, and it was really fun. 
JOËL: I got to do something really fun, also, with physical space because Glittersense narrates his portion, sort of the story portions, but then the code portions where we're talking about Turbo, I'm talking in my own voice. And so, when I'm talking about Turbo, I'm standing at the lectern. And when I'm Glittersense, I'm kind of off to the side on the stage and doing the voice. And so, there's this almost, like, two worlds that are inhabited: one by Joël, the speaker, and one by Glittersense, the gnome. And it got to the point where I don't say or do anything. I only move from the lectern to the, like, portion of the stage where Glittersense lives. And the audience starts chuckling and, like, nothing has happened yet, like, no jokes have been told. No voice has happened. No slides have changed. But the anticipation, people know what's coming. STEPHANIE: Yeah. And I think the best part, what I really found just really fun and, I don't know, every time it happened, I just really enjoyed it, when you transitioned out of Glittersense, the gnome, and back to Joël because you were so nonchalant about it. You kind of, like, straighten up rather than having your little kind of crouchy gnome posture, and then just walk across back to the podium. And then, in your normal voice, go back to just, you know, sharing very...not necessarily dry, but just, like, straight to the point. "And this is, like, how you, you know, create a frame in [laughs] Turbo," as if nothing happened [laughs] when even just, like, you know, 20 seconds ago, you were just enthusing about, like, slaying the bandit, chieftain [laughter] known as Glittersense. JOËL: Uh-huh. I think, especially when I open, so I get introduced. I'm off stage. I walk onto the stage, and I'm immediately Glittersense. And I'm telling a story, and the intro goes on for, like, quite a while. It's a big story chunk. And then, at some point, I just walk over to the lectern, drop the voice, hit next slide, and it's my title slide. I'm just like, "Okay, now welcome to Dungeons & Dragons on Rails. We're going to build a character sheet together." STEPHANIE: Yeah, that's exactly the moment I'm thinking of. JOËL: The walking in as Glittersense and just immediately going to the voice caught everyone by surprise. And then, the, like, oh, he keeps going for this. Is the whole talk going to be like this? And then, the, like, just when you think, oh, he's really going for it, the, like, dropping it and going to the podium and title slide. It wasn't intended to be a funny moment, but I think the contrast and the fact that I just switched over was one of the biggest laughs I got. STEPHANIE: Yeah, I mean, I think that attests to how good the delivery of it was because that contrast was very felt. So, props to you. JOËL: I love the idea of, you know, the thought that you put into building a talk and, like, the narrative structure and the pedagogy of the stuff. And, I think, in this particular case, this is almost like a narrative approach called in media res, where you start kind of in the middle. You open your book, or your movie, or whatever in the middle of the story. And then, you kind of come back to the beginning at some point later. So, it starts with some kind of action scene that grabs your attention. So, in this case, my title slide is 10, 15 slides into the talk. We get immediately started with Glittersense and his adventures. 
And then, once we're sort of all bought into this world, then we move to the title slide and talk about, okay, we're here to build a character sheet and all that stuff. And I think that it wouldn't have had the same impact if I'd, like, opened with that and then gone into Glittersense's adventures. And that's something that was not the case at the beginning. I really reworked the talk to make it in that order. And I think that the talk had a lot more impact for doing that. STEPHANIE: Yeah, definitely. I guess I also just wanted to point out that this is very different from all your other talks. And I think it's really cool that, you know, you are a veteran speaker, but you still find ways to do something new and try something that you've never done before, and yeah, find ways, new ways to, like, speak and engage people and teach. I don't know, do you have just any thoughts about why or how you got into a position to be like, "Oh, you know, I'm going to do something super different this time around" [laughs]? JOËL: So, every talk I give, I try to do something new, something different, to push myself as a speaker to get better. That might be in the writing of the talk; that might be in the delivery. More recently, I've been trying to do more with dynamic presence on stage. So, when I spoke at RubyConf San Diego, I was trying to not just stand at the lectern but to learn to be able to give my talk while also, you know, walking around the stage, looking at the audience, making pauses where it's necessary, not to just be so into the delivery of the talk by just standing at the podium and, like, going through my deck, which is a small thing but I think is an area I wanted to improve in. This time, I was playing around with some more narrative framing and ended up, yeah, like, pushing it to an extreme. And it works with the theme because inhabiting a character and role-playing is the core part of D&D. Not everybody plays a D&D character by doing a voice. You are a little bit extra if you do that. But it's not uncommon for people to do a voice. And so, it kind of fit perfectly with my theme. I just needed to get the self-confidence to do it. So, thank you to everyone at the speaker dinner that was like, "No, you totally got this. You should do this," because I was feeling very unsure. STEPHANIE: It really paid off, so... JOËL: I'd like to circle back to your talk, though. So, you gave, basically, the first talk of the conference. You were the first session after the keynote. A theme that came up multiple times in your talk was this idea of coupling and how it affects different parts of our code and, particularly the way that we structure tests or the way that we feel test pain. How did you, when you were prepping this talk, discover that theme and decide to lift it up? Was that something that you knew ahead of time you wanted to talk about, or did it just sort of emerge as part of the talk preparation process? STEPHANIE: That's a really great question, and I'm glad you picked up on that. So, my talk was called: "So, Writing Tests Feels Painful. What Now?" Originally, when I came up with this idea, it actually started with coupling. I realized that I wanted to give a talk about coupling because it's just something that I was struggling with or, like, had seen other people struggle with and really wanting kind of a discrete resource, wanting to provide that. But as I was just thinking about it, I was like, oh, like, there are so many different ways that this could go. 
On one hand, it was a very like important topic to me, but also maybe too big of a topic. And so, I actually, like, kind of put that on the back burner. And it wasn't until later when I connected it to another...it wasn't necessarily different at all, but just, like, an extension of this idea is, oh, like, people are struggling with coupling in tests or, like, it manifests in tests. And so, I thought maybe that could be the angle that I took on this topic that kind of gave me a little bit more focus. And I didn't even end up saying like, "Yeah, this talk was, like, born out of just, you know, wrestling with coupling or anything like that." So, it's cool, to me, that you picked up on it as a theme because it was...I had, you know, ended up not being super explicit about it, but it was certainly, like, a thing that was driving the content from my perspective. JOËL: Interesting. So, it started as a coupling talk and then got sort of focused through the lens of testing. STEPHANIE: Yeah. And I think there was a part of me that was like, you know, I don't know if I could just teach the concept of coupling, like, by itself without the framing of testing for people who this is, like, a new concept for them. I realized that maybe it would be more effective to be like, "Hey, like, have you experienced test pain? You know, have you had to mock out a billion objects or changed, you know, made one change and then had to fix, like, a million tests subsequently? Then this talk is for you." And then weave in the idea of coupling in it to kind of start to help people feel familiar with it or just, like, identify it without as much, like, jargon as kind of I've seen when I've tried to figure out, like, how to manage it. JOËL: It's interesting because I think it gives you a, like, concrete, valuable thing to optimize for as opposed to, like, hey, let's lower coupling because then you're writing, you know, quote, unquote, "better code." And you get to feel better about yourself as a programmer because you're doing things the, quote, unquote, "right way." That's very kind of hand-wavy, and I think sometimes leads people down a bad path where they're optimizing things that they shouldn't be. But the tests give you this very concrete way to say, "Hey, we're not just trying to reach the, like, low score record for the app in terms of coupling. We're trying to reduce test pain. Tests are painful. And that pain is telling us something. It's telling us that we've crossed some sort of threshold for coupling. Let's find ways to reduce it, not so that we can feel good about ourselves, but so that our tests are actually manageable." STEPHANIE: Yeah, I am really glad you picked up on that, too, because I feel the exact same way when someone just tells me to decouple something or, like, makes a note that, like, oh, this feels really coupled. I don't know what that means necessarily. And it's not very convincing to just be like, "Oh, you should write loosely coupled code [laughs]," at least for me. What you said just now, it's like, it's not to feel good about ourselves, you know, to write code that way, but, actually, to just feel good about our code, period [laughs]. And, yeah, finding that validation through just, like, actually working with code that is easier to change that is the goal, not necessarily to, yeah, kind of pursue some totally subjective, like, metric. JOËL: So, one of the kinds of coupling that you called out, I think, was where you hardcode a class name of some other class in your object. 
And that feels, like, really sort of innocuous. Like, of course, my objects can talk to other objects. And maybe I want to, like, refer to a class somewhere. Why is that such a like tricky piece of coupling to work with? STEPHANIE: It's not necessarily intentional sometimes. Like, you just do it because you're like, well, I need access to this class somewhere, and I happen to already be in this file. So, why not just hard-code it here? I do think it's a little tricky because the file that you're writing might be, like, very far down in, like, your code flow or, like, your code path, like, very far from, like, a controller or any kind of entry point into your system, at least based on what I've seen in a lot of modern Rails apps. And so, I think that coupling gets really, really obscured. I have found that, like, if I have to kind of write a more, like, a higher level test, like, maybe a request spec or something, there are times when I'm, like, having to deal with a lot of classes just to set stuff up in a test like that that I didn't think I would have to [chuckles] when I first went about trying to just be like, oh, like, let's just figure out how to get a 200 response [laughs] from this request. So, you're really burying perhaps the things that are needed to set up, like, that full path of execution. And sometimes, it only comes out when you're writing a test for it. JOËL: And you mentioned briefly, in passing, the idea that oftentimes this sort of coupling manifests as a lot of extra test setup because your object that you're trying to test now also needs all these other things that are related in order to be tested. But sometimes even when you hard code a class, though, you can't even just say, "Oh, I want this particular user or something returned." So, you have to then do something like allow this class to receive class method and return, and now you're stubbing. And I don't know how you feel about stubs in RSpec. I always treat them a little bit like a code smell in the like classic sense of it's not necessarily bad, but maybe pause, take a look, and ask yourself, "Why is that there, and should I do things differently?" STEPHANIE: Yeah. I ended up having, like, a lot of examples of stubbing in my example because the code had just been set up where that was the only way that you could access those collaborators, essentially, to, like, make an assertion on them, or have them do something different because you actually needed to go into a different path, right? And I was like, yeah, this should feel weird. You should feel a little bad [laughs] or at least, you know, kind of just pay attention to that feeling, even if you can't really do anything about it in that particular instance. But on the flip side, you know, it's like, yes, it feels a bit strange, you know, but it's not all bad, right? Like, you're kind of learning like, oh, hey, like, I am coupled to this hard-coded class because I am needing to stub, like, a class method that returns it, or that constructs it. And at least you've exposed that, you know, for yourself. One thing that I was running into a lot in my example, too, was that those things, like, weren't obvious when you were just reading maybe, like, the public methods and trying to figure out what was happening in them because they were wrapped in private methods. 
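(For illustration, here is a minimal Ruby sketch of the pattern being described, with hypothetical class names: the public method reads cleanly because the hard-coded collaborator is tucked away in a private method, and the spec ends up stubbing a class method on that constant to control it.)

# Hypothetical collaborator; picture an ActiveRecord model in a real app.
class TimeEntry
  def self.for_user(_user)
    raise NotImplementedError, "hits the database in the real app"
  end
end

class ReportGenerator
  def generate(user)
    format_entries(fetch_entries(user))
  end

  private

  # The coupling to TimeEntry lives down here; readers of #generate won't
  # see it, but every test still has to account for it.
  def fetch_entries(user)
    TimeEntry.for_user(user)
  end

  def format_entries(entries)
    entries.map { |entry| "#{entry.date}: #{entry.hours}h" }
  end
end

RSpec.describe ReportGenerator do
  it "formats each entry" do
    user = double("user")
    entry = double("entry", date: "2024-05-07", hours: 3)
    # Stubbing a class method on a hard-coded constant: the "pause and take
    # a look" signal discussed above.
    allow(TimeEntry).to receive(:for_user).with(user).and_return([entry])

    expect(ReportGenerator.new.generate(user)).to eq(["2024-05-07: 3h"])
  end
end
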
I was a little bit conflicted about this because there were times when it was already just a single method call, but then it was just kind of wrapped in a private method that actually hid [laughs] the things, like all the dependencies that were passed as arguments. And I found that to be, sure, it looks kind of cleaner. But then all you need to do is scroll down [laughs], and then you're like, oh, actually, there's all these other things involved, but it was kind of hidden away for me. And I found that, actually, like, at least when I actually needed to change things, less helpful than I imagine what the, you know, code author intended. Do you have any thoughts about hiding details like that? JOËL: I'm kind of a big fan. STEPHANIE: Hmmm. JOËL: The general idea, I think, is called the single level of abstraction principle. Whatever sort of public method that you're calling is often implemented in terms of...let's say it does a few different things. It's implemented in terms of, like, these sort of high-level concepts. So, whoever is reading the public method doesn't need to like care about the details of how each step is implemented. So, maybe you're fetching something from an API, and then you're making a database call, and then you're doing some transformation and creating some new objects from it. Having all of the, like, HTTP calls and the ActiveRecord stuff and the, like, transformation all in the public method, yes, there's a lot of complexity happening there, and it makes that obvious. But it also makes it really hard to get a sense of what is happening. So, I like to say, "Hey, there are four steps. Let's wrap them all each in a private method then you can call all of those in the public method." The public method now sort of reads like a very simple sort of script. First, fetch data from the HTTP API, then fetch some data from the database, then apply this transformation, then create this object. And if I'm mostly caring about what this object does and not the how let's say I'm building some other objects that interact with this, that is the information I want to know. Where I care about the actual implementation of, oh, well, exactly how is the ActiveRecord stuff done when I'm doing internal changes to the object, that's when I care about those private methods. I think where it gets tricky, and I think that's the point that you were bringing up, is that if you write code in that way, it has to change the heuristics of how you read code to detect complexity. Because, oftentimes, I think a very classic heuristic for code complexity is just line length. If you have a 50-line method, probably there's a lot of complexity there. Maybe there's a lot of coupling. If it's a four-line method that is written at a high level of abstraction that just calls out to private methods, you scan over. You're like, oh, nice and clean. Nothing to see here. Move on. And so, that heuristic doesn't really hold up in a codebase where you're applying this single level of abstraction. Do you think that lines up with your experience? STEPHANIE: Hmm. As I was listening to you, I was like, yeah, like, that makes total sense to me. But then I also clearly disagreed a little bit [laughs] in my initial...kind of what I was saying initially. And I think it's because that single layer of abstraction was not very well defined. JOËL: Hmm. That's fair. STEPHANIE: Yeah. Where, in fact, it was actually misleading. Like, it wanted to be at that level of abstraction, but it really wasn't. 
Like, it was operating on things at, like, a lower level and wasn't designed with that kind of readability in mind. So, it was more, like, it was just hiding stuff a little bit, at least for me. And, I think, it certainly would have taken, like, more work to figure out what that code, like, really was meant to convey. It might have taken some refactoring to coalesce at that single level. And that was essentially kind of what I was showing in my talk as, like, how to get to saying, like, "Hey, we actually are operating in the lower level, but I don't think we need to." There was some amount of, like, looking at all of the how to figure out, like, oh, maybe these things we don't even need to expose in this class. And we kind of got to a place where those details weren't, like, needed in that class at all. So, it's one of those things where it's harder than it sounds [laughs]. JOËL: It's definitely an art. STEPHANIE: Yeah. JOËL: And I think what you're saying about some of the coupling being, like, scattered throughout the class, it's something that I see a lot with situations where you're coupled, not so much to, like, a single class, but to something side effectful. So, you're building some kind of integration with a third-party API, and you're going to have to make a lot of HTTP calls. And each of those might be individually simple, and they're all sort of maybe in different private methods or whatever, or they're interspersed among a larger chunk of logic. And that makes your tests really complicated. But there's no, like, one place you can point at and be like, ooh, that's the one place where there's a lot of complexity. What's happening here, though, is that your business object that's doing stuff is coupled to the network, and that coupling is going to force you to do some stubbing. It's going to force you to deal with a bunch of side effects that are non-deterministic in your code. And you used the word coalesce earlier that I really liked because I think that's often a situation where you do have to stand back and say, "Look, there's a lot of HTTP going on here. What if I coalesced it all into an object? Now I have two objects: one that's responsible for business logic, and one that's responsible for just the HTTP calls." And, all of a sudden, the tests just totally simplify. And we've removed some coupling, but that's not something that you would have seen just from reading the code. Because, as you were saying, it's sort of scattered in little bits and pieces throughout your file that don't necessarily catch your eye. STEPHANIE: Yeah. Which brings me to a blog post that I had found a lot of inspiration from in the talk that I'll link. It's called "That One Thing: Reduce Coupling for More Scalable and Sustainable Software." But it's actually about tests [laughs], even though it doesn't make an appearance in the title of the blog post at all. But this is where I kind of got the idea of necessary versus unnecessary coupling in test. Because I had never thought about how, yeah, like, when you write a test, you are very correctly coupling yourself to at least the method and class under test [laughs], if not also the arguments, right? Or anything else needed to construct what you're testing. And literally having that listed out for me in this blog post I think it's a...they use some examples in Java. And so, there's, like, a little bit more [laughs] setup involved. 
But I think they're like, yeah, these are six things that, like, it's mostly fine if you're coupled to these because that's kind of what needs to happen in a test. But, like, even having something to compare a test I wrote to just, like, okay, these are the things I know I need. And then, you can start to see when you've diverged from that list, when you are finding yourself coupled to some internals of your class. I really...that was actually, like, really helpful for me because, as we talked about earlier, like, it can be kind of communicated so abstractly. But here is, like, a very clear heuristic for when you should at least, like, start to pay attention or be like, oh, this is something that was needed to get the test to run but is now starting to feel a little unnecessary because it's not on this list. JOËL: That list reminds me, or the idea of a list of things to check out for when thinking about coupling, reminds me of the concept of connascence, which is a fancy word for almost a, like, categorization of different types of coupling because coupling comes in different flavors, some of which are tighter forms of coupling than others. And so, having that vocabulary has been really helpful for me when I'm looking at PRs and code review, or even when I'm refactoring my own code. Kind of like that list that you mentioned that you have, now I have some heuristics to look at that and say, "Oh, can I go from a connascence of position to a connascence of naming, and does that help me?" STEPHANIE: Yeah, I like that you mentioned the positional connascence because I also came across a really great metaphor for kind of things that need to change together, like, when that makes sense. And it was basically the idea of a dishwasher and a laundry machine [laughs]. I wish I could recall, like, what book this was from. But it was basically like, oh yeah, like, in theory, you're washing two things. So, maybe they are similar, but then you're like, no, actually, you want these to be a little bit separate because, you know, you don't want to wash your dishes and your clothes in the same machine. I don't know, maybe that exists [laughs], but I don't think it would do a very good job for either goal. And I think that was really helpful, for me, in imagining, like, the difference between kind of coupling and cohesion, like things that...even just imagining, like, kind of where I'm doing those things in the house, right? It's like, okay, that lives in a separate room. And, like, the kitchen is for the dishes, and that could be like, you know, a module if you will. And, like, laundry happens in the laundry room, and how to kind of just separate those things, even though they also do share some qualities, too. Like, they're both appliances, right? And so, that's the way that they are similar, but they're not the same. JOËL: You just mentioned the sort of keyword cohesion. And for our listeners who are not familiar with that term, it refers to an object sort of having one thing that it does well. Like, everything in that class sort of works towards the same goal, kind of similar to the idea of the single responsibility principle. So, in my earlier example, where we're sort of interspersing some business logic, a lot of HTTP requests, and pulling out an object that's focused on HTTP, like everything is based around that, now that object has higher cohesion because it's all doing one thing. 
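(As another hedged sketch with made-up names: the HTTP details coalesce into a gateway object so that both objects end up cohesive, and the business rule takes the gateway as a dependency, so its test can pass in a stand-in instead of stubbing the network. Passing the gateway through a keyword argument is also a small example of trading connascence of position for connascence of naming.)

require "json"
require "net/http"

# All of the HTTP knowledge lives in one cohesive place.
class WeatherGateway
  def initialize(base_url: "https://weather.example.com")
    @base_url = base_url
  end

  def current_temperature(city)
    uri = URI("#{@base_url}/current?city=#{URI.encode_www_form_component(city)}")
    JSON.parse(Net::HTTP.get(uri)).fetch("temperature")
  end
end

# The business object is now only about the business rule.
class FrostWarning
  def initialize(gateway: WeatherGateway.new)
    @gateway = gateway
  end

  def needed?(city)
    @gateway.current_temperature(city) <= 0
  end
end

# In a spec, no HTTP stubbing is required; a stand-in gateway is enough:
#   gateway = instance_double(WeatherGateway, current_temperature: -3)
#   expect(FrostWarning.new(gateway: gateway).needed?("Detroit")).to be(true)
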
So, if you read classic object-oriented literature, the recommendations that you'll typically see are that objects should have high cohesion and low coupling. STEPHANIE: Yeah. Think of a dishwasher and a washing machine next time [laughs] you come across something like that. Because I feel like those are really great, like, real-life examples of that separation. JOËL: Did you go to Jared Norman's talk on the third day: "Undervalued: The Most Useful Design Pattern"? STEPHANIE: No, I didn't. Can you tell me about it? JOËL: It felt like he was addressing a lot of the same themes as you were but from more of a code perspective than a test perspective. Talking a lot about, again, forms of coupling, dependencies, and then, specifically, one of the tools that he focused on to reduce the coupling that we see is value objects and factory methods to construct those. So, for any of our listeners who, when the talks come out, watch Stephanie's talk and are like, "Wow, I would love to learn more about this," a great follow-up, Jared Norman's talk: "Undervalued: The Most Useful Design Pattern." STEPHANIE: Yeah, that's neat because I can see that being a solution to the hard code did class names that we were talking about earlier. And I like how that is kind of, like, a progressive lesson in coupling a little bit. I'm really glad you shared that talk with me because now I'm excited to watch it when it comes out. And in general, I just love learning new vocabulary or finding new ways to speak about this topic with clarity. So, if any of our listeners have just additional mental models for coupling [laughs] different metaphors, different household appliances [laughs], or something like that, I would love to know. JOËL: You would like that, given that our first episode together was about "The Value Of Specialized Vocabulary." STEPHANIE: Yeah, it's clearly undervalued. JOËL: Haha, I see what you did there. STEPHANIE: Thank you. Thank you very much [laughs]. JOËL: On that terrible/wonderful pun, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
426: Bringing "Our Selves" to Work

The Bike Shed

Play Episode Listen Later May 14, 2024 33:04


Joël shares his preparations for his RailsConf talk, which is D&D-themed and centered around a gnome character named Glittersense. Stephanie expresses her delight in creating pod-related puns within thoughtbot's internal team structure, like "cross-podination" for inter-pod meetings and the adorable observation that her pod resembles "three peas in a pod" when using the git co-authored-by feature. Together, Stephanie and Joël discuss bringing one's authentic self to work, balancing personal disclosure with professional boundaries, and fostering psychological safety. They highlight the value of shared interests and personal anecdotes in enhancing team cohesion, especially remotely, and stress the importance of an inclusive culture that respects individual preferences and boundaries. Transcript: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So, at the time of this recording, we're recording this the week before RailsConf. I've been working on some of the visuals for my RailsConf talk and leaning on AI to generate some of these. So, my talk is D&D-themed, and it's very narrative-based. We follow the adventures of this gnome named Glittersense throughout the talk as we learn about how to use Turbo to build a D&D character sheet. And so, I wanted the AI to generate images for me. And the problem I've had with a lot of AI-generated images is that you're like, okay, I need a gnome, you know, in a fight doing this, doing that. But then, like, every time, you get, like, totally different images. You're like, "Oh, I need an image where it's this," but then, like, the character is different in all the scenes, and there's no consistency. So, I've been leaning a little bit more into the memory aspect of ChatGPT, where you can sort of tell it, "Look, these are the things. Now, whenever I refer to Glittersense, whenever you draw an image, do it with these characteristics that we've established what the character looks like." Sometimes I'll have, like, a text conversation kind of, like, setting up the physical characteristics. And then, it's like, okay, now every time you draw him, draw him like this, or now every time you draw him, draw him with this particular piece of equipment that we've created. And so, leaning into that memory has allowed me to create a series of images that feel a little bit more consistent in a way that's been really interesting. STEPHANIE: Cool. Yeah, that makes sense because you are telling a story, right? And you need it to have a through line and the imagery be matching as you progress in your presentation. I actually don't know a lot about how that memory works. Does it persist across sessions? Do you have to do it all in one [laughs] go, or how does that work? 
JOËL: So, there's, like, a persistent chat. So, you can start sort of multiple conversations, but each conversation is its own thread with its own memory. And it will sort of keep track of certain things. And sometimes I'll even say, "Hey..." instead of, like, prompting it for something to get a response, you could prompt it to add things to its memory. So say like, "From now on, when I ask you these types of questions, I want you to respond in this way," or, "From now on, when I ask you to generate an image, I want it done in this format." So, for example, RailsConf requires all of their slides to be 16 by 9. If I want, like, a kind of cover image or, like, something full-screen, I need an image that is 16 by 9. So, one of the things I prompt the AI with is just, "From now on, whenever you generate an image, give me an image in 16:9 aspect ratio." STEPHANIE: Cool. I also was intrigued by your gnome's name, Glittersense. And I was wondering what the story behind that character is. JOËL: The story behind the name is that I was playing D&D with a friend who was this very kind of eclectic Dragonborn character. And I did some sort of valiant deed and got the name Glittersense bestowed upon me by this Dragonborn for having helped him out in some, like, cool way. So, that's a fun name. And so, when I was searching for a name for my character in this talk, I was like, you know what? Let's bring back Glittersense. I like that. I think it captures a little bit of, like, the wonder and the whimsy of a gnome. STEPHANIE: That's really cute. I like that a lot. JOËL: So, Stephanie, what's been new in your world? STEPHANIE: So, lately, I've been having a lot of fun with coming up with names of things. You know the saying how naming is one of the hardest things in software? Well, okay, I'm not actually going to talk about anything that I named very particularly well in my code, but I've been just coming up with a lot of puns. It's just, I don't know, my brain is kind of in that space. And one thing that...I can't recall if I have talked about this on the show before, but our team at thoughtbot is experimenting with kind of smaller sub-teams within it called pods. We have now kind of been split into pods with other people who are working on maybe similar client projects. I have been having some really good naming ideas around [laughs] pod-related puns. So, one thing that we did as part of this experiment was setting up meetings for pods to meet each other, and spend time together, and kind of share what each other was up to. And I was the first to coin the term cross-podination, kind of like cross-pollination. And I think I just, like, said it offhand one day, and then it caught on. And I was very pleasantly surprised to see that people just leaned into it and started naming those meetings cross-podination meetings. And then, another one that came about recently was my pod there's three of us in it, and we were pairing, or I guess it's not really called a pairing if there's three. We were mobbing or ensembling, whatever you want to call it. And sometimes we like to use the git co-authored-by feature where you can attribute, you know, commits to people that you worked on them with. And in GitHub when you, you know, add people's emails to the commit, you know, you see your little GitHub profile picture in a little circle. And when you have multiple people shared on a commit, it is just, like, squished together. And since we're a trio, I was like, "Oh, it's like we're, like, three peas in a pod." 
JOËL: [chuckles] STEPHANIE: And I realized that it was an excellent missed opportunity for our pod name. We're something else. But I am hereby reserving that name for the next pod that I am in. You heard it here first [laughs]. It looked exactly like just three snug little peas. And I, yeah, it was very cute. I was very delighted. And yeah, that's what's new for me. JOËL: I'll also point out the fact that you are currently talking on a podcast. STEPHANIE: Whoa, whoa. So, you and I are a pod [laughter]. We're a podcasting pod [laughs]. Wow, I didn't even think about that. My world is just pods right now [laughs], folks. JOËL: How do you feel about puns as an art form? STEPHANIE: [laughs] Wow, art form is a strong phrase to use. I don't hate them. I think it depends. Sometimes I will cringe, and other times I'm like, that's great. That's excellent. Yeah, I think it depends. But I guess, clearly, I'm in my pun era, so I've just accepted it. JOËL: Are you the kind of person who is, like, ashamed but secretly proud when you make a really good pun? STEPHANIE: Yeah, that's a very good way to describe it. I'm sure there are other people out there [laughs]. JOËL: What's interesting with puns, right? Like, some people love them, some people hate them. Some people really lean into them, like, that becomes almost, like, part of their personality. We had a former teammate who his...we made a custom Slack emoji with his face, and it was the pun emoji because he always had a good pun ready for any situation. And so, that's sort of a way that I feel like sometimes you get to bring an aspect of your personality or at least a persona to work. What parts of yourself do you like to bring to work? What parts do you like to maybe leave out? STEPHANIE: Yeah, I am really excited about this topic because I feel like it's a little bit evergreen, maybe was kind of a trendy thing to talk about in terms of team culture in the past couple of years, but this idea of bringing an authentic or whole self to work as, like, an ideal. And I don't know that I totally agree with that [laughs] because, like you said, sometimes you have a different kind of persona, or you have a kind of way that you want to present yourself at work. And that doesn't necessarily mean it's a bad thing. I personally like some kind of separation in terms of my work self and my rest of life self [laughs]. Yeah, I just think that should be fine. JOËL: So, you might secretly be the pun master, but you don't want your colleagues to know. STEPHANIE: [laughs] That's true. Or I save my puns only for work [laughter]. If I ever have, like, a shower thought where I think of a really good pun, I will, like, send a Slack message to myself to find [laughs] the perfect opportunity to use this pun in a meeting [laughs]. I don't actually do that, but that would be very funny. JOËL: I feel like there's probably a sense in which nobody is a hundred percent their authentic self or their full self in a work situation, you know, it varies by person. But I'm sure everybody, to a certain extent, has a professional persona that they inhabit during work hours. STEPHANIE: Yeah, and I like that the way we're talking about it, too, is a professional persona doesn't necessarily mean that you're just a little...matching kind of a business speak bot [laughs], where it's kind of devoid of personality, but just using all the right language in their emails [laughs] and the correct business jargon or whatever. 
To me, what is important is that people are able to choose how they show up or present themselves at work. That's, like, an active choice that they're making, not out of obligation or fear of consequences. You know, like, it's fine to be a little more private at work if that's just how you want to operate. And it's also fine to be more open about sharing things going on in your personal life. Because I've seen ways in which both have been more enforced or, like, there's pressure to perform one way or another. And that could mean, like, when people kind of encourage others to try to be more of themselves or, like, share more things about personal life. That's not always necessarily a good thing if it's not something that people are comfortable with. And I suspect that we have kind of pulled back a little bit from that, but there was certainly a time when that was a bit of an expectation. And I'm not sure that that was quite [chuckles] what we wanted to aim for in terms of just the modern workplace. JOËL: It is interesting because I think there can be some advantages to maybe building connection with people by sharing a little bit more about your life. But, again, if there's pressure to do it, that becomes really unwholesome. STEPHANIE: Yeah. Unwholesome is a good word to use. Like, I want that wholesome content [laughs] at work. And I actually have a couple of thoughts about how I prefer to share, like, just personal things with my team members. And I'm curious kind of where you fall on this as well. But a couple of things that our team does that I really like is we have a quarterly newsletter that one of our team leads puts together. She has an open call for submissions, and people just share any, like travel plans, any professional wins, any kind of personal life things that they want to share. People love talking about their home improvement adventures [laughs] on our team, which is really fun. And yeah, like, just share photos and a little blurb about what they've been up to. And this happens every quarter. And it's always such a delight to remember a little bit like, oh yeah, my co-workers have lives outside of work. But I really like that it's opt-in and also not that frequent, you know? It's kind of like, this is the time to share any like, special things that have happened in the past three months. And yeah, I think every time a new dispatch of it comes out, everyone kind of gets the warm and fuzzy feelings of appreciating their co-workers and what they've been up to. JOËL: Do you think that that kind of sharing sort of maybe helps personalize a little bit of our colleagues, especially because we're all remote and we're interacting with each other through a screen? STEPHANIE: Yes. Yeah. That's another good distinction. I think it is, like, a little more important that there are touch points like these when we are working remote because, yeah, the water cooler conversation just doesn't really happen nearly as much as it does when you're in an office. And I feel like that's the kind of thing that I would talk about at the water cooler [laughs]. It's like, "Oh yeah, I went to Disney World, or traveled for this conference, or I built new garden beds for my yard," just stuff like that. I don't know, I don't find that...like, when you're just communicating over Slack and email, there's not a good place for that kind of stuff. And that's why I really like the newsletter. 
JOËL: One thing that's interesting about the difference between in-person and remote is that, in person, a way that you can express personality in the office is you can do some things with your workspace. You might have some items on your desk that are of personal interest. And, you know, you might still do that when you're working remote, but those don't get captured by your webcam unless it's in your background. Your background you can get real creative with. But you can also, like, really curate that to, like, show practically nothing. Whereas if you were putting things on your desk in the office, there's kind of no way for your colleagues not to see that. So, you had to be...like, it had to be things that you were willing for everyone to see. But at the same time, sometimes it's nice to be able to say, hey, I'm going to put a touch of, like, things that are meaningful to me in my work life. STEPHANIE: Yeah, I really like that. I mean, Joël, your background is always these framed maps on the wall, hanging on the wall, and that is very you, I think. Did you kind of think about how they'll just be your background whenever you're in a meeting, or they just happened to be there? JOËL: So, these I had set up pre-pandemic. I like the décor. And then, when I started working from home in 2020, I was trying to figure out, like, where do I want to be to take meetings? And I was like, you know what? The math wall is pretty cool. I think that's going to be my background. I guess now it's almost become, like, a bit of a trademark. STEPHANIE: Yeah, I feel that. My trademark...I have a few because I like to move around when I take meetings. So, when I'm at my desk, it's the plants in my office. When I'm in my kitchen, it's either my jars [laughs]. So, I have, like, open shelving and just all of these jars of, you know, some of it is ingredients like nuts, and grains, and stuff like that, and some of it is just empty jars that I use for drinking water. So, I have my jar collection. And then, occasionally, if I'm sitting on the other side of the table [chuckles], all of my pots and pans are hanging in the background from above my stove. So, yeah, I'm the jars, pots, and plants person [laughs] at the company. JOËL: You know, we were talking earlier about the idea that it's harder to see your sort of workspace in a remote world. And I just remembered that we do a semi-regular...there's, like, a thread at thoughtbot where people just share pictures of their workspace, and it's opt-in. You don't have to put anything in there. But you get a little bit of, like, oh, the other side of the camera. That's pretty cool. STEPHANIE: Yeah, I love seeing those threads. And I think a lot of people in our industry are also gear nerds, so [laughter] they love to see people's, like, fancy monitor and keyboard setups, maybe some cool lighting, oh, like, wire organization [laughs]. JOËL: Cable management. STEPHANIE: Yep. Yep. Those are fun. And I actually think another one that we've lost since going remote is laptop stickers because that was such a great way for people to show some personality and things that they love, like programming stuff, maybe, like, you know, language stickers or organizations like thoughtbot stickers, too, and also, more personal stuff if they want. At a previous company, we were also remote, and someone came up with a really fun game where people anonymously submitted pictures of their laptop stickers. And we got together and tried to guess whose laptop belonged to who just based on the stickers. 
JOËL: Oh, that's fun. STEPHANIE: Yeah, that was really fun. I keep forgetting that I wanted to organize something like that for thoughtbot. But now I'm just thinking about it, and I feel the need to decorate my laptop with some stickers after this [laughs]. JOËL: One thing I do want to highlight, though, is the fact that several years back, when people were talking a lot about the importance of bringing your sort of authentic or whole self to work, one of the really valuable parts of that conversation was giving people the ability to do that, not forcing people to sort of hide parts of themselves, especially if they don't fit into a dominant culture or demographic, in order to be able to even function at work, right? That's a sort of key aspect of, I guess, basic inclusivity. And so, I think that's still a hundred percent true today. We want to build cultures that are inclusive, both in our in-person professional situations and for remote teams. STEPHANIE: Yeah, 100%. I think, for me, what I think is a good measure of that is, you know, how comfortable are people disagreeing at the company kind of in public or sharing an alternative perspective? Like, that should be okay and celebrated, even, and considered, you know, with equal weight as kind of what you're saying, the dominant identity or even just opinion. Like, especially in tech, I think people have very strongly held opinions, and when they're disagreed with...I've become a little skeptical of the idea of, like, this is how we do things here or, like, we don't do that. And I think that rather than sticking to a, like, stance like that, there's always room to incorporate, like, new approaches, new perspectives, new ways of thinking to a given problem. And that can only happen when people are comfortable with going there, you know, and kind of saying, like, "This is important to me," or, like, "This is how I feel about it." And that, in and of itself, is just equally valid [laughs] as whatever is taking the airtime currently. JOËL: That's really interesting because I feel like now you've leaned into almost the idea of psychological safety for a team. And if you're having to sort of repress or hide elements of the way you think, or maybe even sort of core elements of your identity to fit in with a team, that's not psychologically safe, and you can't have those deeper conversations. STEPHANIE: Yeah, 100%. I think it's two sides of the same coin, you know, it's like two ways of saying the same thing, that people should be able to conduct themselves in the way they choose to [laughs]. And I can't imagine anyone really disagreeing with a statement like that. JOËL: So, I know you choose to not always share everything about your life or sort of...I don't want to say bring your authentic self but, like, bring everything about yourself to the workplace. Do you have a sort of a heuristic for what you decide to share or not share? STEPHANIE: Yeah. I don't know if it's necessarily a heuristic so much as it's just what I do [laughs]. But I tend to do better with, like, smaller groups, and, actually, that's why I think pods has been working really well for me personally because I can share personal information just in a more intimate setting, which is helpful for me. And yeah, I tend to, like, find once, like, either Slack channels or Spaces, meetings are starting to get into the, like, 10, 11, 12 people territory is when I hold it back a little bit more, not because of any sort of, like, reason that I don't want to share. 
It's just, like, that's just not the venue for me. But I do love when other people are, like, open, even in, like, larger spaces like that. I appreciate when other people do it just to, you know, signal that it's okay [laughs]. And I enjoy throwing a reaction or responding in a thread about, you know, something that someone shared in a bigger channel. And I think that diversity is actually really helpful because it conveys that, like, there's different ways of existing online in your work environment and that they're all acceptable. What about you? How do you kind of choose where to share things about your personal life? JOËL: I think, kind of like you, I don't really have a heuristic. I just sort of go with gut feeling. I think I, sort of by nature, have always been maybe a little bit of having, like, separate professional and personal lives and keeping those a little bit more distinct. And, you know, there's some things that kind of cross over, like, oh, you know, I tried out this fun, new restaurant, or I did a cool activity over the weekend, or something like that. I think I've come to see that there can be a lot of value in sharing parts of yourself with other colleagues. And so, from time to time, I'll, like, maybe bring in something a little bit deeper. And, like you said, sometimes that's more easily done in a smaller context. And then yeah, for some things, it's like, okay, I'm going to share photos from a vacation in that, you know, quarterly newsletter. That's kind of fun. But also knowing that there's no pressure that's nice. STEPHANIE: Yeah. I think you're really good about finding the right avenues for that. I like, love when you show photos in the travel channel, even though I have that channel muted [laughs]. You'll, like, send me the link to the post in that channel. And yeah, I love that because it's a way for you to kind of, like, find the right place for it, and then also share it with any particular people if you choose to. JOËL: I think, also, personal connections can be a way to build deeper relationships, especially in smaller groups. And you can form deeper connections with colleagues over a particular project, or a particular technology, or a tech topic, or, you know, just a passion about mechanical keyboards, or something like that. But if you're people who chat kind of more on the regular for different things, maybe separate from a client project you're on or something like that, and you do find yourself exchanging a little bit more about, oh, you know, what you're doing in your life, or what are the things that are going on for you, that often does tend to build, I think, a deeper connection between colleagues, which can be really nice. STEPHANIE: Yeah. And I like that those relationships can also change. Like, there's different seasons in which you're more connected to some people and then less connected. Sometimes a colleague that you have shared interests with becomes someone that you kind of are in touch with more regularly, and then maybe you switch projects, and you aren't so much kind of as up to date. But, I don't know, I always think that there's, like, the right time for that kind of stuff, and it emerges. JOËL: I'm going to throw a bit of a buzzword at you, and I'd love to get your reaction. The idea of belonging, the feeling of belonging on a team, is that a good thing, something that we should seek out? 
And if so, how much of that is responsibility of, like, management or, like, a property of the team or the group to make you sort of feel that belonging? And how much of that is on you having to maybe disclose things about yourself or share a little bit of your personal life to, like, create that sense of belonging? STEPHANIE: Whoa. Yeah, that is a good way to frame it. I think there's a balance. There've been some, like, periods of my work life where I'm like, oh, I need more of a detachment from work and other times where I'm like, oh, I feel really disconnected, like, I want to feel like more of a part of this team. But I do think it's a management responsibility. And one thing that I know people to be cautious of is, you know, becoming too close at work. I don't know if your work being treated like a family, like, that kind of language can be a little bit borderline. JOËL: Almost manipulative. STEPHANIE: Right. Yeah, exactly. I do think there's something to be said about community at work and feeling like that kind of belonging, right? But also, that you can choose how much, like, you want to engage with that community and that being okay. I don't think it necessarily needs to be only through what you share about yourself. Like, you can have that sense of connection just by being a good colleague [chuckles], right? Like, even if the things you talk about are just within the realm of the project you're working on, like, there's still a sense of commitment and, yeah, in that relationship. And I think that is what matters when it comes to belonging. In the past, ways that I've seen that work well in regards to kind of how you share information is just, like, I don't know, share how you're doing. Like, you don't have to provide too many details. But it could be like, "Oh, I'm kind of distracted in my personal life right now, and that's why I wasn't able to get this done." People should be understanding of that, even if you don't kind of let them in on the more personal aspects of it. JOËL: Right. And you don't have to give any details, right? STEPHANIE: Yeah. JOËL: You should be in a place where people are comfortable with not knowing and not be like, "Ooh, what's going on with Stephanie's life? " STEPHANIE: [laughs] Yeah. But I do also think, like, the knowing that, like, something is going on is, like, also important context, right? Because you don't necessarily want that to impact the commitments you do have at work. JOËL: Right. And people tend to be a little bit more understanding if you're having to maybe shift some meetings around, or if you're struggling to focus on a particular day, or something like that. STEPHANIE: Yeah. 100%. JOËL: Yeah, we should normalize it of just like, "Hey, I'm having a hard day. I don't want to give details, but you know." STEPHANIE: Yeah. Yeah. I think a way that that is always kind of weird is how people communicate they're taking a sick day [laughs]. I actually had someone tell me that they really appreciated a time when I just said, "You know, I need to take care of myself today," and didn't really say anything else [laughs] about why. Because they're like, "Oh, like, that helped normalize this idea that, like, that is fine just kind of as is." There's no need to, you know, supply any additional reasoning. JOËL: Sometimes I feel like people almost feel the need to like, justify taking sick time. So, you've got to, like, say just how bad things are that now I'm actually taking sick time. STEPHANIE: Yeah, which is...that's not the point, right? 
You know, we have it because we need it [laughs]. So, yeah, I'm glad you mentioned that because I think that's actually a really good example of the ways that people, like, approach kind of bringing themselves to work like that. JOËL: Yeah, sometimes it's setting a boundary. An aspect I'm curious to look at is you, and I do a little bit of this with this podcast, right? Every week, we share a little bit of what's new in our world, and it goes out into the public internet. How do you tend to pick those topics and, like, how personal are you willing to get? STEPHANIE: Yeah. Oh, that's so hard. It's always hard [laughs], I think. I generally am pretty open. You know, I have talked about plans that I have for moving. I don't know, things about my gardening. I think I've also been a little vulnerable on the show before when I've, like, had a challenge, like, at work. But yeah, it's important, to me, I think, to be, like, true. Like, I think part of what our listeners like about this show is that we show up every week, and it's just a chat between two friends [laughs]. JOËL: Uh-huh. STEPHANIE: It also is kind of weird to know that it's just, like, out there, right? And I don't really know who's listening on the other side. I do know that, like, a lot of my friends listen. And, in some ways, I like to think that I'm talking to them, right? But yeah, sometimes I think about just, like, in a decade [laughs], it will still be out there. And on one hand, I think maybe it's kind of cool because I can listen back and be like, oh, like, that's what was going on for me in 2024. And other times I'm like, oh my God, what if I'm one day just, like, deeply embarrassed by things I've talked about on this show [laughs]? But that's a risk, I guess, I'm willing to take because I do think that the sense of connection that we foster with our audience is really meaningful. And it gives me a lot of joy whenever I meet a listener who's like, "Oh, you, you know, talked about this one thing, and I really related to it." And yeah, I guess that's what I do this for. What about you? JOËL: Yeah, I think kind of similar to you; tend to talk about things at work, interesting technical challenges, interesting sort of work, or even sometimes client-related challenges. Of course, you know, never calling out any clients by name, you know, talk about some hobbies and things like that. I think where I tend to draw the line a little bit is things that are a little bit more people-oriented in my personal life. So, I tend to not talk about family, and friends, and relationships, and things like that. And, you know, there are some times where there's like, those things intermix a little bit, where I'll, like, have shared, like, "This is what's new in my world." And then, like, off air, I'll follow up with you and say, "So, I didn't tell the whole story on air. STEPHANIE: [laughs] Yeah. JOËL: Here's what actually happened." Or, you know, "Here's this extra anecdote that I wanted you to know, but I didn't want everyone in the audience to hear." STEPHANIE: Yeah. I think the weirdest part for me, too, is I certainly have my, like, parasocial relationships with people that I follow on the internet [laughs], like, people on YouTube, or other podcasts, and stuff like that. But I haven't thought a whole lot about just, like, what that looks like for me as a host of a podcast. 
I think, kind of the size of the show now it feels right for me, where it's like I run into people who listen at conferences and stuff like that, but it is kind of contained to a work-related thing. So, that feels good because it, I think, for me, helps just give the work stuff a little bit of a deeper meaning, but otherwise isn't spilling over to my regular life. JOËL: And it's always fun when, you know, we get a listener email connecting to, you know, one of the random hobbies or something we've talked about and sharing a little bit of their experiences. I think last spring, I talked about getting a pair of bike shorts and, like, trying it out and seeing how that worked. And a listener called in and shared their experience with bike shorts, and, like, that's a lot of fun. It kind of creates that connection. So, I do enjoy that aspect. STEPHANIE: Yeah. And just to plug, you can write in to us at hosts@bikeshed.fm, and if you have anything you want to share that was inspired by what you heard us talk about on the show. JOËL: We'd love to have you. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

The Bike Shed
425: Modeling Associations in Rails

The Bike Shed

Play Episode Listen Later May 7, 2024 29:39


Stephanie shares an intriguing discovery about the origins of design patterns in software, tracing them back to architect Christopher Alexander's ideas in architecture. Joël is an official member of the Boston bike share system, and he loves it. He even got a notification on the app this week: "Congratulations. You have now visited 10% of all docking stations in the Boston metro area." #AchievementUnlocked, Joël! Joël and Stephanie transition into a broader discussion on data modeling within software systems, particularly how entities like companies, employees, and devices interconnect within a database. They debate the semantics of database relationships and the practical implications of various database design decisions, providing insights into the complexities of backend development. Christopher Alexander and Design Patterns (https://www.designsystems.com/christopher-alexander-the-father-of-pattern-language/) Rails guide to choosing between belongsto and hasone (https://edgeguides.rubyonrails.org/association_basics.html#choosing-between-belongs-to-and-has-one) Making impossible states impossible (https://www.youtube.com/watch?v=IcgmSRJHu_8) Transcript: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I learned a very interesting tidbit. I don't know if it's historical; I don't know if I would label it that. But, I recently learned about where the idea of design patterns in software came from. Are you familiar with that at all? JOËL: I read an article about that a while back, and I forget exactly, but there is, like, a design patterns movement, I think, that predates the software world. STEPHANIE: Yeah, exactly. So, as far as I understand it, there is an architect named Christopher Alexander, and he's kind of the one who proposed this idea of a pattern language. And he developed these ideas from the lens of architecture and building spaces. And he wrote a book called A Pattern Language that compiles, like, all these time-tested solutions to how to create spaces that meet people's needs, essentially. And I just thought that was really neat that software design adopted that philosophy, kind of taking a lot of these interdisciplinary ideas and bringing them into something technical. But also, what I was really compelled by was that the point of these patterns is to make these spaces comfortable and enjoyable for humans. And I have that same feeling evoked when I'm in a codebase that's really well designed, and I am just, like, totally comfortable in it, and I can kind of understand what's going on and know how to navigate it. That's a very visceral feeling, I think. JOËL: I love the kind of human-centric approach that you're using and the language that you're using, right? 
A place that is comfortable for humans. We want that for our homes. It's kind of nice in our codebases, too. STEPHANIE: Yeah. I have really enjoyed this framing because instead of just saying like, "Oh, it's quote, unquote, "best practice" to follow these design patterns," it kind of gives me more of a reason. It's more of a compelling reason to me to say like, "Following these design patterns makes the codebase, like, easier to navigate, or easier to change, or easier to work with." And that I can get kind of on board with rather than just saying, "This way is, like, the better way, or the superior way, or the way to do things." JOËL: At the end of the day, design patterns are a means to an end. They're not an end in of itself. And I think that's where it's very easy to get into trouble is where you're just sort of, I don't know, trying to rack up engineering points, I guess, for using a lot of design patterns, and they're not necessarily in service to some broader goal. STEPHANIE: Yeah, yeah, exactly. I like the way you put that. When you said that, for some reason, I was thinking about catching Pokémon or something like filling your Pokédex [laughs] with all the different design patterns. And it's not just, you know, like you said, to check off those boxes, but for something that is maybe a little more meaningful than that. JOËL: You're just trying to, like, hit the completionist achievement on the design patterns. STEPHANIE: Yeah, if someone ever reaches that, you know, gets that achievement trophy, let me know [laughs]. JOËL: Can I get a badge on GitHub for having PRs that use every single Gang of Four pattern? STEPHANIE: Anyway, Joël, what's new in your world? JOËL: So, on the topic of completing things and getting badges for them, I am a part of the Boston bike share...project makes it sound like it's a, I don't know, an exclusive club. It's Boston's bike share system. I have a subscription with them, and I love it. It's so practical. You can go everywhere. You don't have to worry about, like, a bike getting stolen or something because, like, you drop it off at a docking station, and then it's not your responsibility anymore. Yeah, it's very convenient. I love it. I got a notification on the app this week that said, "Congratulations. You have now visited 10% of all docking stations in the Boston metro area." STEPHANIE: Whoa, that's actually a pretty cool accomplishment. JOËL: I didn't even know they tracked that, and it's kind of cool. And the achievement shows me, like, here are all the different stations you've visited. STEPHANIE: You know what I think would be really fun? Is kind of the equivalent of a Spotify Wrapped, but for your biking in a year kind of around the city. JOËL: [laughs] STEPHANIE: That would be really neat, I think, just to be like, oh yeah, like, I took this bike trip here. Like, I docked at this station to go meet up with a friend in this neighborhood. Yeah, I think that would be really fun [laughs]. JOËL: You definitely see some patterns come up, right? You're like, oh yeah, well, you know, this is my commute into work every day. Or this is that one friend where, you know, every Tuesday night, we go and do this thing. STEPHANIE: Yeah, it's almost like a travelogue by bike. JOËL: Yeah. I'll bet there's a lot of really interesting information that could surface from that. It might be a little bit disturbing to find out that a company has that data on you because you can, like, pick up so much. STEPHANIE: That's -- JOËL: But it's also kind of fun to look at it. 
And you mentioned Spotify Wrapped, right? STEPHANIE: Right. JOËL: I love Spotify Wrapped. I have so much fun looking at it every year. STEPHANIE: Yeah. It's always kind of funny, you know, when products kind of track that kind of stuff because it's like, oh, like, it feels like you're really seen [laughs] in terms of what insights it's able to come up with. But yeah, I do think it's cool that you have this little badge. I would be curious to know if there's anyone who's, you know, managed to hit a hundred percent of all the docking stations. They must be a Boston bike messenger or something [laughs]. JOËL: Now that I know that they track it, maybe I should go for completion. STEPHANIE: That would be a very cool flex, in my opinion. JOËL: [laughs] And, you know, of course, they're always expanding the network, which is a good thing. I'll bet it's the kind of thing where you get, like, 99%, and then it's just really hard to, like, keep up. STEPHANIE: Yeah, nice. JOËL: But I guess it's very appropriate, right? For a podcast titled The Bike Shed to be enthusiastic about a bike share program. STEPHANIE: That's true. So, for today's topic, I wanted to pick your brain a little bit on a data modeling question that I posed to some other developers at thoughtbot, specifically when it comes to associations and associations through other associations [laughs]. So, I'm just going to kind of try to share in words what this data model looks like and kind of see what you think about it. So, if you had a company that has many employees and then the employee can also have many devices and you wanted to be able to associate that device with the company, so some kind of method like device dot company, how do you think you would go about making that association happen so that convenience method is available to you in the code? JOËL: As a convenience for not doing device dot employee dot company. STEPHANIE: Yeah, exactly. JOËL: I think a classic is, at least the other way, is that it has many through. I forget if you can do a belongs to through or not. You could also write, effectively, a delegation method on the device to effectively do dot employee dot company. STEPHANIE: Yeah. So, I had that same inkling as you as well, where at first I tried to do a belongs to through, but it turns out that belongs to does not support the through option. And then, I kind of went down the next path of thinking about if I could do a has one, a device has one company through employee, right? But the more I thought about it, the kind of stranger it felt to me in terms of the semantics of saying that a device has a company as opposed to a company having a device. It made more sense in plain English to think about it in terms of a device belonging to a company. JOËL: That's interesting, right? Because those are ways of describing relationships in sort of ActiveRecord's language. And in sort of a richer situation, you might have all sorts of different adjectives to describe relationships. Instead of just belongs to has many, you have things like an employee owns a device, an employee works for a company, you know because an employee doesn't literally belong to a company in the literal sense. That's kind of messed up. So, I think what ActiveRecord's language is trying to use is less trying to, like, hit maybe, like, the English domain language of how these things relate to, and it's more about where the foreign keys are in the database. STEPHANIE: Yeah. 
I like that point where even though, you know, these are the things that are available to us, that doesn't actually necessarily, you know, capture what we want it to mean. And I had gone to see what Rails' recommendation was, not necessarily for the situation I shared. But they have a section for choosing between which model should have the belongs to, as opposed to, like, it has one association on it. And it says, like you mentioned, you know, the distinction is where you place the foreign key, but you should kind of think about the actual meaning of the data. And, you know, we've talked a lot about, I think, domain modeling [chuckles] on the show. But their kind of documentation says that...the has something relationship says that one of something is yours, that it can, like, point back to you. And in the example I shared, it still felt to me like, you know, really, the device wanted to point to the company that it is owned by. And if we think about it in real-world terms, too, if that device, like, is company property, for example, then that's a way that that does make sense. But the couple of paths forward that I saw in front of me were to rework that association, maybe add a new column onto the device, and go down that path of codifying it at the database level. Or kind of maybe something as, like, an in-between step is delegating the method to the employee. And that's what I ended up doing because I wasn't quite ready to do that data migration. JOËL: Adding more columns is interesting because then you can run into sort of data consistency issues. Let's say on the device you have a company ID to see who the device belongs to. Now, there are sort of two different independent paths. You can ask, "Which company does this device belong to?" You can either check the company ID and then look it up in the company table. Or you can join on the employee and join the employee back under company. And those might give you different answers and that can be a problem with data consistency if those two need to stay in sync. STEPHANIE: Yeah, that is a good point. JOËL: There could be scenarios where those two are allowed to diverge, right? You can imagine a scenario where maybe a company owns the device, but an employee of a potentially different company is using the device. And so, now it's okay to have sort of two different chains because the path through the employee is about what company is using our devices versus which company actually owns them. And those are, like, two different kinds of relationships. But if you're trying to get the same thing through two different paths of joining, then that can set you up for some data inconsistency issues. STEPHANIE: Wow. I really liked what you said there because I don't think enough thought goes into the emergent relationships between models after they've been introduced to a codebase. At least in my experience, I've seen a lot of thought go up front into how we might want to model an ActiveRecord, but then less thought into seeing what patterns kind of show up over time as we introduce more functionality to these models, and kind of understand how they should exist in our codebase. Is that something that you find yourself kind of noticing? Like, how do you kind of pick up on the cue that maybe there's some more thought that needs to happen when it comes to existing database tables? JOËL: I think it's something that definitely is a bit of a red flag, for me, is when there are multiple paths to connect to sort of establish a relationship. 
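A rough sketch of the associations being discussed — the model names (Device, Employee, Company) come from the conversation, but the exact schema and options here are assumptions rather than code from the episode:

```ruby
class Company < ApplicationRecord
  has_many :employees
  has_many :devices, through: :employees
end

class Employee < ApplicationRecord
  belongs_to :company
  has_many :devices
end

class Device < ApplicationRecord
  belongs_to :employee

  # belongs_to doesn't accept :through, so the two options discussed are a
  # has_one that goes through the employee, or plain delegation:
  has_one :company, through: :employee
  # delegate :company, to: :employee   # the delegation alternative

  # Adding a devices.company_id column instead would create a second source
  # of truth: device.company and device.employee.company could then disagree,
  # which is the data-consistency risk described above.
end
```

Either way, callers can ask for device.company without knowing about the chain underneath, which is the convenience being discussed here.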
So, if I were to draw out some sort of, like, diagram of the models, boxes, and arrows or something like that, and then I could sort of overlay different paths through that diagram to connect two models and realize that those things need to be in sync, I think that's when I started thinking, ooh, that's a potential danger. STEPHANIE: Yeah, that's a really great point because, you know, the example I shared was actually a kind of contrived one based on what I was seeing in a client codebase, not, you know, I'm not actually working with devices, companies, and employees [laughs]. But it was encoded as, essentially, a device having one company. And I ended up drawing it out because I just couldn't wrap my head around that idea. And I had, essentially, an arrow from device pointing to company when I could also see that you could go take the path of going through employee [laughs]. And I was just curious if that was intentional or was it just kind of a convenient way to have that direct method available? I don't currently have enough context to determine but would be something I want to pay attention to. Like you said, it does feel like, if not a red flag, at least an orange one. JOËL: And there's a whole kind of science to some of this called database normalization, where they're sort of, like, they all have rather arcane names. They're the first normal form, the second normal form, the third normal form, you know, it goes on. If you look at the definition, they're all also a little bit arcane, like every element in a relation must depend solely upon the primary key. And you're just like, well, what does that mean? And how do I know if my table is compliant with that? So, I think it's worth, if you're Googling for some of these, find an article that sort of explains these a little bit more in layman's terms, if you will. But the general idea is that there are sort of stricter and stricter levels of the amount of sort of duplicate sources of truth you can have. In a sense, it's almost like DRY but for databases, and for your database schema in particular. Because when you have multiple sources of truth, like who does this device belong to, and now you get two different answers, or three different answers, now you've got a data corruption issue. Unlike bugs in code where it's, you know, it can be a problem because the site is down, or users have incorrect behavior, but then you can fix it later, and then go to production, and disruption to your clients is the worst that happened, this sort of problem in data is sometimes unrecoverable. Like, it's just, hey, -- STEPHANIE: Whoa, that sounds scary. JOËL: Yeah, no, data problems scare me in a way that code problems don't. STEPHANIE: Whoa. Could you...I think I interrupted you. But where were you going to go about once you have corrupted data? Like, it's unrecoverable. What happens then? JOËL: Because, like, if I look at the database, do I know who the real owner of this...if I want to fix it, let's say I fix my schema, but now I've got all this data where I've got devices that have two different owners, and I don't know which one is the real one. And maybe the answer is, I just sort of pick one and say, "Oh, the one that was through this association is sort of the canonical one, and we can just sort of ignore the other one." Do I have confidence in that decision? Well, maybe depending on some of the other context maybe, I'm lucky that I can have that. 
The doomsday scenario is that it's a little bit of one, a little bit of the other because there were different code paths that would write to one way or another. And there's no real way of knowing. If there's not too many devices, maybe I do an audit. Maybe I have to, like, follow up with all of my customers and say, "Hey, can you tell me which ones are really your devices?" That's not going to scale. Like, real worst case scenario, you almost have to do, like, a bit of a bankruptcy, where you say, "Hey, all the data prior to this date there's a bit of a question mark on it. We're not a hundred percent sure about it." And that does not feel great. So, now you're talking about mitigation strategies. STEPHANIE: Oof. Wow. Yeah, you did make it sound [laughs] very scary. I think I've kind of been on the periphery of a situation like this before, where it's not just that we couldn't trust the code. It's that we couldn't trust the data in the database either to tell us how things work, you know, for our users and should work from a product perspective. And I was on a previous client project where they had to, yeah, like, hire a bunch of people to go through that data and kind of make those determinations, like you said, to kind of figure out it out for, you know, all of these customers to determine the source of truth there. And it did not sound like an easy feat at all, right? That's so much time and investment that you have to put into that once you get to that point. JOËL: And there's a little bit of, like, different problems at different layers. You know, at the database layer, generally, you want all of that data to be really in a sort of single source of truth. Sometimes that makes it annoying to query because you've got to do all these joins. And so, there are various denormalization strategies that you can use to make that. Or sometimes it's a risk you're going to take. You're going to say, "Look, this table is not going to be totally normalized. There's going to be some amount of duplication, and we're comfortable with the risk if that comes up." Sometimes you also build layers of abstractions on top, so you might have your data sort of at rest in database tables fully normalized and separated out, but it's really clunky to query. So, you build out a database view on top of that that returns data in sort of denormalized fashion. But that's okay because you can always get your correct answer by querying the underlying tables. STEPHANIE: Wow. Okay. I have a lot of thoughts about this because I feel like database normalization, and I guess denormalization now, are skills that I am certainly not an expert at. And so, when it comes to, like, your average developer, how much do you think that people need to be thinking about this? Or what strategies do you have for, you know, a typical Rails dev in terms of how deep they should go [laughs]? JOËL: So, the classic advice is you probably want to go to, like, third to fourth normal form, usually three. There's also like 3.5 for some reason. That's also, I think, sometimes called BNF. Anyway, sort of levels of how much you normalize. Some of these things are, like, really, really basic things that Rails just builds into its defaults with that convention over configuration, so things like every table should have a primary key. And that primary key should be something that's fixed and unique. So, don't use something like combination of first name, last name as your primary key because there could be multiple people with the same name. 
Also, people change their names, and that's not great. But it's great that people can change their names. It's not great to rely on that as a primary key. There are things like look for repeating columns. If you've got columns in your schema with a number prefix at the end, that's probably a sign that you want to extract a table. So, I don't know, you have a movie, and you want to list the actors for a movie. If your movie table has actor 1, actor 2, actor 3, actor 4, actor 5, you know, like, all the way up to actor 20, and you're just like, "Yeah, no, we fill, like, actor 1 through N, and if there's any space left over, we just put nulls in those columns," that's a pretty big sign that, hey, why don't you instead have a, like, actor's table, and then make a, like, has many association? So, a lot of the, like, really basic normalization things, I think, are either built into Rails or built into sort of best practices around Rails. I think something that's really useful for developers to get as a sense beyond learning the actual different normal forms is think about it like DRY for your schema. Be wary of sort of multiple sources of truth for your data, and that will get you most of the way there. When you're designing sort of models and tables, oftentimes, we think of DRY more in terms of code. Do you ever think about that a little bit in terms of your tables as well? STEPHANIE: Yeah, I would say so. I think a lot of the time rather than references to another table just starting to grow on a certain model, I would usually lean towards introducing a join table there, both because it kind of encapsulates this idea that there is a connection, and it makes the space for that idea to grow if it needs to in the future. I don't know if I have really been disciplined in thinking about like, oh, you know, there should really...every time I kind of am designing my database tables, thinking about, like, there should only be one source of truth. But I think that's a really good rule of thumb to follow. And in fact, I can actually think of an example right now where we are a little bit tempted to break that rule. And you're making me reconsider [laughter] if there's another way of doing so. One thing that I have been kind of appreciative of lately is on my current client project; there's just, like, a lot of data. It's a very data-intensive and sensitive application. And so, when we introduce migrations, those PRs get tagged for review by someone over from the DevOps side, just to kind of provide some guidance around, you know, making sure that we're setting up our models to scale well. One of the things that he's been asking me on my couple of code changes I introduced was, like, when I introduced an index, like, it happened to be, like, a composite index with a couple of different columns, and the particular order of those columns mattered. And he kind of prompted me to, like, share what my use cases for this index were, just to make sure that, like, some thought went into it, right? Like, it's not so much that the way that I had done it was wrong, but just that I had, like, thought about it. And I like that as a way of kind of thinking about things at the abstraction that I need to to do my dev work day to day and then kind of mapping that to, like you were saying, those best practices around keeping things kind of performant at the database level. 
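As a concrete version of the repeated-columns example from this exchange — the movie/actor names come from the conversation, and the migration itself is only a hypothetical sketch:

```ruby
# Instead of movies.actor_1 ... movies.actor_20 (with nulls padding the gaps),
# extract the repeating columns into their own table:
class ExtractActorsFromMovies < ActiveRecord::Migration[7.1]
  def change
    create_table :actors do |t|
      t.references :movie, null: false, foreign_key: true
      t.string :name, null: false
      t.timestamps
    end
  end
end

class Movie < ApplicationRecord
  has_many :actors   # replaces the numbered columns
end

class Actor < ApplicationRecord
  belongs_to :movie
end
# (If the same actor can appear in many movies, this would grow into a
# join table, with has_many :through on both sides.)
```

The composite index Stephanie mentions involves a similar judgment call: an index on [:a, :b] generally serves queries that filter on :a (or on both columns), but not queries that filter only on :b, which is why the column order matters.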
JOËL: I think there's a bit of a parallel world that people could really benefit from dipping a toe in, and that's sort of the typed programming world, this idea of making impossible states impossible or making illegal states unrepresentable. That in the sort of now it's not schemas of database tables or schemas of types that you're creating but trying to prevent data coming into a state where someone could plausibly construct an instance of your object or your type that would be nonsensical in the context of your app, kind of trying to lock that down. And I think a lot of the ways that people in those communities think about...in a sense, it's kind of like database normalization for developers. So, if you're not wanting to, like, dip your toe in more of the sort of database-centric world and, like, read an article from a DBA, it might be worthwhile to look at some of those worlds as well. And I think a great starting point for that is a talk by Richard Feldman called Making Impossible States Impossible. It's for the Elm language. And there are equivalents, I think, in many others as well. STEPHANIE: That's really cool that you are making that connection. I know we've kind of briefly talked about workshops in the past on the show. But if there were a workshop for, you know, that kind of database normalization for developers, I would be the first to sign up [laughs]. JOËL: Hint, hint, RailsConf idea. There's something from your original question that I think is interesting to circle back to, and that's the fact that it was awkward to work through in Ruby to do the work that you wanted to do because the tables were laid out in a certain way. And sometimes, there's certain ways that you need the tables to be in order to be sort of safe to represent data, but they're not the optimal way that we would like to interact with them at the Ruby level. And I think it's okay for not everything in Ruby to be 100% reflective of the structure of the tables underneath. ActiveRecord gives us a great pattern, but everything is kind of one-to-one. And it's okay to layer on some things on top, add some extra methods to build some, like, connections in Ruby that rely on this normalized data underneath but that make life easier for you, or they better just represent or describe the relationships that you have. STEPHANIE: 100%. I was really compelled by your idea of introducing helpers that use more descriptive adjectives for what that relationship is like. We've talked about how Rails abstracted things from the database level, you know, for our convenience, but that should not stop us from, like, leaning on that further, right? And kind of introducing our own abstractions for those connections that we see in our domain. So, I feel really inspired. I might even kind of reconsider the way I handled the original example and see what I can make of it. JOËL: And I think your original solution of doing the delegation is a great example of this as well. You want the idea that a device belongs to a company or has an association called company, and you just don't want to go through that long chain, or at least you don't want that to be visible as an implementation detail. So, in this case, you delegate it through a chain of methods in Ruby. It could also be that you have a much longer chain of tables, and maybe they don't all have associations in Rails and all that. 
And I think it would be totally fine as well to define a method on an object where, I don't know, a device, I don't know, has many...let's call it technicians, which is everybody who's ever touched this device or, you know, is on a log somewhere for having done maintenance. And maybe that list of technicians is not a thing you can just get through regular Rails associations. Maybe there's a whole, like, custom query underlying that, and that's okay. STEPHANIE: Yeah, as you were saying that, I was thinking about that's actually kind of, like, active models are the great spot to put those methods and that logic. And I think you've made a really good case for that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Ruby for All
RailsConf 2024 Workshop Spotlight — Build High Performance Active Record Apps with Andy Atkinson

Ruby for All

Play Episode Listen Later May 2, 2024 43:52


In this episode of 'Ruby for All', Andrew and Julie discuss drawing inspiration from MC Escher through games like Monument Valley, to dealing with the intricacies of Discord roles and authorization, and the importance of immediate and continuous feedback through tools like Google Docs during talks. Then, guest Andrew Atkinson joins us and shares insights from his new book, “High Performance PostgreSQL for Rails,” detailing his journey from initial drafts to publishing and his shift towards independent consulting. He emphasizes the significance of understanding database operations, schema design, and efficient querying for optimizing Rails applications. Also, Andy talks about preparing a workshop for RailsConf, aiming to educate participants on query performance improvement techniques and the utility of using multiple Postgres instances. The conversation also touches upon the learning strategies, potential challenges, and benefits of workshops versus talks at conferences. Hit download now to hear more!
[00:00:10] Julie started drawing again inspired by MC Escher and playing a game called Monument Valley, and Andrew mentions he's on a tilt due to issues with Discord roles.
[00:01:59] Andrew introduces the git command ‘git instaweb' as a cool new find and shares something he remembered going back to the getting feedback for talks topic.
[00:04:24] Andrew “Andy” Atkinson introduces himself and discusses the completion of his book, “High Performance PostgreSQL for Rails,” the positive response in beta sales, and his new venture into independent consulting.
[00:08:16] Andy talks about his shift from development work to more educational and consultative roles, considering diving deeper into Postgres development.
[00:09:48] There's a discussion about Andy balancing work-life commitments, creating content like videos and tutorials, and leveraging these for marketing and educational purposes in the tech community.
[00:11:29] Andy considers the idea of making short videos for platforms like TikTok and Instagram Reels, and he talks about his preference for watching conference talks on YouTube over popular content creators. He also talks about Hussein Nassar's videos on Udemy and how he encouraged him to make short videos.
[00:15:01] Andy is conducting a workshop at RailsConf and expresses his excitement about presenting at RailsConf and the opportunity to connect with people interested in query and database optimization.
[00:17:09] Julie shares her preference for learning through hands-on workshops and looks forward to participating in Andy's workshop. Andy gives us a sneak peek of his workshop which will focus on query performance, query running, and index support, as well as exploring the benefits of having multiple Postgres instances.
[00:20:19] Andrew asks if Docker is necessary for the workshop, leading to a discussion on the practicality of simulating different database instances.
[00:22:10] Andy plans to prepare for potential challenges such as internet issues by possibly providing content on USB drives and ensuring attendees can access prerequisites before the workshop. He emphasizes the workshop format will be more hands-on with less lecturing.
[00:24:06] Julie asks about the prerequisites needed for audience members attending the workshop, especially if they're new to Rails or databases.
Andy clarifies that attendees should have at least built a database-backed Rails app or have similar experience with another language or framework.
[00:25:44] Julie mentions that there's a desire for more advanced content in talks and having a range allows participants to engage at different levels. Andrew shares his preference for advanced topics in workshops.
[00:29:45] Andrew explains his preference for collaborative learning and anticipates the second day of RailsConf to be different and beneficial for those who like to pair and bounce ideas off others. Andy wants to ensure that the workshop content is new and valuable, different from what attendees might learn elsewhere.
[00:32:11] Andy outlines the key takeaways he hopes attendees will leave with, including skills to improve the speed and scalability of their web apps, understanding database operations, and leveraging multiple databases with Rails Active Record.
[00:34:04] Andrew shares that, while reading Andy's talk outline, he realized he wasn't sure when to use indexes outside of standard use cases. Andy acknowledges the importance of not just solving existing problems with indexes, but also identifying where problems may arise in Postgres by tracking queries not using indexes.
[00:36:35] Andrew discusses the existence of gems like lol_dba, which suggest potential indexing opportunities, but notes the difficulty in validating those suggestions. Andy mentions other tools like Rails PG Extras and tells us the workshop will demonstrate how to use the ‘explain' command to evaluate the use and impact of indexes on individual query performance.
[00:38:44] We end with Andrew inquiring why Postgres does not allow control over the query plan selection. Andy responds that Postgres' declarative paradigm aims for the planner to continually adapt and choose the lowest cost plan and mentions an extension called pg_hint_plan.
[00:40:54] Find out where you can follow Andy online, where to get his book, and his upcoming conference plans.
Panelists:
Andrew Mason
Julie J.
Guest:
Andrew Atkinson
Sponsors:
Honeybadger
GoRails
Links:
Andrew Mason X/Twitter
Andrew Mason Website
Julie J. X/Twitter
Julie J. Website
Andrew Atkinson Website
Andrew Atkinson X/Twitter
Andrew Atkinson Consulting
M.C. Escher
Monument Valley
git instaweb
Ruby for All-Episode 26: The Database Wizard with Andrew Atkinson
High Performance PostgreSQL for Rails: Reliable, Scalable, Maintainable Database Applications by Andrew Atkinson
Hussein Nassar (Udemy)
Rideshare-Rails app for “High Performance PostgreSQL for Rails”
RailsConf 2024
RailsConf 2024 Schedule: Andy Atkinson, May 8th, 2:30-Build High Performance Active Record Apps
lol_dba
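As a quick, hypothetical illustration of the explain workflow mentioned in the notes above (the Trip model and rider_id column are invented; .explain is ActiveRecord's built-in wrapper around the database's EXPLAIN):

```ruby
# Show the Postgres query plan for a single query:
Trip.where(rider_id: 123).explain
# EXPLAIN for: SELECT "trips".* FROM "trips" WHERE "trips"."rider_id" = $1
#   Seq Scan on trips ...        <- a sequential scan hints at a missing index

# After something like `add_index :trips, :rider_id`, the plan would be
# expected to show an Index Scan (or Bitmap Index Scan) instead.
```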

The Bike Shed
424: The Spectrum of Automated Processes for Your Dev Team

The Bike Shed

Play Episode Listen Later Apr 30, 2024 36:47


Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes. The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions. Transcript: AD: We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP. Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue. STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky. And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error. STEPHANIE: That sounds reasonable to me. JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate the idea that you use a parser like this to sort of transform data from more loose to more strict values, values where you can have more assumptions. And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the construct, create this classroom." 
And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice. STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that. JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate. Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern. STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain? JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that. STEPHANIE: Gotcha. JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors. But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that. STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that? JOËL: This is more about the developer experience. STEPHANIE: Got it. JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. 
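A minimal sketch of the kind of parser object described here, assuming the standard dry-validation calls (Dry::Validation::Contract, #call, #success?, #errors) — the contract and value-object names are made up for illustration, not code from the actual project:

```ruby
require "dry/validation"

# Hypothetical contract describing the shape of the incoming attributes.
class ShoppingCartContract < Dry::Validation::Contract
  params do
    required(:customer_id).filled(:integer)
    required(:items).array(:hash)
  end
end

# Hypothetical value object we actually want to hand around the app.
ShoppingCart = Struct.new(:customer_id, :items, keyword_init: true)

# The "parser": validation, construction, and error handling in one place,
# so callers never touch dry-rb's result/monad machinery directly.
class ShoppingCartParser
  Failure = Struct.new(:errors)

  def call(raw_params)
    result = ShoppingCartContract.new.call(raw_params)

    if result.success?
      ShoppingCart.new(**result.to_h)   # the richer, already-validated object
    else
      Failure.new(result.errors.to_h)
    end
  end
end

# Usage:
ShoppingCartParser.new.call({ customer_id: 1, items: [{ sku: "a" }] })
# => #<struct ShoppingCart customer_id=1, items=[...]>
```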
So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape. And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that. So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated. STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense. JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on. STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts. JOËL: Yeah. STEPHANIE: Cool. JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system. STEPHANIE: Absolutely. That's very exciting. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet. JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor? STEPHANIE: Yeah, it's the latter. JOËL: Okay. 
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs]. JOËL: Oh no! STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears. JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me. STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky. And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder to and a lot more effort to [laughs] do in practice, I'm finding. JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk. STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective. JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell? STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it. JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th. 
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else. JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top? STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app. So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing. JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right? STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right? JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming. And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. 
If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior. STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that. JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work? STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally. JOËL: But it's an optional test failure, so you're allowed to let that test fail. STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically. JOËL: I see. STEPHANIE: Or an ignore list, yeah. JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync. STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs]. JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you. STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods. 
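The allow-list check Stephanie ran into could be written as a spec along these lines; the presenter class and the two constants are hypothetical stand-ins for whatever the real codebase calls them.

# A rough sketch: every association on the model must be registered in the
# admin presenter's displayed list or ignored list, or the spec fails with a hint.
require "rails_helper"

RSpec.describe AdminUserPresenter do
  it "accounts for every association on User" do
    associations = User.reflect_on_all_associations.map(&:name)
    registered = AdminUserPresenter::DISPLAYED_ASSOCIATIONS +
                 AdminUserPresenter::IGNORED_ASSOCIATIONS

    missing = associations - registered

    expect(missing).to be_empty,
      "Add #{missing.inspect} to DISPLAYED_ASSOCIATIONS or IGNORED_ASSOCIATIONS in AdminUserPresenter"
  end
end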
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different? STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done. JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant. STEPHANIE: Ooh. JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app. STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here? JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime? Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match. So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it. STEPHANIE: Hmm. That is very compelling to me. JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that? STEPHANIE: Yeah, I have. 
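The boot-time flavor of that invariant check, the "load the directory, compare it to the array, and raise" option Joël mentions, might look something like this in an initializer; the paths and the AdminRegistry constant are hypothetical.

# config/initializers/admin_registry_check.rb
# Raise at boot if the presenter files and the admin registry have drifted apart.
Rails.application.config.after_initialize do
  presenter_files = Dir[Rails.root.join("app/presenters/admin/*.rb")]
                      .map { |path| File.basename(path, ".rb") }
  registered = AdminRegistry::ENTRIES.map(&:to_s)

  unregistered = presenter_files - registered

  unless unregistered.empty?
    raise "Admin registry is out of sync; missing entries for: #{unregistered.join(', ')}"
  end
end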
And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous. When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check. JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool. STEPHANIE: Ooh. JOËL: And what makes sense in a linter. STEPHANIE: What was the argument for the linter being a worse tool by doing that? JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article. STEPHANIE: I'll have to revisit it after the show [laughs]. JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes. STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team? JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you." STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen." I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking. JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. 
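For reference, the per-method escape hatch under discussion is RuboCop's inline disable comment, and the focused-spec rule Stephanie mentions ships as the RSpec/Focus cop in the rubocop-rspec extension. The class and cop below are only an example of the syntax.

class ReportBuilder
  # rubocop:disable Metrics/MethodLength
  def build
    # ...a long method the team has agreed to leave as-is for now...
  end
  # rubocop:enable Metrics/MethodLength
end

Whether to allow that inline disable at all is exactly the debate in the article Joël brings up.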
Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go. STEPHANIE: Interesting. It's kind of like the honor system, then [laughs]. JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there's invariance you want to hold. Maybe there's certain things we're, like, this should never get to production. Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide. STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum. JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariance you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool. STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics which measure code complexity in things like that. How would you feel if that were a check that was blocking your work? JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build. You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up." STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. 
I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice. JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that. STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers? JOËL: I'm going to throw a fancy phrase at you. STEPHANIE: Ooh, I'm ready. JOËL: Signal-to-noise ratio. STEPHANIE: Whoa, uh-huh. JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds. So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise, I think is useful—to go back to that word again, heuristic for managing whether a tool is still helpful. STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process. JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check. And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. 
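The "you introduced a file here, consider updating the presenter" nudge could be sketched as a Danger rule like the one below. The file paths are hypothetical, and the point is a non-blocking warn comment on the pull request rather than a failed build.

# Dangerfile
new_models = git.added_files.grep(%r{\Aapp/models/})
touched_admin_presenter = git.modified_files.include?("app/presenters/admin_presenter.rb")

if new_models.any? && !touched_admin_presenter
  warn("You added #{new_models.join(', ')}: consider registering them in app/presenters/admin_presenter.rb so they show up in the admin.")
end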
So, splitting that out and say, "Look, we've got blocking and unblocking because we recognize these two classes of problems can be a helpful solution to this problem." STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system? [laughter] JOËL: I may be too much of a developer. STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations. JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up. STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Ruby for All
RailsConf 2024 Countdown — Behind the Scenes with Kevin Murphy

Ruby for All

Play Episode Listen Later Apr 18, 2024 28:46


In this episode of 'Ruby For All', hosts Andrew and Julie welcome guest Kevin Murphy, Software Developer at Pubmark and member of the RailsConf program committee. The discussion kicks off with Andrew and Julie catching up, then transitions into an in-depth conversation about the RailsConf planning process. Kevin and Julie, the Speaker Liaison, share insights into the workings of the program committee, the selection criteria for conference talks, and the challenges and rewards of organizing RailsConf. Additionally, Kevin elaborates on his role in the committee, the theme for this year's conference, and his goals for impact, and Julie looks forward to supporting speakers and managing workshops. The episode emphasizes the importance of volunteer contributions to the success of RailsConf and encourages attendees to express their gratitude to the organizers, and to go check out all the details at RailsConf.org and buy your tickets now! Press download to hear more! [00:00:15] Andrew and Julie catch up, Andrew overslept after finishing a project, Julie was up late watching videos, and Andrew recommends the show "Invincible." [00:02:22] Kevin introduces himself and explains why he likes working with Ruby on Rails. [00:03:37] Kevin discusses the role of the RailsConf program committee and explains its responsibilities, including reviewing proposals and scheduling talks. [00:05:10] We learn what Kevin looks for in a conference talk proposal, emphasizing relevance to the theme and potential audience interest. Julie shares her perspective on reviewing proposals, considering both her emotional response and broader interests. [00:07:38] Kevin shares his first experience on the committee, discusses the time commitment involved, and talks about the fairness of reviewing all proposals at once after the submission deadline. [00:11:03] Julie expresses her difficulty with the proposal reviewing process, suggesting that a grading scale might have been more effective for her. Kevin reflects on the surprises of the reviewing process and the difference between his perceptions and the rankings generated by the review system. [00:12:41] Julie adds that it is difficult to reject good talks due to overlapping topics or because they might fit better at another conference like RubyConf. [00:13:09] Andrew asks if the proposers receive feedback on why their talks may be more suited for RubyConf, and Kevin explains that if they ask, Ruby Central will make its best effort to provide it. [00:14:47] What's been the most rewarding part of this experience for Kevin and Julie? Kevin finds the opportunity to impact the community through the program committee rewarding, and Julie says she's waiting to see the full impact of her role as Speaker Liaison, which involves making speakers feel supported and pairing them with mentors. [00:16:24] Kevin and Julie both explain how they were invited to join the program committee by Ufuk, who's a member of the Ruby Central board, and Julie brings up a previous episode with Kevin on conference speaking. [00:17:52] Andrew asks what Kevin and Julie think will be the hardest part of being on the program committee at the conference. Kevin hopes his committee responsibilities won't impact his conference experience too much, and Julie anticipates the challenge of not having as much personal downtime during the conference due to her responsibilities. 

[00:19:41] Kevin reflects on the subjective nature of selecting talks and how different perceptions among committee members can affect decisions. He emphasizes that rejected talks are not necessarily of poor quality but may not fit due to other reasons. [00:21:02] Julie inquires about Kevin's role on the program committee and how he feels so far. His role involves scheduling and organizing accepted talks and workshops, reviewing and giving feedback on rejected proposals, and just being available. [00:22:00] Julie's role is Speaker Liaison, helping speakers with their needs, making them feel special, and helping with scheduling workshops. Kevin clarifies the concept of tracks at conferences; since there aren't any this year, the goal is to align all talks with the overall theme of building with Rails. Julie mentions a blog post written by Kevin about the absence of tracks at RailsConf. [00:23:28] Kevin shares his aspirations for his impact on RailsConf: ensuring a safe, educational experience for attendees, seeing first-time speakers succeed, and enjoying the mentorship process. Julie describes her motivation for becoming a Speaker Liaison: to provide a supportive experience for speakers. [00:25:03] RailsConf is happening in Detroit, May 7-9. Kevin expresses his excitement for various aspects, including the strong program and meeting friends, and urges everyone to visit RailsConf.org, check the schedule, and get tickets. [00:28:19] Find out where you can follow Kevin online. Panelists:Andrew MasonJulie JGuest:Kevin MurphySponsors:HoneybadgerGoRailsLinks:Andrew Mason X/TwitterAndrew Mason WebsiteJulie J. X/TwitterJulie J. WebsiteKevin Murphy WebsitePubmarkRuby for All Podcast-Episode 50: The Art of Conference Speaking with Kevin MurphyTracks Not At RailsConf 2024-by Kevin MurphyRailsConf-Detroit May 7-9, 2024RailsConf ScheduleRailsConf SpeakersInvincible: Season 2 (Amazon Prime) (00:00) - Introduction and Casual Catch-Up (02:22) - Guest Introduction: Kevin Murphy on Ruby and Rails (03:37) - Inside the RailsConf Program Committee: Roles and Responsibilities (05:10) - What Makes a Great Conference Talk? Selection Criteria Explored (07:38) - Behind the Scenes: Reviewing Conference Proposals (11:03) - Challenges in Proposal Review: Fairness and Surprises (12:41) - Rejection Reasons: When Good Talks Don't Make the Cut (13:09) - Feedback for Rejected Proposals: Bridging to RubyConf (14:47) - Rewarding Aspects of Program Committee Work (16:24) - Joining the Program Committee: Invitations and Insights (17:52) - Preparing for RailsConf: Expectations and Challenges (19:41) - The Subjectivity of Talk Selection: A Committee Perspective (21:02) - Roles Deep Dive: Organizing Talks and Supporting Speakers (23:28) - Goals and Aspirations for RailsConf Impact (25:03) - RailsConf 2024 Preview: What to Expect in Detroit

Code and the Coding Coders who Code it
Episode 34 - Ufuk Kayserilioglu

Code and the Coding Coders who Code it

Play Episode Listen Later Apr 16, 2024 44:14


Discover the heartbeat of Ruby with Ufuk Kayserilioglu, the engineering maestro at Shopify, as we unravel the layers of Ruby infrastructure and the bright future of Rails development. Ufuk's insights into the meticulous work of maintaining CRuby reveal the fine art of balancing performance with modernity while also diving into the exciting realms of TruffleRuby and the Prism compiler. His recent appointment to the Ruby Central board brings a fresh perspective to the community's cherished events, igniting innovative visions for RailsConf that are sure to resonate with enthusiasts and professionals alike.Feel the buzz of RailsConf as we praise the lineup of visionary speakers set to grace the stage, each chosen with the utmost care to embody the conference's core Rails theme. The anticipation bubbles over as we discuss the wealth of knowledge awaiting attendees, from software craftsmanship to the intricate web of professional relationships that flourish within the Rails ecosystem. Get a glimpse of what Hack Days promises, an unparalleled opportunity for real-time collaboration that could very well become the centerpiece of your RailsConf experience.As we gear up for one of the most pivotal gatherings in the Ruby community, we highlight the essential role of Ruby Central—our steward of Ruby's shared resources and the architect of iconic events like RailsConf and RubyConf. The chapter closes with a nod to the potential of supporting local meetups and regional conferences, an endeavor that strengthens the fabric of our community. And for a touch of tech magic, we share tales of Tailscale's impact on developer networking, a pathway to career ascension for many. Stay connected with us online for the latest on RailsConf and more, and prepare to accelerate your professional journey in the world of Ruby.Support the showReady to start your own podcast?This show is hosted on Buzzsprout and it's awesome, not to mention a Ruby on Rails application. Let Buzzsprout know we sent you and you'll get a $20 Amazon gift card if you sign up for a paid plan, and it helps support our show.

Remote Ruby
Irina Nazarova from Evil Martians

Remote Ruby

Play Episode Listen Later Apr 12, 2024 50:21


In today's episode, Jason, Chris, and Andrew, along with their guest, Irina Nazarova, CEO of Evil Martians, engage in a candid discussion that covers the intricacies of using Rails and integrating it with technologies like React, and the challenges of marketing developer-facing products. The discussion also touches on open-core business models, the relevance of Docker in current tech companies, and the future of software deployment. Also, Irina touches on a new tool from Thoughtbot called Superglue, a new open source product called Skooma, and she invites listeners to come to RailsConf and some Ruby meetups in San Francisco coming soon. Press download to hear more! Panelists:Jason CharnesChris OliverAndrew MasonGuest:Irina NazarovaSponsor:HoneybadgerLinks:Jason Charnes X/TwitterChris Oliver X/TwitterAndrew Mason X/TwitterIrina Nazarova X/TwitterEvil Martians X/TwitterEvil MartiansEvil Martians SkoomaThoughtbot SuperglueThrusterSupabase“Image processing servers benchmark”-imgproxy blogRailsConf -May 7-9, 2024Rails World-Sept 26-27, 2024RubyConf AU-April 11-12 2024Honeybadger Honeybadger is an application health monitoring tool built by developers for developers.Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

The Bike Shed
422: Listener Topics Grab Bag

The Bike Shed

Play Episode Listen Later Apr 9, 2024 35:23


Joël conducted a thoughtbot mini-workshop on query plans, which Stephanie found highly effective due to its interactive format. They then discuss the broader value of interactive workshops over traditional talks for deeper learning. Addressing listener questions, Stephanie and Joël explore the strategic use of if and else in programming for clearer code, the importance of thorough documentation in identifying bugs, and the use of Postgres' EXPLAIN ANALYZE, highlighting the need for environment-specific considerations in query optimization. Episode mentioning query plans (https://bikeshed.thoughtbot.com/418) Query plan visualizer (https://explain.dalibo.com/) RailsConf 2024 (https://railsconf.org/) Episode 349: Unpopular Opinions (https://bikeshed.thoughtbot.com/349) Squint test (https://www.youtube.com/watch?v=8bZh5LMaSmE) Episode 405: Retro on Sandi Metz rules (https://bikeshed.thoughtbot.com/405) Structuring conditionals in a wizard (https://thoughtbot.com/blog/structuring-conditionals-in-a-wizard) Episode 417: Module docs (https://bikeshed.thoughtbot.com/417) Episode 416: Multidimensional numbers (https://bikeshed.thoughtbot.com/416) ruby-units gem (https://github.com/olbrich/ruby-units) Solargraph (https://marketplace.visualstudio.com/items?itemName=castwide.solargraph) parity (https://github.com/thoughtbot/parity) Transcript: STEPHANIE:  Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: Just recently, I ran a sort of mini workshop for some colleagues here at thoughtbot to dig into the idea of query plans and, how to read them, how to use them. And, initially, this was going to be more of a kind of presentation style. And a colleague and I who were sharing this decided to go for a more interactive format where, you know, this is a, like, 45-minute slot. And so, we set it up so that we did a sort of intro to query plans in about 10 minutes then 15 minutes of breakout rooms, where people got a chance to have a query plan. And they had some sort of comprehension questions to answer about it. And then, 15 minutes together to have each group share a little bit about what they had discovered in their query plan back with the rest of the group, so trying to balance some understanding, some application, some group discussion, trying to keep it engaging. It was a pretty fun approach to sharing information like that. STEPHANIE: Yeah. I wholeheartedly agree. I got to attend that workshop, and it was really great. Now that I'm hearing you kind of talk about the three different components and what you wanted people attending to get out of it, I am impressed because [laughs] there is, like, a lot more thought, I think, that went into just participant engagement that reflecting on it now I'm like, oh yeah, like, I think that was really effective as opposed to just a presentation. Because you had, you know, sent us out into breakout rooms, and each group had a different query that they were analyzing. You had kind of set up links that had the query set up in the query analyzer. I forget what the tool was called that you used. JOËL: I forget the name of it, but we will link it in the show notes. STEPHANIE: Yeah. 
It was helpful for me, though, because, you know, I think if I were just to have learned about it in a presentation or even just looked at, you know, screenshots of it on a slide, that's different still from interacting with it and feeling more confident to use it next time I find myself in a situation where it might be helpful. JOËL: It's really interesting because that was sort of the goal of it was to make it a bit more interactive and then, hopefully, helping people to retain more information than just a straight up, like, presentation would be. I don't know how you feel, I find that often when I go to a place like, let's say, RailsConf, I tend to stay away from more of the workshop-y style events and focus more on the talks. Is that something that you do as well? STEPHANIE: Yeah. I have to confess that I've never attended a workshop [laughs] at a conference. I think it's partly my learning style and also partly just honestly, like, my energy level when I'm at the conference. I kind of just want to sit back. It's on my to-do list. Like, I definitely want to attend one just to see what it's like. And maybe that might even inspire me to want to create my own workshop. But it's like, once I'm in it, and, you know, like, everyone else is also participating, I'm very easily peer pressured [laughs]. So, in a group setting, I will find myself enjoying it a lot more. And I felt that kind of same way with the workshop you ran for our team. Though, I will say a funny thing that happened was that when I went out into my breakout group with another co-worker, and we were trying to grok this query that you gave us, we found out that we got the hardest one, the most complicated one [laughs] because there were so many things going on. There was, like, multiple, like, you know, unions, some that were, like, nested, and then just, like, a lot of duplication as well, like, some conditions that were redundant because of a different condition happening inside of, like, an inner statement. And yeah, we were definitely scratching our heads for a bit and were very grateful that we got to come back together as a group and be like, "Can someone please help? [laughs] Let's figure out what's going on here." JOËL: Sort of close that loop and like, "Hey, here's what we saw. What does everybody else see?" STEPHANIE: Yeah, and I appreciated that you took queries from actual client projects that you were working on. JOËL: Yeah, that was the really fun part of it was that these were not sort of made-up queries to illustrate a point. These were actual queries that I had spent some time trying to optimize and where I had had to spend a lot of time digging into the query plans to understand what was going on. And it sounds like, for you, workshops are something that is...they're generally more engaging, and you get more value out of them. But there's higher activation energy to get started. Does that sound right? STEPHANIE: Yeah, that sounds right. I think, like, I've watched so many talks now, both in person and on YouTube, that a lot of them are easily forgettable [laughs], whereas I think a workshop would be a lot more memorable because of that interactivity and, you know, you get out of it what you put in a little bit. JOËL: Yeah, that's true. Have you looked at the schedule for RailsConf 2024 yet? And are there any workshops on there that you're maybe considering or that maybe have piqued your interest? STEPHANIE: I have, in fact, and maybe I will check off attending a workshop [laughs] off my bucket list this year. 
There are two that I'm excited about. Unfortunately, they're both at the same time slot, so I -- JOËL: Oh no. You're going to have to choose. STEPHANIE: I know. I imagine I'll have to choose. But I'm interested in the Let's Extend Rails With A Gem by Noel Rappin and Vision For Inclusion Workshop run by Todd Sedano. The Rails gem one I'm excited about because it's just something that I haven't had to do really in my dev career so far, and I think I would really appreciate having that guidance. And also, I think that would be motivation to just get that, like, hands-on experience with it. Otherwise, you know, this is something that I could say that I would want to do and then never get [chuckles] around to it. JOËL: Right, right. And building a gem is the sort of thing that I think probably fits better in a workshop format than in a talk format. STEPHANIE: Yeah. And I've really appreciated all of Noel's content out there. I've found it always really practical, so I imagine that the workshop would be the same. JOËL: So, other than poring over the RailsConf schedule and planning your time there, what has been new for you this week? STEPHANIE: I have a really silly one [laughs]. JOËL: Okay. STEPHANIE: Which is, yesterday I went out to eat dinner to celebrate my partner's birthday, and I experienced, for the first time, robots [laughter] at this restaurant. So, we went out to Hot Pot, and I guess they just have these, like, robot, you know, little, small dish delivery things. They were, like, as tall as me, almost, at least, like, four feet. They were cat-themed. JOËL: [laughs] STEPHANIE: So, they had, like...shaped like cat...they had cat ears, and then there was a screen, and on the screen, there was, like, a little face, and the face would, like, wink at you and smile. JOËL: Aww. STEPHANIE: And I guess how this works is we ordered our food on an iPad, and if you ordered some, like, side dishes and stuff, it would come out to you on this robot cat with wheels. JOËL: Very fun. STEPHANIE: This robot tower cat. I'm doing a poor job describing it because I'm still apparently bewildered [laughs]. But yeah, I was just so surprised, and I was not as...I think I was more, like, shocked than delighted. I imagine other people would find this, like, very fun. But I was a little bit bewildered [laughs]. The other thing that was very funny about this experience is that these robots were kind of going down the aisle between tables, and the aisles were not quite big enough for, like, two-way traffic. And so, there were times where I would be, you know, walking up to go use the restroom, and I would turn the corner and find myself, like, face to face with one of these cat robot things, and, like, it's starting to go at me. I don't know if it will stop [laughs], and I'm the kind of person who doesn't want to find out. JOËL: [laughs] STEPHANIE: So, to avoid colliding with this, you know, food delivery robot, I just, like, ran away from it [laughs]. JOËL: You don't know if they're, like, programmed to yield or something like that. STEPHANIE: Listen, it did not seem like it was going to stop. JOËL: [laughs] STEPHANIE: It got, like, I was, you know, kind of standing there frozen in paralysis [laughs] for a little while. And then, once it got, I don't know, maybe two or three feet away from me, I was like, okay, like, this is too close for comfort [laughs]. So, that was my, I don't know, my experience at this robot restaurant. Definitely starting to feel like I'm in the, I don't know, is this the future? 
Someone, please let me know [laughs]. JOËL: Is this a future that you're excited or happy about, or does this future seem a little bit dystopian to you? STEPHANIE: I was definitely alarmed [laughter]. But I'm not, like, a super early adopter of new technology. These kinds of innovations, if you will, always surprise me, and I'm like, oh, I guess this is happening now [laughs]. And I will say that the one thing I did not enjoy about it is that there was not enough room to go around this robot. It definitely created just pedestrian traffic issues. So, perhaps this could be very cool and revolutionary, but also, maybe design robots for humans first. JOËL: Or design your dining room to accommodate your vision for the robots. I'm sure that flying cars and robots will solve all of this, for sure. STEPHANIE: Oh yeah [laughter]. Then I'll just have to worry about things colliding above my head. JOËL: And for the listeners who cannot see my face right now, that was absolutely sarcasm [laughs]. Speaking of our listeners, today we're going to look at a group of different listener questions. And if you didn't know that, you could send in a question to have Stephanie and I discuss, you can do that. Just send us an email at hosts@bikeshed.fm. And sometimes, we put it into a regular episode. Sometimes, we combine a few and sort of make a listener question episode, which is what we're doing today. STEPHANIE: Yeah. It's a little bit of a grab bag. JOËL: Our first question comes from Yuri, and Yuri actually has a few different questions. But the first one is asking about Episode 349, which is pretty far back. It was my first episode when I was coming on with Chris and Steph, and they were sort of handing the baton to me as a host of the show. And we talked about a variety of hot takes or unpopular opinions. Yuri mentions, you know, a few that stood out to him: one about SPAs being not so great, one about how you shouldn't need to have a side project to progress in your career as a developer, one about developer title inflation, one about DRY and how it can be dangerous for a mid-level dev, avoiding let in RSpec specs, the idea that every if should come with an else, and the idea that developers shouldn't be included in design and planning. And Yuri's question is specifically the question about if statements, that every if should come with an else. Is that still an opinion that we still have, and why do we feel that way? STEPHANIE: Yeah, I'm excited to get into this because I was not a part of that episode. I was a listener back then when it was still Steph and Chris. So, I am hopefully coming in with a different, like, additional perspective to add as well while we kind of do a little bit of a throwback. So, the one about every if should come with an else, that was an unpopular opinion of yours. Do you mind kind of explaining what that means for you? JOËL: Yeah. So, in general, Ruby is an expression-oriented language. So, if you have an if that does not include an else, it will implicitly return nil, which can burn you. There may be some super expert programmers out there that have never run into undefined method for nil nil class, but I'm still the kind of programmer who runs into that every now and then. And so, implicit nils popping up in my code is not something I generally like. I also generally like having explicit else for control flow purposes, making it a little bit clearer where flow of control goes and what are the actual paths through a particular method. 
And then, finally, doing ifs and elses instead of doing them sort of inline or as trailing conditionals or things like that, by having them sort of all on each lines and balancing out. The indentation itself helps to scan the code a little bit more. So, deeper indentation tells you, okay, we're, like, nesting multiple conditions, or something like that. And so, it makes it a little bit easier to spot complexity in the code. You can apply, and I want to say this is from Sandi Metz, the squint test. STEPHANIE: Yeah, it is. JOËL: Where you just kind of, like, squint at your code so you're not looking at the actual characters, and more of the structure, and the indentation is actually a friend there rather than something to fight. So, that was sort of the original, I think, idea behind that. I'm curious, in your experience, if you would like to balance your conditionals, ifs with something else, or if you would like to do sort of hanging ifs. STEPHANIE: Hanging ifs, I like that phrase that you just coined there. I agree with your opinion, and I think it's especially true if you're returning values, right? I mean, in Ruby, you kind of always are. But if you are caring about return values, like you said, to avoid that implicit nil situation, I find, especially if you're writing tests for that code, it's really easy, you know, if you spot that condition, you're like, okay, great. Like, this is a path I need to test. But then, oftentimes, you don't test that implicit path, and if you don't enter the condition, then what happens, right? So, I think that's kind of what you're referring to when you talk about both. It's, like, easier to spot in terms of control flow, like, all the different paths of execution, as well as, yeah, like, saving you the headaches of some bugs down the line. One thing that I thought about when I was kind of revisiting that opinion of yours is the idea of like, what are you trying to communicate is different or special about this condition when you are writing an if statement? And, in my head, I kind of came up with a few different situations you might find yourself in, which is, one, where you truly just have, like, a special case, and you're treating that completely differently. Another when you have more of a, like, binary situation, where it's you want to kind of highlight either...more of a dichotomy, right? It's not necessarily that there is a default but that these are two opposite things. And then, a third situation in which you have multiple conditions, but you only happen to have two right now. JOËL: Interesting. And do you think that, like, breaking down those situations would lead you to pick different structures for writing your conditionals? STEPHANIE: I think so. JOËL: Which of those scenarios do you think you might be more likely to reach for an if that doesn't have an else that goes with it? STEPHANIE: I think that first one, the special case one. And in Yuri's email, he actually asked, as a follow-up, "Do we avoid guard clauses as a result of this kind of heuristic or rule?" And I think that special case situation is where a guard clause would shine because you are highlighting it by putting it at the top of a method, and then saying like, you know, "Bail out of this" or, like, "Return this particular thing, and then don't even bother about the rest of this code." JOËL: I like that. And I think guard clauses they're not the first thing I reach for, but they're not something I absolutely avoid. I think they need to be used with care. 
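As a quick illustration of the implicit nil and the balanced alternative being discussed, with invented order and label names:

# Without an else, this method can quietly return nil for non-shippable orders.
def shipping_label(order)
  if order.shippable?
    build_label(order)
  end
end
# shipping_label(digital_order) # => nil, waiting to blow up somewhere else

# Balancing the conditional makes both paths explicit.
def shipping_label(order)
  if order.shippable?
    build_label(order)
  else
    NullLabel.new
  end
end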
Like you said, they have to be in the top of your method. If you're adding returns and things that break out of your method, deep inside a conditional somewhere, 20 lines into your method, you don't get to call that a guard clause anymore. That's something else entirely. I think, ideally, guard clauses are also things that will break out of the method, so they're maybe raising exception. Maybe they're returning a value. But they are things that very quickly check edge cases and bail so that the body of the method can focus on expecting data in the correct shape. STEPHANIE: I have a couple more thoughts about this; one is I'm reminded of back when we did that episode on kind of retroing Sandi Metz's Rules For Developers. I think one of the rules was: methods should only be five lines of code. And I recall we'd realized, at least I had realized for the first time, that if you write an if-else condition in Ruby, that's exactly five lines [laughs]. And so, now that I'm thinking about this topic, it's cool to see that a couple of these rules converge a little bit, where there's a bit of explicitness in saying, like, you know, if you're starting to have more conditions that can't just be captured in a five-line if-else statement, then maybe you need something else there, right? Something perhaps like polymorphic or just some way to have branched earlier. JOËL: That's true. And so, even, like, you were talking about the exceptional edge cases where you might want to bail. That could be a sign that your method is doing too much, trying to like, validate inputs and also run some sort of algorithm. Maybe this needs to be some sort of, like, two-step thing, where there's almost, like, a parsing phase that's handled by a different object or a different method that will attempt to standardize your inputs and raise the appropriate errors and everything. And then, your method that has the actual algorithm or code that you're trying to run can just assume that its inputs are in the correct shape, kind of that pushing the uncertainty to the edges. And, you know, if you've only got one edge case to check, maybe it's not worth to, like, build this in layers, or separate out the responsibilities, or whatever. But if you're having a lot, then maybe it does make sense to say, "Let's break those two responsibilities out into two places." STEPHANIE: Yeah. And then, the one last kind of situation I've observed, and I think you all talked about this in the Unpopular Opinions episode, but I'm kind of curious how you would handle it, is side effects that only need to be applied under a certain condition. Because I think that's when, if we're focusing less on return values and more just on behavior, that's when I will usually see, like, an if something, then do this that doesn't need to happen for the other path. JOËL: Yes. I guess if you're doing some sort of side effect, like, I don't know, making a request to an API or writing to a file or something, having, like, else return nil or some other sentinel value feels a little bit weird because now you're caring about side effects rather than return values, something that you need to keep thinking of. And that's something where I think my thing has evolved since that episode is, once you start having multiple of these, how do they compose together? So, if you've got if condition, write to a file, no else, keep going. New if condition, make a request to an API endpoint, no else, continue. 
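A guard clause in the shape described in this exchange sits at the top of the method, bails out immediately, and lets the rest of the method assume its input is in the correct shape; the names here are invented.

def charge!(order)
  raise ArgumentError, "order has no total" if order.total.nil?
  return :already_charged if order.charged?

  # Past the guards, the happy path can assume a valid, unpaid order.
  payment_gateway.charge(order.total)
end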
What I've started calling independent conditions now, you have to think about all the different ways that they can combine, and what you end up having is a bit of a combinatorial explosion. So, here we've got two potential actions: writing to a file, making a request to an API. And we could have one or the other, or both, or neither could happen, depending on the inputs to your method, and maybe you actually want that, and that's cool. Oftentimes, you didn't necessarily want all of those, especially once you start going to three, four, five. And now you've got that, you know, explosion, like, two to the five. That's a lot of paths through your method. And you probably didn't really need that many. And so, that can get really messy. And so, sometimes the way that an if and an else work where those two paths are mutually exclusive actually cuts down on the total number of paths through your method. STEPHANIE: Hmm, I like that. That makes a lot of sense to me. I have definitely seen a lot of, like, procedural code, where it becomes really hard to tell how to even start relating some of these concepts together. So, if you happen to need to run a side effect, like writing to a file or, I don't know, one common one I can think of is notifying something or someone in a particular case, and maybe you put that in a condition. But then there's a different branching path that you also need to kind of notify someone for a similar reason, maybe even the same reason. It starts to become hard to connect, like, those two reasons. It's not something that, like, you can really scan for or, like, necessarily make that connection because, at that point, you're going down different paths, right? And there might be other signals that are kind of confusing things along the way. And it makes it a lot harder, I think, to find a shared abstraction, which could ultimately make those really complicated nested conditions a little more manageable or just, like, easier to understand at a certain complexity. I definitely think there is a threshold. JOËL: Right. And now you're talking about nested versus non-nested because when conditions are sort of siblings of each other, an if-else behaves differently from two ifs without an else. I think a classic situation where this pops up is when you're structuring code for a wizard, a multi-step form. And, oftentimes, people will have a bunch of checks. They're like, oh, if this field is present, do these things. If this field is present, do these things. And then, it becomes very tricky to know what the flow of control is, what you can expect at what moment, and especially which actions might get shared across multiple steps. Is it safe to refactor in one place if you don't want to break step three? And so, learning to think about the different paths through your code and how different conditional structures will impact that, I think, was a big breakthrough for me in terms of taking the next logical step in terms of thinking, when do I want to balance my ifs and when do I not want to? I wrote a whole article on the topic. We'll link it in the show notes. So, Yuri, thanks for a great question, bringing us back into a classic developer discussion. Yuri also asks or gives us a bit of a suggestion: What about revisiting this topic and doing an episode on hot takes or unpopular topics? Is that something that you'd be interested in, Stephanie? STEPHANIE: Oh yeah, definitely, because I didn't get to, you know, share my hot topics the last episode [laughs]. 
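A small hypothetical sketch of the "independent conditions" point above: two ifs with no else give four possible paths through a method, while an if/else keeps the branches mutually exclusive and cuts that back down to two.

```ruby
# Independent conditions: neither, either, or both side effects can run,
# so there are four paths to reason about (and to test).
def process_independently(record)
  export_to_file(record) if record.needs_export?
  notify_api(record)     if record.needs_sync?
end

# Mutually exclusive branches: exactly two paths.
def process_exclusively(record)
  if record.needs_export?
    export_to_file(record)
  else
    notify_api(record)
  end
end
```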
[inaudible 24:23] JOËL: You just got them queued up and ready to go. STEPHANIE: Yeah, exactly. So, yeah, I will definitely be brainstorming some spicy takes for the show. JOËL: So, Yuri, thanks for the questions and for the episode suggestion. STEPHANIE: So, another listener, Kevin, wrote in to us following up from our episode on Module Docs and about a different episode about Multi-dimensional Numbers. And he mentioned a gem that he maintains. It's called Ruby Units. And it basically handles the nitty gritty of unit conversions for you, which I thought was really neat. He mentioned that it has a numeric class, and it knows how to do math [laughs], which I would find really convenient because that is something that I have been grateful not to have to really do since college [laughs], at least those unit conversions and all the things that I'd probably learned in math and physics courses [laughs]. So, I thought that was really cool, definitely is one to check out if you frequently work with units. It seemed like it would be something that would make sense for a domain that is more science or deals in that kind of domain. JOËL: I'm always a huge fan of anything that tags raw numbers that you're working with with a quantity rather than just floating raw numbers around. It's so easy to make a mistake to either treat a number as a quantity you didn't think of, or make some sort of invalid operation on it, or even to think you have a value in a different size than you do. You think you're dealing with...you know you have a time value, but you think it's in seconds. It's actually in milliseconds. And then, you get off by some big factor. These are all mistakes that I have personally made in my career, so leaning on a library to help avoid those mistakes, have better information hiding for the things that really aren't relevant to the work that I'm trying to do, and also, kind of reify these ideas so that they have sort of a name, and they're, like, their own object, their own thing that we can interact with in the app rather than just numbers floating around, those are all big wins from my perspective. STEPHANIE: I also just thought of a really silly use case for this that is, I don't know, maybe I'll have to experiment with this. But every now and then, I find the need to have to convert a unit, and I just pop into Google, and I'm like, please give me, you know, I'll search for 10 kilometers in miles or something [laughs]. But then I have to...sometimes Google will figure it out for me, and sometimes it will just list me with a bunch of weird conversion websites that all have really old-school UI [laughs]. Do you know what I'm talking about here? Anyway, I would be curious to see if I could use this gem as a command-line interface [laughs] for me without having to go to my browser and roll the dice with freecalculator.com or something like that [laughs]. JOËL: One thing that's really cool with this library that I saw is the ability to define your own units, and that's a thing that you'll often encounter having to deal with values that are maybe not one of the most commonly used units that are out there, dealing with numbers that might mean a thing that's very particular to your domain. So, that's great that the library supports that. I couldn't see if it supports multi-dimensional units. That was the episode that inspired the comment. But either way, this is a really cool library. And thank you, Kevin, for sharing this with us. 
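For anyone curious, here is a rough sketch of what using Kevin's Ruby Units gem can look like. The exact API may differ between versions, so treat this as an approximation rather than authoritative usage.

```ruby
require "ruby-units"

distance = RubyUnits::Unit.new("10 km")
distance.convert_to("miles")   # => a quantity of roughly 6.2 miles

# The units also "know how to do math", keeping dimensions consistent:
RubyUnits::Unit.new("1 m") + RubyUnits::Unit.new("10 cm")   # => 1.1 m
```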
STEPHANIE: Kevin also mentions that he really enjoys using YARD docs. And we had done that whole episode on Module Docs and your experience writing them. So, you know, your people are out there [laughs]. JOËL: Yay. STEPHANIE: And we talked about this a little bit; I think that writing the docs, you know, on one hand, is great for future readers, but, also, I think has the benefit of forcing the author to really think about their inputs and outputs, as Kevin mentions. He's found bugs by simply just going through that process in designing his code, and also recommends Solargraph and Solargraph's VSCode extension, which I suspect really kind of makes it easy to navigate a complex codebase and kind of highlight just what you need to know when working with different APIs for your classes. So, I recently kind of switched to the Ruby LSP, build with Shopify, but I'm currently regretting it because nothing is working for me right now. So, that might be the push that I need [laughs] to go back to using Solargraph. JOËL: It's interesting that Kevin mentions finding bugs while writing docs because that has absolutely been my experience. And even in this most recent round, I was documenting some code that was just sort of there. It wasn't new code that I was writing. And so, I had given myself the rule that this would be documentation-only PRs, no code changes. And I found some weird code, and I found some code that I'm 98% sure is broken. And I had to have the discipline to just put a notice in the documentation to be like, "By the way, this is how the method should work. I'm pretty sure it's broken," and then, maybe come back to it later to fix it. But it's amazing how trying to document code, especially code that you didn't write yourself, really forces you to read it in a different way, interact with it in a different way, and really, like, understand it in a deep way that surprised me a little bit the first time I did it. STEPHANIE: That's cool. I imagine it probably didn't feel good to be like, "Hey, I know that this is broken, and I can't fix it right now," but I'm glad you did. That takes a lot of, I don't know, I think, courage, in my opinion [laughs], to be like, "Yeah, I found this, and I'm going to, you know, like, raise my hand acknowledging that this is how it is," as supposed to just hiding behind a broken functionality that no one [laughs] has paid attention to. JOËL: And it's a thing where if somebody else uses this method and it breaks in a way, and they're like, "Well, the docs say it should behave like this," that would be really frustrating. If the docs say, "Hey, it should behave like this, but it looks like it's broken," then, you know, I don't know, I would feel a little bit vindicated as a person who's annoyed at the code right now. STEPHANIE: For sure. JOËL: Finally, we have a message from Tim about using Postgres' EXPLAIN ANALYZE. Tim says, "Hey, Joël, in the last episode, you talked a bit about PG EXPLAIN ANALYZE. As you stated, it's a great tool to help figure out what's going on with your queries, but there is a caveat you need to keep in mind. The query planner uses statistics gathered on the database when making decisions about how to fetch records. If there's a big difference between your dev or staging database and production, the query may make different decisions. For example, if a table has a low number of records in it, then the query planner may just do a table scan, but in production, it might use an index. 
Also, keep in mind that after a schema changes, it may not know about new indexes or whatever unless an explicit ANALYZE is done on the table." So, this is really interesting because, as Tim mentions, EXPLAIN ANALYZE doesn't behave exactly the same in production versus in your local development environment. STEPHANIE: When you were trying to optimize some slow queries, where were you running the ANALYZE command? JOËL: I used a combination. I mostly worked off of production data. I did a little bit on a staging database that had not the same amount of records and things. That was pretty significant. And so, I had to switch to production to get realistic results. So, yes, I encountered this kind of firsthand. STEPHANIE: Nice. For some reason, this comment also made me think of..., and I wanted to plug a thoughtbot shell command that we have called Parity, which lets you basically download your production database into your local dev database if you use Heroku. And that has come in handy for me, obviously, in regular development, but would be really great in this use case. JOËL: With all of the regular caveats around security, and PII, and all this stuff that come with dealing with production data. But if you're running real productions on production, you should be cleared and, like, trained for access to all of that. I also want to note that the queries that you all worked with on Friday are also from the production database. STEPHANIE: Really? JOËL: So, you got to see what it actually does, what the actual timings were. STEPHANIE: I'm surprised by that because we were using, like, a web-based tool to visualize the query plans. Like, what were you kind of plugging into the tool for it to know? JOËL: So, the tool accepts a query plan, which is a text output from running a SQL query. STEPHANIE: Okay. So, it's just visualizing it. JOËL: Correct. Yeah. So, you've got this query plan, which comes back as this very intimidating block of, like, text, and arrows, and things like that. And you plug it into this web UI, and now you've got something that is kind of interactive, and visual, and you can expand or collapse nodes. And it gives you tooltips on different types of information and where you're spending the most time. So, yeah, it's just a nicer way to visualize that data that comes from the query plan. STEPHANIE: Gotcha. That makes sense. JOËL: So, that's a very important caveat. I don't think that's something that we mentioned on the episode. So, thank you, Tim, for highlighting that. And for all of our listeners who were intrigued by leaning into EXPLAIN ANALYZE and query plan viewers to debug your slow queries, make sure you try it out in production because you might get different results otherwise. STEPHANIE: So, yeah, that about wraps up our listener topics in recent months. On that note, Joël, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? 
If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Ruby on Rails Podcast
Episode 512: RailsConf With Ufuk Kayserilioglu

Ruby on Rails Podcast

Play Episode Listen Later Mar 27, 2024 30:46


RailsConf is coming up fast! The program committee has released the schedule and keynote speakers. Ufuk Kayserilioglu joins the show to talk about the program and Ruby Central.

Show Notes
- Kevin's blog post about RailsConf: https://kevinjmurphy.com/posts/tracks-not-at-railsconf-2024/
- RailsConf website & registration: https://railsconf.org/

If you have comments about this episode, send an email to comments@therubyonrailspodcast.com. You can include a text comment or attach a file from Voice Memos or Google Recorder and we'll respond to some of them on a future show.

Sponsors
Honeybadger (https://www.honeybadger.io/)
As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!

The Bike Shed
420: Test Database Woes

The Bike Shed

Play Episode Listen Later Mar 26, 2024 28:16


Joël shares his recent project challenge with Tailwind CSS, where classes weren't generating as expected due to the dynamic nature of Tailwind's CSS generation and pruning. Stephanie introduces a personal productivity tool, a "thinking cap," to signal her thought process during meetings, which also serves as a physical boundary to separate work from personal life. The conversation shifts to testing methodologies within Rails applications, leading to an exploration of testing philosophies, including developers' assumptions about database cleanliness and their impact on writing tests. Avdi's classic post on how to use database cleaner (https://avdi.codes/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/) RSpec change matcher (https://rubydoc.info/gems/rspec-expectations/RSpec%2FMatchers:change) Command/Query separation (https://martinfowler.com/bliki/CommandQuerySeparation.html) When not to use factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot) Why Factories? (https://thoughtbot.com/blog/why-factories) Transcript:  STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: I'm working on a new project, and this is a project that uses Tailwind CSS for its styling. And I ran into a bit of an annoying problem with it just getting started, where I was making changes and adding classes. And they were not changing the things I thought they would change in the UI. And so, I looked up the class in the documentation, and then I realized, oh, we're on an older version of the Tailwind Rails gem. So, maybe we're using...like, I'm looking at the most recent docs for Tailwind, but it's not relevant for the version I'm using. Turned out that was not the problem. Then I decided to use the Web Inspector and actually look at the element in my browser to see is it being overwritten somehow by something else? And the class is there in the element, but when I look at the CSS panel, it does not show up there at all or having any effects. And that got me scratching my head. And then, eventually, I figured it out, and it's a bit of a facepalm moment [laughs]. STEPHANIE: Oh, okay. JOËL: Because Tailwind has to, effectively, generate all of these, and it will sort of generate and prune the things you don't need and all of that. They're not all, like, statically present. And so, if I was using a class that no one else in the app had used yet, it hadn't gotten generated. And so, it's just not there. There's a class on the element, but there's no CSS definition tied to it, so the class does nothing. What you need to do is there's a rake task or some sort of task that you can run that will generate things. There's also, I believe, a watcher that you can run, some sort of, like, server that will auto-generate these for you in dev mode. I did not have that set up. So, I was not seeing that new class have any effect. Once I ran the task to generate things, sure enough, it worked. And Tailwind works exactly how the docs say they do. But that was a couple of hours of my life that I'm not getting back. STEPHANIE: Yeah, that's rough. Sorry to hear. I've also definitely gone down that route of like, oh, it's not in the docs. The docs are wrong. Like, do they even know what they're talking about? 
I'm going to fix this for everyone. And similarly have been humbled by a facepalm solution when I'm like, oh, did I yarn [laughs]? No, I didn't [laughs]. JOËL: Uh-huh. I'm curious, for you, when you have sort of moments where it's like the library is not behaving the way you think it is, is your default to blame yourself, or is it to blame the library? STEPHANIE: [laughs]. Oh, good question. JOËL: And the follow-up to that is, are you generally correct? STEPHANIE: Yeah. Yep, yep, yep. Hmm, I will say I externalize the blame, but I will try to at least do, like, the basic troubleshooting steps of restarting my server [laughter], and then if...that's as far as I'll go. And then, I'll be like, oh, like, something must be wrong, you know, with this library, and I turn to Google. And if I'm not finding any fruitful results, again, you know, one path could be, oh, maybe I'm not Googling correctly, but the other path could be, maybe I've discovered something that no one else has before. But to your follow-up question, I'm almost, like, always wrong [laughter]. I'm still waiting for the day when I, like, discover something that is an actual real problem, and I can go and open an issue [chuckles] and, hopefully, be validated by the library author. JOËL: I think part of what I heard is that your debugging strategy is basic, but it's not as basic as Joël's because you remember to restart the server [chuckles]. STEPHANIE: We all have our days [laughter]. JOËL: Next time. So, Stephanie, what is new in your world? STEPHANIE: I'm very excited to share this with you. And I recognize that this is an audio medium, so I will also describe the thing I'm about to show you [laughs]. JOËL: Oh, this is an object. STEPHANIE: It is an object. I got a hat [laughs]. JOËL: Okay. STEPHANIE: I'm going to put it on now. It's a cap that says "Thinking" on it [laughs] in, like, you know, fun sans serif font with a little bit of edge because the thinking is kind of slanted. So, it is designy, if you will. It's my thinking cap. And I've been wearing it at work all week, and I love it. As a person who, in meetings and, you know, when I talk to people, I have to process before I respond a lot of the time, but that has been interpreted as, you know, maybe me not having anything to say or, you know, people aren't sure if I'm, you know, still thinking or if it's time to move on. And sometimes I [chuckles], you know, take a long time. My brain is just spinning. I think another funny hat design would be, like, the beach ball, macOS beach ball. JOËL: That would be hilarious. STEPHANIE: Yeah. Maybe I need to, like, stitch that on the back of this thinking cap. Anyway, I've been wearing it at work in meetings. And then, when I'm just silently processing, I'll just point to my hat and signal to everyone what's [laughs] going on. And it's also been really great for the end of my work day because then I take off the hat, and because I've taken it off, that's, like, my signal, you know, I have this physical totem that, like, now I'm done thinking about work, and that has been working. JOËL: Oh, I love that. STEPHANIE: Yeah, that's been working surprisingly well to kind of create a bit more of a boundary to separate work thoughts and life thoughts. JOËL: Because you are working from home and so that boundary between professional life and personal life can get a little bit blurry. STEPHANIE: Yeah. I will say I take it off and throw it on the floor kind of dramatically [laughter] at the end of my work day. So, that's what's new. 
It had a positive impact on my work-life balance. And yeah, if anyone else has the problem of people being confused about whether you're still thinking or not, recommend looking into a physical thinking cap. JOËL: So, you are speaking at RailsConf this spring in Detroit. Do you plan to bring the thinking cap to the conference? STEPHANIE: Oh yeah, absolutely. That's a great idea. If anyone else is going to RailsConf, find me in my thinking cap [laughs]. JOËL: So, this is how people can recognize Bikeshed co-host Stephanie Minn. See someone walking around with a thinking cap. STEPHANIE: Ooh. thinkingbot? JOËL: Ooh. STEPHANIE: Have I just designed new thoughtbot swag [laughter]? We'll see if this catches on. JOËL: So, we were talking recently, and you'd mentioned that you were facing some really interesting dilemmas when it came to writing tests and particularly how tests interact with your test database. STEPHANIE: Yeah. So, I recently, a few weeks ago, joined a new client project and, you know, one of the first things that I do is start to run those tests [laughs] in their codebase to get a sense of what's what. And I noticed that they were taking quite a long time to get set up before I even saw any progress in terms of successes or failures. So, I was kind of curious what was going on before the examples were even run. And when I tailed the logs for the tests, I noticed that every time that you were running the test suite, it would truncate all of the tables in the test database. And that was a surprise to me because that's not a thing that I had really seen before. And so, basically, what happens is all of the data in the test database gets deleted using this truncation strategy. And this is one way of ensuring a clean slate when you run your tests. JOËL: Was this happening once at the beginning of the test suite or before every test? STEPHANIE: It was good that it was only running once before the test suite, but since, you know, in my local development, I'm running, like, a file at a time or sometimes even just targeting a specific line, this would happen on every run in that situation and was just adding a little bit of extra time to that feedback loop in terms of just making sure your code was working if that's part of your workflow. JOËL: Do you know what version of Rails this project was in? Because I know this was popular in some older versions of Rails as a strategy. STEPHANIE: Yeah. So, it is Rails 7 now, recently upgraded to Rails 7. It was on Rails 6 for a little while. JOËL: Very nice. I want to say that truncation is generally not necessary as of Rails...I forget if it's 5 or 6. But back in the day, specifically for what are now called system tests, the sort of, like, Capybara UI-driven browser tests, you had, effectively, like, two threads that were trying to access the database. And so, you couldn't have your test data wrapped in a transaction the way you would for unit tests because then the UI thread would not have access to the data that had been created in a transaction just for the test thread. And so, people would use tools like Database Cleaner to use a truncation strategy to clear out everything between tests to allow a sort of clean slate for these UI-driven feature specs. And then, I want to say it's Rails 5, it may have been Rails 6 when system tests were added. And one of the big things there was that they now could, like, share data in a transaction instead of having to do two separate threads and one didn't have access to it. 
And all of a sudden, now you could go back to transactional fixtures the way that you could with unit tests and really take advantage of something that's really nice and built into Rails. STEPHANIE: That's cool. I didn't know that about system tests and that kind of shift happening. I do think that, in this case, it was one of those situations where, in the past, the database truncation, in this case, particular using the Database Cleaner gem was necessary, and that just never got reassessed as the years went by. JOËL: That's one of the classic things, right? When you upgrade a Rails app over multiple versions, and sometimes you sort of get a new feature that comes in for free with the new version, and you might not be aware of it. And some of the patterns in the app just kind of keep going. And you don't realize, hey, this part of the app could actually be modernized. STEPHANIE: So, another interesting thing about this testing situation is that I learned that, you know, if you ran these tests, you would experience this truncation strategy. But the engineering team had also kind of played around with having a different test setup that didn't clean the database at all unless you opted into it. JOËL: So, your test database would just...each test would just keep writing to the database, but they're not wrapped in transactions. Or they are wrapped in transactions, but you may or may not have some additional data. STEPHANIE: The latter. So, I think they were also using the transaction strategy there. But, you know, there are some reasons that you would still have some data persisted across test runs. I had actually learned that the use transactional fixtures config for RSpec doesn't roll back any data that might have been created in a before context hook. JOËL: Yep, or a before all. Yeah, the transaction wraps the actual example, but not anything that happens outside of it. STEPHANIE: Yeah, I thought that was an interesting little gotcha. So, you know, now we had these, like, two different ways to run tests. And I was chatting with a client developer about how that came to be. And we then got into an interesting conversation about, like, whether or not we each expect a clean database in the first place when we write our tests or when we run our tests, and that was an area that we disagreed. And that was cool because I had not really, like, thought about like, oh, how did I even arrive at this assumption that my database would always be clean? I think it was just, you know, from experience having only worked in Rails apps of a certain age that really got onto the Database [laughs] Cleaner train. But it was interesting because I think that is a really big assumption to make that shapes how you then approach writing tests. JOËL: And there's kind of a couple of variations on that. I think the sort of base camp approach of writing Rails with fixtures, you just sort of have, for the most part, an existing set of data that's there that you maybe layer on a few extra things on. But there's base level; you just expect a bunch of data to exist in your test database. So, it's almost going off the opposite assumption, where you can always assume that certain things are already there. Then there's the other extreme of, like, you always assume that it's empty. And it sounds like maybe there's a position in the middle of, like, you never know. There may be something. There may not be something, you know, spin the wheel. STEPHANIE: Yeah. 
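As a hedged illustration of the gotcha mentioned above: records created in a before(:context) (also written before(:all)) hook sit outside the per-example transaction, so RSpec will not roll them back automatically. The Account model here is made up.

```ruby
RSpec.describe Account do
  before(:context) do
    # Runs once, outside the per-example transaction, so this row survives
    # the test run unless it is cleaned up explicitly.
    @shared_account = Account.create!(name: "shared")
  end

  after(:context) do
    # Manual cleanup, because transactional rollback does not cover this data.
    @shared_account.destroy
  end

  it "sees the shared record" do
    # Anything created inside an example is wrapped in a transaction and
    # rolled back automatically after the example finishes.
    expect(@shared_account).to be_persisted
  end
end
```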
I guess I was surprised that it, you know, that was just a question that I never really asked myself prior to this conversation, but it could feel like different testing philosophies. But yeah, I was very interested in this, you know, kind of opinion that was a little bit different from mine about if you assume that your database, your test database, is not clean, that kind of perhaps nudges you in the direction of writing tests that are less coupled to the database if they don't need to be. JOËL: What does coupling to the database mean in this situation? STEPHANIE: So, I'm thinking about Rails tests that might be asserting on a change in database behavior, so the change matcher in RSpec is one that I see maybe sometimes used when it doesn't need to be used. And we're expecting, like, account to have changed the count of the number of records on it for a model have changed after doing some work, right? JOËL: And the change matcher from RSpec is one that allows you to not care whether there are existing records or not. It sort of insulates you from that. STEPHANIE: That's true. Though I guess I was thinking almost like, what if there was some return value to assert on instead? And would that kind of help you separate some side effects from methods that might be doing too much? And kind of when I start to see tests that have both or are asserting on something being returned, and then also something happening, that's one way of, like, figuring out what kind of coupling is going on inside this test. JOËL: It's the classic command-query separation principle from object-oriented design. STEPHANIE: I think another one that came to mind, another example, especially when you're talking about system tests, is when you might be using Capybara and you end up...maybe you're going through a flow that creates a record. But from the user perspective, they don't actually know what's going on at the database level. But you could assert that something was created, right? But it might be more realistic at that level of abstraction to be asserting some kind of visual element that had happened as a result of the flow that you're testing. JOËL: Yeah. I would, in fact, go so far as to say that asserting on the state of your database in a system test is an anti-pattern. System tests are sort of, by design, meant to be all about user behavior trying to mimic the experience of a user. And a user of a website is not going to be able to...you hope they're not able to SSH into [chuckles] your database and check the records that have been created. If they can, you've got another problem. STEPHANIE: I wonder if you could take this idea to the extreme, though. And do you think there is a world where you don't really test database-level concerns at all if you kind of believe this idea that it doesn't really matter what the state of it should be? JOËL: I guess there's a few different things on, like, what it matters about the state of it because you are asserting on its state sort of indirectly in a sort of higher level integration test. You're asserting that you see certain things show up on the screen in a system test. And maybe you want to say, "I do certain tasks, and then I expect to see three items in an unordered list." Those three items probably come from the database, although, you know, you could have it where they come from an API or something like that. So, the database is an implementation level. 
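For reference, a small sketch of the coupling distinction being discussed; CreateOrder and its result object are hypothetical stand-ins.

```ruby
# Asserting on a database side effect with RSpec's change matcher:
expect { CreateOrder.call(params) }.to change(Order, :count).by(1)

# Asserting on a return value instead, which says nothing about persistence:
result = CreateOrder.call(params)
expect(result.total).to eq(42)
```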
But if you had random data in your database, you might, in some tests, have four items in the list, some tests have five. And that's just going to be a flaky test, and that's going to be incredibly painful. So, while you're not asserting on the database, having control over it during sort of test setup, I think, does impact the way you assert. STEPHANIE: Yeah, that makes sense. I was suddenly just thinking about, like, how that exercise can actually tell you perhaps, like, when it is important to, in your test setup, be persisting real records as opposed to how much you can get away with, like, not interacting with it because, like, you aren't testing at that integration level. JOËL: That brings up a good point because a lot of tests probably you might need models, but you might not need persisted models to interact with them, if you're testing a method on a model that just does things based off its internal state and not any of the ActiveRecord database queries, or if you have some other service or something that consumes a model that doesn't necessarily need to query. There's a classic blog post on the thoughtbot blog about when you should not reuse. There's a classic blog post on the thoughtbot blog about when not to use FactoryBot. And, you know, we are the makers of FactoryBot. It helps set up records in your database for testing. And people love to use it all the time. And we wrote an article about why, in many cases, you don't need to create something into the database. All you need is just something in memory, and that's going to be much faster than using FactoryBot because talking to the database is expensive. STEPHANIE: Yeah, and I think we can see that in the shift from even, like, fixtures to factories as well, where test data was only persisted as needed and as needed in individual tests, rather than seeding it and having all of those records your entire test run. And it's cool to see that continuing, you know, that idea further of like, okay, now we have this new, popular tool that reduce some of that. But also, in most cases, we still don't need...it's still too much. JOËL: And from a performance perspective, it's a bit of a see-saw in that fixtures are a lot faster because they get inserted once at the beginning of your test run. So, a SQL execution at the beginning of a test run and then every test after that is just doing its thing: maybe creating a record inside of a transaction, maybe not creating any records at all. And so, it can be a lot faster as opposed to using FactoryBot where you're creating records one at a time. Every create call in a test is a round trip to the database, and those are expensive. So, FactoryBot tests tend to be more expensive than those that rely on fixtures. But you have the advantage of more control over what data is present and sort of more locality because you can see what has been created at the test level. But then, if you decide, hey, this is a test where I can just create records in memory, that's probably the best of all worlds in that you don't need anything created ahead with fixtures. You also don't need anything to be inserted using FactoryBot because you don't even need the database for this test. STEPHANIE: I'm curious, is that the assumption that you start with, that you don't need a persisted object when you're writing a basic unit test? JOËL: I think I will as much as possible try not to need to persist and only if necessary use persist records. 
There are strategies with FactoryBot that will allow you to also, like, build stubbed or just build in memory. So, there's a few different variations that will, like, partially do things for you. But oftentimes, you can just new up an object, and that's what I will often start with. In many cases, I will already know what I'm trying to do. And so, I might not go through the steps of, oh, new up an object. Oh no, I'm getting a I can't do the thing I need to do. Now, I need to write to the database. So, if I'm testing, let's say, an ActiveRecord scope that's filtering down a series of records, I know that's a wrapper around a database query. I'm not going to start by newing up some records and then sort of accidentally discovering, oh yeah, it does write to the database because that was pretty clear to me from the beginning. STEPHANIE: Yeah. Like, you have your mental shortcuts that you do. I guess I asked that question because I wonder if that is a good heuristic to share with maybe developers who are trying to figure out, like, should they create persisted records or, you know, use just regular instance in memory or, I don't know, even [laughs] use, like, a double [laughs]? JOËL: Yeah, I've done that quite a bit as well. I would say maybe my heuristic is, is the method under test going to need to talk to the database? And, you know, I may or may not know that upfront because if I'm test driving, I'm writing the test first. So, sometimes, maybe I don't know, and I'll start with something in memory and then realize, oh, you know, I do need to talk to the database for this. And this is for unit tests, in particular. For something more like an integration test or a system test that might require data in the database, system tests almost always do. You're not interacting with instances in memory when you're writing a system test, right? You're saying, "Given the database state is this when I visit this URL and do these things, this page reacts in such and such a way." So, system tests always write to the database to start with. So, maybe that's my heuristic there. But for unit tests, maybe think a little bit about does your method actually need to talk to the database? And maybe even almost give yourself a challenge. Can I get away with not talking to the database here? STEPHANIE: Yeah, I like that because I've certainly seen a lot of unit tests that are integration tests in disguise [laughs]. JOËL: Isn't that the truth? So, we kind of opened up this conversation with the idea of there are different ways to manage your database in terms of, do you clean or not clean before a test run? Where did you end up on this particular project? STEPHANIE: So, I ended up with a currently open PR to remove the need to truncate the database on each run of the test suite and just stick with the transaction for each example strategy. And I do think that this will work for us as long as we decide we don't want to introduce something like fixtures, even though that is actually also a discussion that's still in the works. But I'm hoping with this change, like, right now, I can help people start running faster tests [chuckles]. And should we ever introduce fixtures down the line, then we can revisit that. But it's one of those things that I think we've been living with this for too long [laughs]. And no one ever questioned, like, "Oh, why are we doing this?" Or, you know, maybe that was a need, however many years ago, that just got overlooked. 
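A brief sketch of the FactoryBot strategies Joël refers to just above; the :user factory is hypothetical.

```ruby
# In memory only: no SQL at all, and no id.
user = FactoryBot.build(:user)

# In memory with a stubbed id, so it quacks like a persisted record
# without ever touching the database.
user = FactoryBot.build_stubbed(:user)

# Actually persisted: one or more round trips to the database per call.
user = FactoryBot.create(:user)
```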
And as a person new to the project, I saw it, and now I'm doing something about it [laughs]. JOËL: I love that new person energy on a project and like, "Hey, we've got this config thing. Did you know that we didn't need this as of Rails 6?" And they're like, "Oh, I didn't even realize that." And then you add that, and it just moves you into the future a little bit. So, if I understand the proposed change, then you're removing the truncation strategy, but you're still going to be in a situation where you have a clean database before each test because you're wrapping tests in transactions, which I think is the default Rails behavior. STEPHANIE: Yeah, that's where we're at right now. So, yeah, I'm not sure, like, how things came to be this way, but it seemed obvious to me that we were kind of doing this whole extra step that wasn't really necessary, at least at this point in time. Because, at least to my knowledge [laughs], there's no data being seeded in any other place. JOËL: It's interesting, right? When you have a situation where this was sort of a very popular practice for a long time, a lot of guides mentioned that. And so, even though Rails has made changes that mean that this is no longer necessary, there's still a long tail of apps that will still have this that may be upgraded later, and then didn't drop this, or maybe even new apps that got created but didn't quite realize that the guide they were following was outdated, or that a best practice that was in their head was also outdated. And so, you have a lot of apps that will still have these sort of, like, relics of the past. And you're like, "Oh yeah, that's how we used to do things." STEPHANIE: So yeah, thanks, Joël, for going on this journey with me in terms of, you know, reassessing my assumptions about test databases. I'm wondering, like, if this is common, how other people, you know, approach what they expect from the test database, whether it be totally clean or have, you know, any required data for common flows and use cases of your system. But it does seem that little in between of, like, maybe it is using transactions to reset for each example, but then there's also some persistence that's happening somewhere else that could be a little tricky to manage. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeeeee!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Rails with Jason
216 - Andy Croll, Co-Chair of RailsConf 2024

Rails with Jason

Play Episode Listen Later Mar 26, 2024 37:49


In this episode I talk with Andy Croll about Brighton Ruby Conference, RailsConf, and why attending a conference is an investment in your career.
- RailsConf
- Brighton Ruby Conference
- Andy Croll on Twitter
- Andy Croll on Mastodon
- AndyCroll.com
- First Ruby Friend
- CoverageBook

Remote Ruby
RailsConf 2024 with Ufuk Kayserilioglu

Remote Ruby

Play Episode Listen Later Mar 21, 2024 47:27


Today's episode features a detailed discussion about the upcoming RailsConf 2024, its programming, and significant updates in the Ruby community, particularly regarding Ruby Central's contributions. Jason, Chris, and Andrew dive into a conversation with guest Ufuk Kayserilioglu, Engineering Manager at Shopify's Ruby Infrastructure Team, who recently joined the board of Ruby Central and co-chairs RailsConf 2024. Ufuk shares insights on the planned enhancements for the conference to make it more practical and focused on Rails. He also highlights the formation of the Ruby Developer Experience team at Shopify, aimed at improving developer experiences within the Ruby ecosystem. The conversation further dives into the financial support for Ruby's open source projects, such as RubyGems.org, and the efforts to sustain and secure Ruby's infrastructure. The conversation wraps up with details on RailsConf, an open invitation for community interaction, and a teaser for special experiences awaiting in-person attendees. Press download now to hear more!

Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

The Bike Shed
419: What's New in Your World? (Extended Edition)

The Bike Shed

Play Episode Listen Later Mar 19, 2024 37:13


Stephanie introduces her ideal setup for enjoying coffee on a bike ride. Joël describes his afternoon tea ritual. Exciting news from the hosts: both have been accepted to speak at RailsConf! Stephanie's presentation, titled "So, Writing Tests Feels Painful. What now?" aims to tackle the issues developers encounter with testing while offering actionable advice to ease these pains. Joël's session will focus on utilizing Turbo to create a Dungeons & Dragons character sheet, combining his passion for gaming with technical expertise. Their conversation shifts to artificial intelligence and its potential in code refactoring and other applications, such as enhancing the code review process and solving complex software development problems. Joël shares his venture into combinatorics, illustrating how this mathematical approach helped him efficiently refactor a database query by systematically exploring and testing all potential combinations of query segments. Transcript:  JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn, and together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: So, today I went out for a coffee on my bike, and I feel like I finally have my perfect, like, on-the-go coffee setup. We have this thoughtbot branded travel mug. So, it's one of the little bits of swag that we got from the company. It's, like, perfectly leak-proof. I'll link the brand in the show notes. But it's perfectly leak-proof, which is great. And on my bike, I have a little stem bag, so it's just, like, a tiny kind of, like, cylindrical bag that sits on the, like, vertical part of my handlebars that connects to the rest of my bag. And it's just, like, the perfect size for a 12-ounce coffee. And so, I put my little travel mug in there, and I just had a very refreshing morning. And I'd gone out on my bike for a little bit, stopping by for coffee and headed home to work. And I got to drink my coffee during my first meeting. So, it was a wonderful way to start the day. JOËL: Do you just show up at the coffee shop with your refillable mug and say, "Hey, can you pour some coffee in this?" STEPHANIE: Yeah. I think a lot of coffee places are really amenable to bringing your own travel mugs. So yeah, it's really nice because I get to use less plastic. And also, you know, when you get a to-go mug, it is not leak-proof, right? It could just slosh all over the place and spill, so not bike-friendly. But yeah, bring your own mug. It's very easy. JOËL: Excellent. STEPHANIE: So, Joël, what's new in your world? JOËL: Also, warm beverages. Who would have thought? It's almost like it's cold in North America or something. I've been really enjoying making myself tea in the afternoons recently. And I've been drinking this brand of tea that is a little bit extra. Every flavor of tea they have comes with a description of how the tea feels. STEPHANIE: Ooh. JOËL: I don't know who came up with these, but they're kind of funny. So, one that I particularly enjoy is described as feels like stargazing on an empty beach. STEPHANIE: Wow. That's very specific. JOËL: They also give you tasting notes. This one has tastes of candied violet, elderberry, blackberry, and incense. STEPHANIE: Ooh, that sounds lovely. Are you drinking, like, herbal tea in the afternoon, or do you drink caffeinated tea? JOËL: I'll do caffeinated tea. 
I limit myself to one pot of coffee that I brew in the morning, and then, whenever that's done, I switch to tea. Tea I allow myself anything: herbal, black tea; that's fine. STEPHANIE: Yeah, I can't have too much caffeine in the afternoon either. But I do love an extra tea. I wish I could remember, like, what even was in this tea or what brand it was, but once I had a tea that was a purplish color. But then, when you squeeze some lemon in it, or I guess maybe anything with a bit of acid, it would turn blue. JOËL: Oh, that's so cool. STEPHANIE: Yeah, I'll have to find what this tea was [laughs] and update the podcast for any tea lovers out there. But yeah, it was just, like, a little bit of extra whimsy to your regular routine. JOËL: I love adding a little whimsy to my day, even if it's just seeing a random animated GIF that a coworker has sent or Tuple has some of the, like, reactions you can send if you're pairing with someone. And I don't use those very often, so whenever one of those comes through, and it's like, ship it or yay, that makes me very happy. STEPHANIE: Agreed. JOËL: This week is really fun because as we were prepping for this episode, we both realized that there is a lot that's been new in our world recently. And Stephanie, in particular, you've got some pretty big news that recently happened to you. STEPHANIE: Yeah, it turns out we're making the what's new in your world segment the entire episode today [laughs]. But my news is that I am speaking at RailsConf this year, so that is May 7th through 9th in Detroit. And so, yeah, I haven't spoken at a RailsConf before, only a RubyConf. So, I'm looking forward to it. My talk is called: So, Writing Tests Feels Painful. What now? JOËL: Wait, is writing tests ever painful [laughs]? STEPHANIE: Maybe not for you, but for the rest of us [laughs]. JOËL: No, it absolutely is. I, right before this recording, came from a pairing session where we were scratching our heads on an, like, awkward-to-write test. It happens to all of us. STEPHANIE: Yeah. So, I was brainstorming topics, and I kind of realized, especially with a lot of our consulting experience, you know, we hear from developers or even maybe, like, engineering managers a lot of themes around like, "Oh, like, development is slowing down because our test suite is such a headache," or "It's really slow. It's really flaky. It's really complicated." And that is a pain point that a lot of tech leaders are also looking to address for their teams. But I was really questioning this idea that, like, it always had to be some effort to improve the test suite, like, that had to be worked on at some later point or get, like, an initiative together to fix all of these problems, and that it couldn't just be baked into your normal development process, like, on an individual level. I do think it is really easy to feel a lot of pain when trying to write tests and then just be like, ugh, like, I wish someone would fix this, right? Or, you know, just kind of ignore the signals of that pain because you don't know, like, how to manage it yourself. So, my talk is about when you do feel that pain, really trying to determine if there's anything you can do, even in just, like, the one test file that you're working in to make things a little bit easier for yourself, so it doesn't become this, like, chronic issue that just gets worse and worse. Is there something you could do to maybe reorganize the file as you're working in it to make some conditionals a little bit clearer? 
Is there any, like, extra test setup that you're like, "Oh, actually, I don't need this anymore, and I can just start to get rid of it, not just for this one example, but for the rest in this file"? And do yourself a favor a little bit. So yeah, I'm excited to talk about that because I think that's perhaps, like, a skill that we don't focus enough on. JOËL: Are you going to sort of focus in on the side of things where, like, a classic TDD mantra is that test pain reflects underlying code complexity? So, are you planning to focus on the idea of, oh, if you're feeling test pain, maybe take some time to refactor some of the code that's under test, maybe because there's some tight coupling? Or are you going to lean a little bit more into maybe, like, the Boy Scout rule, you know, 'Leave the campsite cleaner than you found it' for your test files? STEPHANIE: Ooh, I like that framing. Definitely more of the former. But one thing I've also noticed working with a lot of client teams is that it's not always clear, like, how to refactor. I think a lot of intermediate developers start to feel that pain but don't know what to do about it. They don't know, like, maybe the code smells, or the patterns, or refactoring strategies, and that can certainly be taught. It will probably pull from that. But even if you don't know those skills yet, I'm wondering if there's, like, an opportunity to teach, like, developers at that level to start to reflect on the code and be like, "Hmm, what could I do to make this a little more flexible?" And they might not know the names of the strategies to, like, extract a class, but just start to get them thinking about it. And then maybe when they come across that vocabulary later, it'll connect a lot easier because they'll have started to think about, you know, their experiences day to day with some of the more conceptual stuff. JOËL: I really like that because I feel we've probably all heard that idea that test pain, especially when you're test driving, is a sign of maybe some anti-patterns or some code smells in the underlying code that you're testing. But translating that into something actionable and being able to say, "Okay, so my tests are painful. They're telling me something needs to be refactored. I'm looking at this code, and I don't know what to refactor." It's a big jump. It's almost the classic draw two circles; draw the rest of the owl meme. And so, I think bridging that gap is something that is really valuable for our community. STEPHANIE: Yeah, that's exactly what I hope to do in my talk. So, Joël, you [chuckles] also didn't quite mention that you have big news as well. JOËL: So, I also got accepted to speak at RailsConf. I'm giving a talk on Building a Dungeons & Dragons Character Sheet Using Turbo. STEPHANIE: That's really awesome. I'm excited because I want to learn more about Turbo. I want someone else to tell me [laughs] what I can do with it. And as a person with a little bit of Dungeons & Dragons experience, I think a character sheet is kind of the perfect vehicle for that. JOËL: Building a D&D character sheet has been kind of my go-to project to experiment with a new front-end framework because it's something that's pretty dynamic. And for those who don't know, there's a bunch of fields that you fill in with stats for different attributes that your character has, but then those impact other stats that get rendered. And sometimes there can be a chain two or three long where different numbers kind of combine together. 
And so, you've got this almost dependency tree of, like, a particular number. Maybe your skill at acrobatics might depend on a number that you entered in the dexterity field, but it also depends on your proficiency bonus, and maybe also depends on the race that you picked and a few other things. And so, calculating those numbers all of a sudden becomes not quite so simple. And so, I find it's a really fun exercise to build when trying out a new interactive front-end technology. STEPHANIE: Have you done this with a different implementation or a framework? JOËL: I've done this, not completely, but I've attempted some parts of a D&D character sheet, I think, with Backbone.js with Ember. I may have done an Angular one at some point in original Angular, so Angular 1. I did this with Elm. Somehow, I skipped React. I don't think I did React to build a D&D character sheet. And now I'm kind of moving a little bit back to the backend. How much can we get done just with Turbo? Or do we need to pull in maybe Stimulus? These are all things that are going to be really fun to demonstrate. STEPHANIE: Yeah. Speaking of injecting some whimsy earlier, I think it's kind of like just a little more fun than a regular to-do app, you know, or a blog to show how you can build, you know, something that people kind of understand with a different technology. JOËL: Another really fun thing that I've been toying with this week has been using AI to help me refactor code. And this has been using just sort of a classic chat AI, not a tool like Copilot. And I was dealing with a query that was really slow, and I wanted to restructure it in a different way. And I described to the AI how I wanted it to refactor and explicitly said, "I want this to be the same before and after." And I asked it to do the refactor, and it gave me some pretty disappointing results where it did some, like, a couple of really obvious things that were not that useful. And I was talking to a colleague about how I was really disappointed. I was thinking, well, AI should be able to do something better than this. And this colleague suggested changing the way I was asking for things and specifically asking for a step-by-step and asking it to prove every step using relational algebra, which is the branch of math that deals with everything that underlies relational databases, so the transformations that you would do where you keep everything the same, but you're saying, "Hey, these equations are all equivalent." And it sure did. It gave me a, like, 10-step process with all these, like, symbols and things. My relational algebra is not that strong, and so I couldn't totally follow along. But then I asked it to give me a code example, like, show me the SQL at every step of this transformation and at the end. And, you know, it all kind of looked all right. I've not fully tested the final result it gave me to see if it does what it says on the tin. But I'm cautiously optimistic. I think it looks very similar to something that I came up with on my own. And so, I'm somewhat impressed, at least, like, much better than things were in the beginning with that first round. So, I'm really curious to see where I can take this. STEPHANIE: Yeah, I think that's cool that you were able to prompt it differently and get something more useful. 
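To picture the chain of derived numbers Joël describes in the character-sheet discussion, here is a tiny hypothetical sketch of one stat that depends on several inputs.

```ruby
# Standard 5e-style ability modifier, derived from the raw ability score.
def ability_modifier(score)
  (score - 10) / 2
end

# Acrobatics depends on the dexterity modifier plus, when proficient,
# the character's proficiency bonus.
def acrobatics_bonus(dexterity:, proficiency_bonus:, proficient: false)
  bonus = ability_modifier(dexterity)
  bonus += proficiency_bonus if proficient
  bonus
end

acrobatics_bonus(dexterity: 14, proficiency_bonus: 2, proficient: true) # => 4
```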
One of the reasons why I personally have been a little bit hesitant to get into the large language models is because I would love to see the AI show its work, essentially, like, tell me a little bit more about how it got from question to answer. And I thought that framing of kind of step-by-step show me code was a really interesting way, even to just, like, get some different results that do the same thing. But you can kind of evaluate that a little bit more on your own rather than just using that first result that it gave you that was like, eh, like, I don't know if this really did anything for me. So, it would be cool, even if you don't end up using, like, the final one, right? If something along the way also is an improvement from what you started with that would be really interesting. JOËL: Honestly, I think you kind of want the same thing if you're chatting with an AI chatbot or having a conversation in Slack with a colleague. They're just like, "Hey, can you help me refactor this?" And then a sort of, like, totally different chunk of code. And it's just like, "Trust me, it works." STEPHANIE: [laughs]. JOËL: And maybe it does. Maybe you plug it into your codebase and run the tests against it, and the tests are still green. And so, you trust that it works, but you don't really understand where it came from. That doesn't always feel good, even when it comes from a human. So, what I've appreciated with colleagues has been when they've given me a step-by-step. Sometimes, they give me the final product. They just say, "Hey. Try this. Does this work?" Plug it in to the test. It does pass. It's green. Great. "Tell me what black magic you did to get to that." And then they give me the step-by-step and it's like, oh, that's so good because not only do I get a better understanding of what happens at every step, but now I'm equipped the next time I run into this problem to apply the same technique to figure it out on my own. STEPHANIE: Yeah. And I liked, also, that relational algebra pro tip, right? It kind of ensures that what you're getting makes sense or is equivalent along the way [laughs]. JOËL: We think, right? I don't know enough relational algebra to check its work. It is quite possible that it is making some subtle mistakes along the way, or, like, making inferences that it shouldn't be. I'm not going to say I trust that. But I think, specifically, when asking for SQL transformations, prompting it to do so using relational algebra in a step-by-step way seemed to be a way to get it to do something more reliably or at least give more interesting results. STEPHANIE: Cool. JOËL: I was interested in trying this out in part because I've been more curious about AI tools recently, and also because we're hoping to do a deeper dive into AI on a Bike Shed episode at some point later, so very much still in the gathering information phase. But this was a really cool experience. So, having an AI refactor a query for me using relational algebra, definitely something that's new in my world this week. STEPHANIE: Speaking of refactoring and this idea of making improvements to your code and trying to figure out how to get from what you currently have to something new, I have been thinking a lot about how to make code reviews more actionable. And that's because, on my current client project, our team is struggling a little bit with code reviews, especially when you kind of want to give feedback on more of a design change in the code or thinking about some different abstractions. 
I have found that that is really hard to communicate async and also in a, like, a GitHub code review format where you can really just comment, like, line by line. And I've found that, you know, when someone is leaving feedback, that's like, "I'm having a hard time reading this. And I'm imagining that we could organize the code a bit differently in these three different layers or abstractions," there's a lot of assumptions there, right [laughs]? That your message is being communicated to the author and that they are able to, like, visualize, or have a mental model for what you're explaining as well. And then kind of what I've been seeing in this dynamic is, like, not really knowing what to do with that and to kind of just, like, I don't know where to go from here. So, I guess the next step is just to, like, merge it. Is that something you've experienced before or encountered when it comes to feedback? JOËL: Broader changes are often challenging to explain, especially when they're...sometimes you get so abstract you can just write a quick paragraph. And sometimes it's like, hey, what if we, like, totally change our approach? I've definitely done the thing where I'll just ping someone and say, "Hey, can we talk about this synchronously? Can we get on a call and have a deeper conversation?" How do you tend to approach if you're not going to hop on a call with someone and, like, have a 20 or 30-minute conversation? How do you approach doing that asynchronously on a pull request? Are you the type of person to put, like, a ton of, like, code blocks, like, "Here's what I was thinking. We could instead have this class and this thing"? And, like, pretty soon, it's, like, a page and a half of text. Or do you have another approach that you like to use? STEPHANIE: Yeah. And I think that's where it can get really interesting. Because my process is, I'll usually just start commenting and maybe if I'm seeing some things that can be done differently. If it's not just, like, a really obvious change that I could just use English to describe, I'll add a little suggested change. But I also don't want to just rewrite this person's code [laughs] in a code review. JOËL: That's the challenge, right? STEPHANIE: Yeah. And I've definitely seen that be done before, too. Once I notice I'm at, like, four plus comments, and then they're not just, like, nitpicks about, like, syntax or something like that, that helps me clue into the idea that there is some kind of bigger change that I might be asking of the author. And I don't want to overwhelm them with, like, individual comments that really are trying to convey something more holistic. JOËL: Right. I wonder if having a, like, specialized yet more abstract language is useful for these sorts of things where a whole paragraph in English or, you know, a ton of code examples might be a bit much. If you're able to say something like, "Hey, how would you feel about using a strategy pattern approach here instead of, you know, maybe a template object or some custom thing that we've built here?" that allows us to say a lot in a fairly sort of terse way. And it's the thing that you can leave more generically on the PR instead of, like, individually commenting in a bunch of places. And that can start a broader conversation at more of an architecture level. STEPHANIE: Yes, I really like that. That's a great idea. I would follow that up with, like, I think at the end of the day, there are some conversations that do need to be had synchronously. 
And so, I like the idea of leaving a comment like that and just kind of giving them resources to learn what a strategy pattern is and then offering support because that's also a way to shorten that feedback loop of trying to communicate an idea. And I like that it's kind of guiding them, but also you're there to add some scaffolding if it ends up being, like, kind of a big ask for them to figure out what to do. JOËL: There's also oftentimes, I think, a tone thing to manage where, especially if there's a difference in seniority or experience between the two people, it can be very easy for something to come across as an ask or a demand rather than a like, "Hey, let's think about some alternatives here." Or, like, "I have some concerns with your implementation. Let's sort of broadly explore some possible alternatives. Maybe a strategy pattern works." But the person reading that who wrote the original code might be, like, receiving that as "Your code is bad. You should have done a strategy pattern instead." And that's not the conversation I want to have, right? I want to have a back-and-forth about, "Hey, what are the trade-offs involved? Do you have a third architecture you'd like to suggest?" And so, that can be a really tricky thing to avoid. STEPHANIE: Yeah, I like that what you're saying also kind of suggested that it's okay if you don't have an idea yet for exactly how it should look like. Maybe you just are like, oh, like, I'm having a hard time understanding this, but I don't think just leaving it at that gives the author a lot to go on. I think there's something to it about maybe the action part of actionable is just like, "Can you talk about it with me?" Or "Could you explain what you're trying to do here?" Or, you know, leave a comment about what this method is doing. There's a lot of ways, I think, that you can reach some amount of improvement, even if it doesn't end up being, like, the ideal code that you would write. JOËL: Yes. There's also maybe a distinction in making it actionable by giving someone some code and saying, "Hey, you should copy-paste this code and make that..." or, you know, use a GitHub suggested code or something, which works on the small. And in the big, you can give some maybe examples and say, "Hey, what if you refactored in this way?" But sometimes, you could even step back and let them do that work and say, "Hey, I have some concerns with the current architecture. It's not flexible in the ways that we need to be flexible. Here's my understanding of the requirements. And here's sort of how I see maybe this architecture not working with that. Let's think of some different ways we could approach this problem." And oftentimes, it's nice to give at least one or two different ideas to help start that. But it can be okay to just ask the person, "Hey, can you come up with some alternate implementations that would fulfill these sets of requirements?" STEPHANIE: Yeah, I like that. And I can even see, like, maybe you do that work, and you don't end up pursuing it completely in addressing that feedback. But even asking someone to do the exercise itself, I think, can then spark new ideas and maybe other improvements. 
In general, I like to think about...I'm a little hesitant to use this metaphor because I'm not actually giving code, like, letter grades when I review them, but the idea that, like, not all code has to get, like, an A [chuckles], but maybe getting it, like, from one letter grade up to, like, half a letter grade, like, higher, that is valuable, even if it's not always practical to go through multiple rounds of code review. And I think just making it actionable enough to be a little bit better, like, that is, in my opinion, the sweet spot. JOËL: That's true. The sort of over-giving feedback to someone to try to get code perfect, rather than just saying, "Hey, can we make it slightly better?" And, you know, there are probably some minimum standards you need to hit. But at some point, it's a trade-off of like, how much time do we need to put polishing this versus shipping something? STEPHANIE: Yeah, and I think that it is cumulative over time, right? That's how people learn. Yeah, it's like one of the biggest opportunities for developers to level up is from that feedback. And that's why I think it's important that it's actionable because, you know, and you put the time into, like, giving that review, and it's not just to make sure the code works, but it's also, like, one of the touch points for collaboration. JOËL: So, if you had to summarize what makes code review comments actionable, do you have, like, top three tips that make a comment really actionable as opposed to something that's not helpful? Or maybe that's more of the journey that you're on, and you've not distilled it down to three pithy tips that you can put in a listicle. STEPHANIE: Honestly, I think it does kind of just distill down to one, which is for every comment, you should have an idea of what you would like the author to do about it. And it's okay if it's nothing, but then tell them that it's nothing. You could just be expressing, "I thought this was kind of weird, [laughs]" or "This is not my favorite thing, but it's okay." JOËL: And it can be okay for the thing you want the author to do. It doesn't have to be code. It could be a conversation. STEPHANIE: Yeah, exactly. It could be a conversation. It could be asking for information, too, right? Like, "Did you consider alternatives, and could you share them with me?" But that request portion, I think is really important because, yeah, I think there's so much miscommunication that can happen along the way. So, definitely still trying to figure out how to best support that kind of code review culture on my team. JOËL: This week's episode has been really fun because it's just been a combination of a lot of things that are new in our world, things that we've been trying, things that we've been learning. And kind of in an almost, like, a meta sense, one of the things I've been digging into is combinatorics, the branch of math that looks at how things combine and particularly how it works with combining a bunch of ActiveRecord query fragments where there's potential branching, so things like doing a union of two sort of sub queries or doing an or where you're combining two different where queries and trying to figure out what are the different paths through that. STEPHANIE: Wow, what a great way to combine what we were talking about, Joël [laughs]. Did you apply combinatorics to this podcast episode [laughs]? JOËL: Somehow, topics multiply with each other, something, something. STEPHANIE: Yeah, that makes sense to me [laughs]. Okay. 
Will you tell me more about what you've been using it for in your queries? JOËL: So, one thing I'm trying to do is because I've got these different branching paths through a query, I want to see sort of all the different ways because these are defined as ActiveRecord scopes, and I'm chaining them together. And it looks linear because I'm calling scope1 dot scope2 dot scope3. But each of those have branches inside of them. And so, there's all these different ways that data could get used or not. And one way that I figured out, like, what are the different paths here, was actually drawing out a matrix, just putting together a table. In this case, I had two scopes, each of which had a two-way branch inside, and so I made a two by two matrix. And that gave me all of the combinations of, oh, if you go down one branch in one scope and down another branch in the other scope. And what I went through is then I went in each square and filled in how many records I would expect to get back from the query from some basic set that I was working on in each of these combinations. And one thing that was really interesting is that some of those combinations were sort of mutually exclusive, where a scope further down the line was filtering on the same field as an earlier one and would overwrite it or not overwrite it, but the two would then sort of you can't have both of those things be true at the same time. So, I'm looking for something that has a particular manager ID, and then I'm looking for something that has a particular different manager ID. And the way Rails combines these, if you just chain scopes with where, is to AND them together. There are no records that have both manager ID 1 and manager ID 2. You can only have one manager ID. And so, as I'm filling out my matrix, there's some sections I can just zero out and be like, wait, this will always return zero records. And then I can start focusing on the parts that are not zeroed out. So, I've got two or three squares. What's special about those? And that helped me really understand what the combination of these multiple query fragments together was actually trying to do as a holistic whole. STEPHANIE: Wow, yeah, that is really interesting because I hear you when you say it looks linear. And it would be really surprising to me for there to be branching paths. Like, that's not really what I think about when I think about SQL. But that makes a lot of sense that it could get so complicated that it's just impossible to get a certain kind of result. Like, what's going to be the outcome of applying combinatorics to this? Is there a refactoring opportunity, or is it really just to even understand what's going on? JOËL: So, this was a refactoring that I was trying to do, but I didn't really understand the underlying behavior of the chain of scopes. I just knew that they were doing some complex things that were inefficient from a SQL perspective. And so, I was looking at ways to refactor, but I also wanted to get a sense of what is this actually trying to do other than just chaining a bunch of random bits of code together? So, the matrix really helped for that. The other way that I used it was to write some tests because this query I was trying to refactor, this chain of scopes, was untested. And I wanted to write tests that were very thorough because I wanted to make sure that my refactor didn't break any edge cases. And I'm, you know, writing a few tests.
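
A minimal sketch of the scope-chaining behavior Joël describes above; the Project model, the scope names, and the manager_id column are all made up for illustration:

class Project < ApplicationRecord
  # Hypothetical scopes that each filter on the same column.
  scope :managed_by_alice, -> { where(manager_id: 1) }
  scope :managed_by_bob,   -> { where(manager_id: 2) }
end

# Chaining the scopes ANDs their conditions together:
#   SELECT "projects".* FROM "projects"
#   WHERE "projects"."manager_id" = 1 AND "projects"."manager_id" = 2
# No row can have both manager IDs, so this relation always returns zero
# records; these are exactly the squares of the matrix that can be zeroed out.
Project.managed_by_alice.managed_by_bob
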
Okay, well, here's a record that I definitely want to get returned by this query, and maybe here are a couple of records I don't want to get returned. And the more I was, like, going into this and trying to write test cases, the more I was finding more edge cases that I didn't want to and, oh, but what about this? And what about the combination of these things? And it got to the point where it was just messing with my mind. I was, like, confusing myself and really struggling to write tests that would do anything useful. STEPHANIE: Wow. Yeah. Honestly, I have already started to become a little bit suspicious of complex scopes, and this further pushes me in that direction [laughs] because yeah, once you start to...like, the benefit of them is that you can chain them, but it really hides a lot of the underlying behavior. So, you can easily just turn yourself around or, like, go, you know, kind of end up [laughs] in a little bit of a bind. JOËL: Definitely, especially once it grows a little bit harder to hold in your head. And I don't know exactly where that level is for me. But in this particular situation, I identified, I think, five different dimensions that would impact the results of this query. And then each dimension had maybe three or four different values that we might care about. And, eventually, I just took the time to write this out. So, I created five arrays and then just said, "Hey, here are the different managers that we care about. Here are the different project types we care about. Here are the different..." and we had, like, five of these, and each array had three or four elements in it. And then, in a series of nested loops, I iterated through all of these arrays and at the innermost loop, created the data that I wanted that matched that particular set of values. Now, we're often told you should not be doing things in nested loops because you end up sort of multiplying all of these together, but, in this case, this is actually what I wanted to do. You know, it turns out that I had a hundred-ish records I had to create to sort of create a data set that would be all the possible edge cases I might want to filter on. And creating them all by hand with all of the different variations was going to be too much. And so, I ended up doing this with arrays and nested loops. And it got me the data that I needed. And it gave me then the confidence to know that my refactor did indeed work the way I was expecting. STEPHANIE: Wow. That's truly hero's work [chuckles]. I'm, like, very excited because it sounds like that's a huge opportunity for some performance improvements as well. JOËL: For the underlying code, yes. The test might be a little bit slow because I'm creating a hundred records in the database. And you might say, "Oh, do you really need to do that? Can you maybe collapse some of these cases?" In this particular case, I really wanted to have high confidence that the refactor was not changing anything. And so, I was okay creating a hundred records over a series of nested iterations. That was a price I was willing to pay. The refactored query, it turns out, I was able to write it in a way that was significantly faster. STEPHANIE: Yeah, that's what I suspected. JOËL: So, I had to rewrite it in a way that didn't take advantage of all the change scopes. I had to just sort of write something custom from scratch, which is often the case, right? Performance and reusability sometimes fight against each other, and it's a trade-off. So, I'm not reusing the scopes. 
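
A rough sketch of the nested-loop test setup Joël describes, with hypothetical dimension names and FactoryBot-style create calls standing in for the real project's factories:

# Each array is one dimension the query filters on (names are invented).
managers   = [create(:manager), create(:manager), nil]
categories = [:internal, :client, :archived]
statuses   = [:active, :on_hold, :done]

managers.each do |manager|
  categories.each do |category|
    statuses.each do |status|
      # One record per combination, so every edge case the scopes could
      # filter on exists in the data set: 3 * 3 * 3 = 27 records here,
      # and roughly 100 once all five dimensions are included.
      create(:project, manager: manager, category: category, status: status)
    end
  end
end
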
I had to write something from scratch, but it's multiple hundreds of times faster. STEPHANIE: Wow. Yeah. That seems worth it for a slow test [laughs] for the user experience to be a lot better, especially when you just reach that level of complexity. And it's a really awesome strategy that you applied to figure that out. I think it's a very unique one [laughs]. That's for sure. JOËL: I've had an interest in sort of analytical tools to help me understand domain models, to help understand problems, to help understand code that I'm working with for a while now, and I think an understanding of combinatorics fits into that. And then, particular tools within that, such as drawing things out in a table, in a two by two matrix, or an end-by-end matrix to get something visual, that's a great tool for debugging or understanding a problem. Thinking of problems as data that exists in multiple dimensions and then asking about the cardinality of that set it's the kind of analysis I did a lot when I was modeling using algebraic data types in Elm. But now I've sort of taken some of the tools and analysis I use from that world into thinking about things like SQL records, things like dealing with data in Ruby. And I'm able to bring those tools and that way of thinking to help me solve some problems that I might struggle to solve otherwise. For any of our listeners who this, like, kind of piques their interest, combinatorics falls under a broader umbrella of mathematics called discrete math. And within that, there's a lot that I think is really useful, a lot of tools and techniques that we can apply to our day-to-day programming. We have a Bike Shed episode where we talked about is discrete math relevant to day-to-day programmers and what are the ways it's so? We'll link that in the show notes. I also gave a talk at RailsConf last year diving into that titled: The Math Every Programmer Needs. So, if you're looking for something that's accessible to someone who's not done a math degree, those are two great jumping-off points. STEPHANIE: Yeah. And then, maybe you'll start drawing out arrays and applying combinatorics to figure out your performance problems. JOËL: On that note, shall we wrap up? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeeeee!!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.

Remote Ruby
Andy Croll - Railsconf - Free Chicken

Remote Ruby

Play Episode Listen Later Feb 23, 2024 47:45


In this episode, we jump straight into a candid conversation with Jason, who humorously contemplates how to kick things off, earning him the title of “recovering podcaster” from Chris after a whirlwind month of Ruby discussions without him. We also have the charming Andy Croll back, ready to dive into opinions, insights, and personal stories. With RailsConf on the horizon, the conversation brings us to discussing Andy's role with Ruby Central and his efforts to revitalize the conference experience. As they navigate through conference planning challenges and the spirit of the community that defines the Ruby world, this episode promises a mix of laughter and encouragement for RailsConf attendees, and an enthusiastic invitation from Andy to join what is set to be a memorable and engaging event. Press download now to hear more!
Honeybadger
Honeybadger is an application health monitoring tool built by developers for developers.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Ruby for All
Mastering Rails Callbacks – Deciphering the Secrets of Active Record

Ruby for All

Play Episode Listen Later Feb 8, 2024 30:42


In this episode of Ruby for All, Andrew and Julie discuss the intricacies of callbacks in Active Record models. They talk about their experiences, the pros and cons of using callbacks, and the issues they faced. They also share some helpful use cases for callbacks, including user authentication, logging and auditing, custom slug generation, and the concept of “hooks.” Also, Andrew and Julie review their ways of dealing with callbacks testing and debugging in a Rails application. Press download now to hear more!

[00:02:32] Let's learn about “callbacks” in Rails as Andrew explains what they are and uses an example of a blog post to explain how a callback might function when saving a post.
[00:03:56] Julie inquires if Tiptap can be used in a browser for apps, and they discuss before-save callbacks in a post model and how they can be used to extract and save a title.
[00:06:19] Andrew elaborates on the three different types of callbacks: before, after, and around callbacks, and gives examples of each.
[00:10:06] They discuss practical uses for before-validation callbacks, such as setting default values.
[00:11:12] Andrew clarifies the concept of “hooks” in programming, comparing it to callbacks.
[00:12:18] Julie asks for examples of actions taken after validation versus before validation.
[00:13:19] Andrew talks about how, for a file upload, an after-create callback can be used for processing the file, such as generating thumbnails or updating related resources. He lists example use cases for callbacks like hashing passwords before saving to the database during user authentication, triggering email notifications after a comment is posted, and logging and auditing activities like user sign-ups or errors.
[00:15:57] Julie is curious about whether deleted accounts really remove all user data or just make it inaccessible, noting some services offer a soft delete option with a time window to recover the account. Andrew has not yet encountered the fallback log issue he set up but explains how before-destroy callbacks could be used to implement a time-based soft delete system.
[00:17:19] Andrew describes using before-create callbacks for generating custom slugs for blog posts automatically.
[00:17:54] Andrew recalls a discussion at RailsConf about the diverse opinions on using callbacks, with some developers strongly against them and others in favor. He acknowledges that while callbacks can simplify complex operations, they can also make debugging difficult and can become problematic if used excessively or inappropriately.
[00:23:00] Julie asks Andrew where he stands on the use of callbacks, and he positions himself in the middle, closer to using them when appropriate.
[00:25:16] Andrew emphasizes being cautious with callbacks and explains that callbacks are useful when certain actions need to happen automatically without explicit instruction every time a record is saved.
[00:27:40] Andrew discusses the challenges of testing callbacks, as they can require additional setup in tests and slow down the test suite. He concludes that while callbacks are an integral part of Rails, he advises against using them as the first solution and recommends weighing their pros and cons carefully.

Panelists:
Andrew Mason
Julie J.

Sponsors:
Honeybadger
GoRails

Links:
Andrew Mason X/Twitter
Andrew Mason Website
Julie J. X/Twitter
Julie J. Website
Active Record Callbacks
Tiptap

(00:00) - Intro and Topic Overview
(02:32) - Introduction to Callbacks in Rails
(03:56) - Discussing Before-Save Callbacks and Tiptap
(06:19) - Types of Callbacks: Before, After, and Around
(10:06) - Uses for Before-Validation Callbacks
(11:12) - Hooks vs Callbacks
(12:18) - Practical Use Cases for Callbacks
(15:57) - Soft Delete Options and Before-Destroy Callbacks
(17:19) - Generating Custom Slugs with Before-Create Callbacks
(17:54) - Diverse Opinions on Using Callbacks from RailsConf
(23:00) - Andrew's Not an Expert
(25:16) - Caution and Appropriate Use of Callbacks
(27:40) - Challenges of Testing Callbacks
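
For anyone new to the callback types covered in the episode, here is a minimal sketch of the before_save and before_create examples mentioned above; the Post model, its columns, and the extraction logic are hypothetical:

class Post < ApplicationRecord
  before_save   :extract_title   # runs before every INSERT and UPDATE
  before_create :generate_slug   # runs only before the initial INSERT

  private

  # e.g. pull a title out of the body if none was provided
  def extract_title
    self.title ||= body.to_s.lines.first&.strip
  end

  # e.g. build a URL-friendly slug from the title
  def generate_slug
    self.slug = title.to_s.parameterize
  end
end
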

Ruby on Rails Podcast
Episode 506: Unwinding Flakey Tests with Alan Ridlehoover & Fito von Zastrow

Ruby on Rails Podcast

Play Episode Listen Later Feb 7, 2024 32:19


Fito and Alan are frequent RubyConf and RailsConf speakers on topics ranging from software complexity to resolving flaky specs. They joined the show to discuss strategies for dealing with unreliable tests and complex code. Show Notes Cisco Meraki: Careers (https://meraki.com/careers) Alan Ridlehoover's LinkedIn (https://linkedin.com/in/aridlehoover) Fito von Zastrow's LinkedIn (https://linkedin.com/in/fitovz) Talks: The Secret Ingredient: How to Understand & Resolve Just about Any Flaky Test (https://youtu.be/zsGloAjneX0?si=CFR48kl3Ke0sShyh) A Brewer's Guide to Filtering Out Complexity & Churn (https://youtu.be/RJRSosxtzbU?si=zkha6RrUoIReuPT3) Resources: The Code Gardener (https://the.codegardener.com) First Try! Software (https://firsttry.software) Sponsors Honeybadger (https://www.honeybadger.io/) As an Engineering Manager or an engineer, too much of your time gets sucked up with downtime issues, troubleshooting, and error tracking. How can you spend more time shipping code and less time putting out fires? Honeybadger is how. It's a suite of monitoring tools specifically for devs. Get started today in as little as 5 minutes at Honeybadger.io (https://www.honeybadger.io/) with plans starting at free!

The Bike Shed
415: Codebase Calibration

The Bike Shed

Play Episode Listen Later Feb 6, 2024 30:54


Stephanie has a delightful and cute Ruby thing to share: Honeybadger, the error monitoring service, has created exceptionalcreatures.com, where they've illustrated and characterized various common Ruby errors into little monsters, and they're adorable. Meanwhile, Joël encourages folks to submit proposals for RailsConf. Together, Stephanie and Joël delve into the nuances of adapting to and working within new codebases, akin to aligning with a shared mental model or vision. They ponder several vital questions that every developer faces when encountering a new project: the balance between exploring a codebase to understand its structure and diving straight into tasks, the decision-making process behind adopting new patterns versus adhering to established ones, and the strategies teams can employ to assist developers who are familiarizing themselves with a new environment. Honeybadger's Exceptional Creatures (https://www.exceptionalcreatures.com/) RailsConf CFP coaching sessions (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform) HTTP Cats (https://http.cat/) Support and Maintenance Episode (https://bikeshed.thoughtbot.com/409) Transcript:  JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I have a delightful and cute Ruby thing to share I'd seen just in our internal company Slack. Honeybadger, the error monitoring service, has created a cute little webpage called exceptionalcreatures.com, where they've basically illustrated and characterized various common Ruby errors into little monsters [laughs], and I find them adorable. I think their goal is also to make it a really helpful resource for people encountering these kinds of errors, learning about them for the first time, and figuring how to triage or debug them. And I just think it's a really cool way of, like, making it super approachable, debugging and, you know, when you first encounter a scary error message, can be really overwhelming, and then Googling about it can also be equally [chuckles] overwhelming. So, I just really liked the whimsy that they kind of injected into something that could be really hard to learn about. Like, there are so many different error messages in Ruby and in Rails and whatever other libraries you're using. And so, that's kind of a...I think they've created a one-stop shop for, you know, figuring out how to move forward with common errors. And I also like that it's a bit of a collective effort. They're calling it, like, a bestiary for all the little creatures [laughs] that they've discovered. And I think you can, like, submit your own favorite Ruby error and any guidance you might have for someone trying to debug it. JOËL: That's adorable. It reminds me a little bit of HTTP status codes as cat memes site. It has that same energy. One thing that I think is really interesting is that because it's Honeybadger, they have stats on, like, frequency of these errors, and a lot of these ones are tied to...I think they're picking some of the most commonly surfaced errors. STEPHANIE: Yeah, there's little, like, ratings, too, for how frequently they occur, kind of just like, I don't know, Pokémon [laughs] [inaudible 02:31]. 
I think it's really neat that they're using something like a learning from their business or maybe even some, like, proprietary information and sharing it with the world so that we can learn from it. JOËL: I think one thing that's worth specifying as well is that these are specific exception classes that get raised. So, they're not just, like, random error strings that you see in the wild. They don't often have a whole lot of documentation around them, so it's nice to see a dedicated page for each and a little bit of maybe how this is used in the real world versus maybe how they were designed to be used. Maybe there's a line or two in the docs about, you know, core Ruby when a NoMethodError should be raised. How does NoMethodError actually get used, you know, in real life, and the exceptions that Honeybadger is capturing. That's really interesting to see. STEPHANIE: Yeah, I like how each page for the exception class, and I'm glad you made that distinction, is kind of, like, crowdsourced guidance and information from the community, so I think you could even, you know, contribute to it if you wanted. But yeah, just a fun, little website to bring you some delight when you're on your next head-smacking, debugging adventure [laughs]. JOËL: And I love that it brings some joy to the topic, but, honestly, I think it's a pretty good reference. I could see myself linking to this anytime I want to have a deeper discussion on exceptions. So, maybe there's a code review, and maybe I want to suggest that we raise a different error than the one that we're doing. I could see myself in that GitHub comment being like, "Oh, instead of, you know, raising an exception here, why don't we instead raise a NoMethodError or something like that?" And then link to the bestiary page. STEPHANIE: So, Joël, what's new in your world? JOËL: So, just recently, RailsConf announced their call for proposals. It's a fairly short period this year, only about three-ish weeks long. So, I've been really encouraging colleagues to submit and trying to be a resource for people who are interested in speaking at conferences. We did a Q&A session with a fellow thoughtboter, Aji Slater, who's also a former RailsConf speaker, about what makes for a good talk, what is it like to submit to a call for proposals, you know, kind of everything from the process from having an idea all the way to stage presence and delivering. And there's a lot of great questions that got asked and some good discussion that happened there. STEPHANIE: Nice. Yeah, I think I have noticed that you are doing a lot more to help, especially first-time speakers give their first conference talk this year. And I'm wondering if there's anything you've learned or any hopes and dreams you have for kind of the amount of time you're investing into supporting others. JOËL: What I'd like to see is a lot of people submitting proposals; that's always a great thing. And, a proposal, even if it doesn't get accepted, is a thing that you can resubmit. And so, having gone through the effort of building a proposal and especially getting it maybe peer-reviewed by some colleagues to polish your idea, I think is already just a really great exercise, and it's one that you can shop around. It's one that you can maybe convert into a blog post if you need to. You can convert that into some kind of podcast appearance. So, I think it's a great way to take an idea you're excited about and focus it, even if you can't get into RailsConf. STEPHANIE: I really like that metric for success. 
It reminds me of a writer friend I have who actually was a guest on the show, Nicole Zhu. She submits a lot of short stories to magazines and applications to writing fellowships, and she celebrates every rejection. I think at the end of the year, she, like, celebrates herself for having received, you know, like, 15 rejections or something that year because that meant that she just went for it and, you know, did the hard part of doing the work, putting yourself out there. And that is just as important, you know, if not more than whatever achievement or goal or the idea of having something accepted. JOËL: Yeah, I have to admit; rejection hurts. It's not a fun thing to go through. But I think even if you sort of make it to that final stage of having written a proposal and it gets rejected, you get a lot of value out of that journey sort of regardless of whether you get accepted or not. So, I encourage more people to do that. To any of our listeners who are interested, the RailsConf call for proposals goes through February 13th, 2024. So, if you are listening before then and are inspired, I recommend submitting. If you're unsure of what makes for a good CFP, RailsConf is currently offering coaching sessions to help craft better proposals. They have one on February 5th, one on February 6th, and one on February 7th, so those are also options to look into if this is maybe your first time and you're not sure. There's a signup form. We'll link to it in the show notes. STEPHANIE: So, another update I have that I'm excited to get into for the rest of the episode is my recent work on our support and maintenance team, which I've talked about on the show before. But for any listeners who don't know, it's a kind of sub-team at thoughtbot that is focused on helping maintain multiple client projects at a time. But, at this point, you know, there's not as much active feature development, but the work is focused on keeping the codebase up to date, making any dependency upgrades, fixing any bugs that come up, and general support. So, clients have a team to kind of address those things as they come up. And when I had last talked about it on the podcast, I was really excited because it was a bit of a different way of working. I felt like it was very novel to be, you know, have a lot of different projects and domains to be getting into. And knowing that I was working on this team, like, short-term and, you know, it may not be me in the future continuing what I might have started during my rotation, I thought it was really interesting to be optimizing towards, like, completion of a task. And that had kind of changed my workflow a bit and my process. JOËL: So, now that you've been doing work on the support and maintenance team for a while and you've kind of maybe gotten more comfortable with it, how are you generally feeling about this idea of sort of jumping into new codebases all the time? STEPHANIE: It is both fun and more challenging than I thought it would be. I tend to actually really enjoy that period of joining a new team or a project and exploring, you know, a codebase and getting up to speed, and that's something that we do a lot as consultants. But I think I started to realize that it's a bit of a tricky balance to figure out how much time should I be spending understanding what this codebase is doing? Like, how much of the application do I need to be understanding, and how much poking around should I be doing before just trying to get started on my first task, the first starter ticket that I'm given? 
There's a bit of a balance there because, on one hand, you could just immediately start on the task and kind of just, you know, have your blinders [chuckles] on and not really care too much about what the rest of the code is doing outside of the change that you're trying to make. But that also means that you don't have that context of why certain things are the way they are. Maybe, like, the way that you want to be building something actually won't work because of some unexpected complexity with the app. So, I think there, you know, needs to be time spent digging around a little bit, but then you could also be digging around for a long time [chuckles] before you feel like, okay, I finally have enough understanding of this new codebase to, like, build a feature exactly how a seasoned developer on the team might. JOËL: I imagine that probably varies a little bit based on the task that you're doing. So, something like, oh, we want to upgrade this codebase to Ruby 3.3, probably requires you to have a very different understanding of the codebase than there's a bug where submitting a comment double posts it, and you have to dig into that. Both of those require you to understand the application on very different levels and kind of understand different mental models of what the app is doing. STEPHANIE: Yeah, absolutely. That's a really good point that it can depend on what you are first asked to work on. And, in fact, I actually think that is a good guidepost for where you should be looking because you could develop a mental model that is just completely unrelated [chuckles] to what you're asked to do. And so, I suppose that is, you know, usually a good place to start, at least is like, okay, I have this first task, and there's some understanding and acceptance that, like, the more you work on this codebase, the more you'll explore and discover other parts of it, and that can be on a need to know kind of basis. JOËL: So, I'm thinking that if you are doing something like a Ruby upgrade or even a Rails upgrade, a lot of what you care about the app is going to be on a more mechanical level. So, you want to know what gems you're using. You want to know what different patterns are being used, maybe how callbacks are happening, any particular features that are version-specific that are being used, things like that. Whereas if you're, you know, say, fixing a bug, you might care a lot more about some of the product-level concerns. What are we actually trying to do here? What is the expected user experience? How does this deviate from that? What were the underlying mental models of the developers? So, there's almost, like, two lenses you can look at the code. Now, I almost want to make this a two-dimensional thing, where you can look at it either from, like, a very kind of mechanical lens or a product lens in one axis. And then, on the other axis, you could look at it from a very high-level 10,000-foot view and maybe zoom in a little bit where you need, versus a very localized view; here's where the bug is happening on this page, and then sort of zoom out as necessary. And I could see different sorts of tasks falling in different quadrants there of, do I need a more mechanical view? Do I need a more product-focused view? And do I need to be looking locally versus globally? STEPHANIE: Wow. I can't believe you just created a Cartesian graph [laughs] for this problem on the fly. But I love it because I do think that actually lines up with different strategies I've taken before. 
It's like, how much do you even look at the code before deciding that you can't really get a good picture of it, of what the product is, without just poking around from the app itself? I actually think that I tend to start from the code. Like, maybe I'll see a screenshot that someone has shared of the app, you know, like a bug or something that they want me to fix, and then looking for that text in the code first, and then trying to kind of follow that path, whereas it's also, you know, perfectly viable to try to see the app being used in production, or staging, or something first to get a better understanding of some of the business problems it's trying to solve. JOËL: When you jump into a new codebase, do you sort of consciously take the time to plan your approach or sort of think about, like, how much knowledge of this new codebase do I need before I can, like, actually look at the problem at hand? STEPHANIE: Ooh, that's kind of a hard question to answer because I think my experience has told me enough times that it's never what I think it's going to [laughs] be, not never, but it frequently surprises me. It has surprised me enough times that it's kind of hard to know off the bat because it's not...as much as we work in frameworks that have opinions and conventions, a lot of the work that happens is understanding how this particular codebase and team does things and then having to maybe shift or adjust from there. So, I think I don't do a lot of planning. I don't really have an idea about how much time it'll take me because I can't really know until I dive in a little bit. So, that is usually my first instinct, even if someone is wanting to, like, talk to me about an approach or be, like, "Hey, like, how long do you think this might take based on your experience as a consultant?" This is my first task. Oftentimes, I really can't say until I've had a little bit of downtime to, in some ways, like, acquire the knowledge [chuckles] to figure that out or answer that question. JOËL: How much knowledge do you like to get upfront about an app before you dive into actually doing the task at hand? Are there any things, like, when you get access to a new codebase, that you'll always want to look at to get a sense of the project before you look at any tickets? STEPHANIE: I actually start at the model level. Usually, I am curious about what kinds of objects we're working with. In fact, I think that is really helpful for me. They're like building blocks, in order for me to, like, conceptually understand this world that's being represented by a codebase. And I kind of either go outwards or inwards from there. Usually, if there's a model that is, like, calling to me as like, oh, I'll probably need to interact with, then I'll go and seek out, like, where that model is created, maybe through controllers, maybe through background jobs, or something like that, and start to piece together entry points into the application. I find that helpful because a lot of the times, it can be hard to know whether certain pages or routes are even used at all anymore. They could just be dead code and could be a bit misleading. I've certainly been misled [chuckles] more than once. And so, I think if I'm able to pull out the main domain objects that I notice in a ticket or just hear people talk about on the team, that's usually where I gravitate towards first. What about you? Do you have a place you like to start when it comes to exploring a new codebase like that? 
JOËL: The routes file is always a good sort of overview of, like, what is going on in the app. Scanning the models directory is also a great start in a Rails app to get a sense of what is this app about? What are the core nouns in our vocabulary? Another thing that's good to look for in a codebase is what are the big types of patterns that they tend to use? The Rails ecosystem goes through fads, and, over time, different patterns will be more popular than others. And so, it's often useful to see, oh, is this an app where everything happens in service objects, or is this an app that likes to rely on view components to render their views? Things like that. Once you get a sense of that, you get a little bit of a better sense of how things are architected beyond just the basic MVC. STEPHANIE: I like that you mentioned fads because I think I can definitely tell, you know, how modern an app is or kind of where it might be stuck in time [chuckles] a little bit based on those patterns and libraries that it's heavily utilizing, which I actually find to be an interesting and kind of challenging position to be in because how do you approach making changes to a codebase that is using a lot of patterns or styles from back in the day? Would you continue following those same patterns, or do you feel motivated to introduce something new or kind of what might be trendy now? JOËL: This is the boring answer, but it's almost never worth it to, like, rewrite the codebase just to use a new pattern. Just introducing the new pattern in some of the new things means there are now two patterns. That's also not a great outcome for the team. So, without some other compelling reason, I default to using the established patterns. STEPHANIE: Even if it's something you don't like? JOËL: Yes. I'm not a huge fan of service objects, but I work in plenty of codebases that have them, and so where it makes sense, I will use service objects there. Service objects are not mutually exclusive with other things, and so sometimes it might make sense to say, look, I don't feel like I can justify a service object here. I'll do this logic in a view, or maybe I'll pull this out into some other object that's not a service object and that can live alongside nicely. But I'm not necessarily introducing a new pattern. I'm just deciding that this particular extraction might not necessarily need a service object. STEPHANIE: That's an interesting way to describe it, not as a pattern, but as kind of, like, choosing not to use the existing [chuckles] pattern. But that doesn't mean, like, totally shifting the architecture or even how you're asking other people to understand the codebase. And I think I'm in agreement. I'm actually a bit of a follower, too, [laughs], where I want to, I don't know, just make things match a little bit with what's already been created, follow that style. That becomes pretty important to me when integrating with a team in a codebase. But I actually think that, you know, when you are calibrating to a codebase, you're in a position where you don't have all that baggage and history about how things need to be. And maybe you might be empowered to have a little bit more freedom to question existing patterns or bring some new ideas to the team to, hopefully, like, help the code evolve. I think that's something that I struggle with sometimes is feeling compelled to follow what came before me and also wanting to introduce some new things just to see what the team might think about them. 
JOËL: A lot of that can vary depending on what is the pattern you want to introduce and sort of what your role is going to be on that team. But that is something that's nice about someone new coming onto a project. They haven't just sort of accepted that things are the way they are, especially for things that the team already doesn't like but doesn't feel like they have the energy to do anything better about it. So, maybe you're in a codebase where there's a ton of Ruby code in your ERB templates, and it's not really a pattern that you're following. It's just a thing that's there. It's been sort of the path of least resistance for a long time, and it's easier to add more lines in there, but nobody likes it. New person joins the team, and their naive exuberance is just like, "We can fix it. We can make it better." And maybe that's, you know, going back and rewriting all of your views. That's probably not the best use of their time. But it could be maybe the first time they have to touch one of these views, cleaning up that one and starting a conversation among the team. "Hey, here are some patterns that we might like to clean up some of these views instead," or "Here are maybe some guidelines for anything new that we write that we want to do to keep our views clean," and sort of start moving the needle in a positive direction. STEPHANIE: I like the idea of moving the needle. Even though I tend to not want to stir the pot with any big changes, one thing that I do find myself doing is in a couple of places in the specs, just trying to refactor a bit away from using lets. There were some kind of forward-thinking decisions made before when RSpec was basically going to deprecate using the describe block without prepending it with their module, so just kind of throwing that in there whenever I would touch a spec and asking other people to do the same. And then, recently, one kind of, like, small syntax thing that I hadn't seen before, and maybe this is just because of the age of the codebases in which I'm working, the argument forwarding syntax in Ruby that has been new, I mean, it's like not totally new anymore [laughs], but throwing that in there a little every now and then to just kind of shift away from this, you know, dated version of the code kind of towards things that other people are seeing and in newer projects. JOËL: I love harnessing that energy of being new on a project and wanting to make things better. How do you avoid just being, you know, that developer, though, that's new, comes in, and just wants to change everything for the sake of change or for your own personal opinions and just kind of moves things around, stirs the pot, but doesn't really contribute anything net positive to the team? Because I've definitely seen that as well, and that's not a good first contribution or, you know, contribution in general as a newer team member. How do we avoid being that person while still capitalizing on that energy of being someone new and wanting to make a positive impact? STEPHANIE: Yeah, that's a great point, and I kind of alluded to this earlier when I asked, like, oh, like, even if you don't like an existing syntax or pattern you'll still follow it? And I think liking something a different way is not a good enough reason [chuckles]. 
But if you are able to have a good reason, like I mentioned with the RSpec prepending, you know, it didn't need to happen now, but if we would hope to upgrade that gem eventually, then yeah, that was a good reason to make that change as opposed to just purely aesthetic [laughs]. JOËL: That's one where there is pretty much a single right answer to. If you plan to keep staying up to date with versions of RSpec, you will eventually need to do all these code changes because, you know, they're deprecating the old way. Getting ahead of that gradually as we touch spec files, there's kind of no downside to it. STEPHANIE: That's true, though maybe there is a person who exists out there who's like, "I love this old version of RSpec, and I will die on this hill that we have to stay on [laughs] it." But I also think that I have preferences, but I'm not so attached to them. Ideally, you know, what I would love to receive is just, like, curiosity about like, "Oh, like, why did you make this change?" And just kind of share my reasoning. And sometimes in that process, I realize, you know, I don't have a great reason, and I'll just say, "I don't have a great reason. This is just the way I like it. But if it doesn't work for you, like, tell me, and I'll consider changing it back. [chuckles]" JOËL: Maybe that's where there's a lot of benefit is the sort of curiosity on the part of the existing team and sort of openness to both learn about existing practices but also share about different practices from the new teammate. And maybe that's you're coming in, and you have a different style where you like to write tests, maybe without using RSpec's let syntax; the team is using it. Maybe you can have a conversation with the team. It's almost certainly not worth it for you to go and rewrite the entire test suite to not use let and be like, "Hey, first PR. I made your test better." STEPHANIE: Hundreds of files changed, thousands [laughs] of lines of code. I think that's actually a good segue into the question of how can a team support a new hire or a new developer who is still calibrating to a codebase? I think I'm curious about this being different from onboarding because, you know, there are a lot of things that we already kind of expect to give some extra time and leeway for someone who's new coming in. But what might be some ways to support a new developer that are less well known? JOËL: One that I really like is getting them involved as early as possible in code review because then they get to see the patterns that are coming in, and they can be involved in conversations on those. The first PR you're reviewing, and you see a bunch of tests leaning heavily on let, and maybe you ask a question, "Is this a pattern that we're following in this codebase? Did we have a particular motivation for why we chose this?" And, you know, and you don't want to do it in a sort of, like, passive-aggressive way because you're trying to push something else. It has to come from a place of genuine curiosity, but you're allowing the new teammate to both see a lot of the existing patterns kind of in very quick succession because you see a pretty good cross-section of those when you review code. And also, to have conversations about them, to ask anything like, "Oh, that's unusual. I didn't know we were doing that." Or, "Hey, is this a pattern that we're doing kind of just local to this subsystem, or is this something that's happening all the way? Is this a pattern that we're using and liking? 
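
For readers who haven't run into the two changes Stephanie mentions, here is a small illustrative sketch, with invented class and method names: the RSpec.describe prefix, which works without RSpec's monkey-patched global DSL, and Ruby's argument forwarding syntax, available since Ruby 2.7:

# Prefixing describe with the RSpec module avoids relying on the
# globally monkey-patched DSL (expose_dsl_globally).
RSpec.describe PostsController do
  # ...
end

# `...` forwards positional, keyword, and block arguments untouched.
def create_with_logging(...)
  Rails.logger.info("creating a post")
  Post.create!(...)
end
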
Is this a thing that we were doing five years ago that we're phasing out, but there's still a few of them left?" Those are all, I think, great questions to ask when you're getting started. STEPHANIE: That makes a lot of sense. It's different from saying, "This is how we do things here," and expecting them to adapt or, you know, change to fit into that style or culture, and being open to letting it evolve based on the new team, the new people on the team and what they might be bringing to the table. I like to ask the question, "What do you need to know?" Or "What do you need to be successful?" as opposed to telling them what I think they need [laughs]. I think that is something that I actually kind of recently, not regret exactly, but I was kind of helping out some folks who were going to be joining the team and just trying to, like, shove all this information down their throats and be like, "Oh, and watch out for these gotchas. And this app uses a lot of callbacks, and they're really complex." And I think I was maybe coloring their [chuckles] experience a little bit and expecting them to be able to drink from the fire hose, as opposed to trusting that they can see for themselves, you know, like, what is going on, and form opinions about it, and ask questions that will support them in whatever they are looking to do. When we talked earlier about the four different quadrants, like, the kind of information they need to know will differ based off of their task, based off of their experience. So, that's one way that I am thinking about to, like, make space for a new developer to help shape that culture, rather than insisting that things are the way they are. JOËL: It can be a fine balance where you want to be open to change while also you have to remain kind of ruthlessly pragmatic about the fact that change can be expensive. And so, a lot of changes you need to be justified, and you don't want to just be rewriting your patterns for every new employee or, you know, just to follow the latest trends because we've seen a lot of trends come and go in the Rails ecosystem, and getting on all of them is just not worth our time. STEPHANIE: And that's the hard truth of there's always trade-offs [laughs] in software development, isn't that right? JOËL: It sure is. You can't always chase the newest shiny, as fun as that is. STEPHANIE: On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.
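For context on the two Ruby details Stephanie mentions in the conversation above — prepending `RSpec.` to top-level `describe` blocks and the newer argument forwarding syntax — here is a minimal, hedged sketch. It assumes RSpec 3+ and Ruby 2.7+; the class and spec names are purely illustrative and are not from the episode.

```ruby
# Minimal sketch, assuming RSpec 3+ and Ruby 2.7+; names are illustrative.

# A bare, monkey-patched `describe` at the top level is the older style that
# RSpec has been steering away from (see `config.expose_dsl_globally` and
# `disable_monkey_patching!`). Prepending the module is the forward-compatible form:
RSpec.describe "Widget pricing" do
  it "adds tax" do
    expect((100 * 1.1).round).to eq(110)
  end
end

# Argument forwarding: `...` passes positional, keyword, and block arguments
# straight through to another call, avoiding hand-written *args, **kwargs, &block.
class AuditedLogger
  def initialize(logger)
    @logger = logger
  end

  def info(...)
    @logger.info(...)
  end
end
```

The spec portion runs under `rspec`; the forwarding example is plain Ruby and works anywhere a wrapper object needs to delegate a call untouched.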

Giant Robots Smashing Into Other Giant Robots
510 - The Forecastr Formula: Steven Plappert's Path to Startup Success

Giant Robots Smashing Into Other Giant Robots

Play Episode Listen Later Feb 1, 2024 32:04


Host Victoria Guido sits down with Steven Plappert, CEO of Forecastr, an online software designed to aid founders in financial modeling, which was born to help non-finance savvy founders understand and communicate their company's financial health. Despite the pandemic beginning right after Forecastr's launch in 2020, the company didn't pivot significantly thanks to extensive preparation and customer discovery before the launch. Steven delves into the operational and strategic aspects of Forecastr, highlighting the importance of balancing growth with financial sustainability, a consistent theme in their business strategy. Forecastr's significant development was integrating a strong human element into their software service, a move very well-received by their customers. Steven also outlines the company's key objectives, including cultivating a solid culture, achieving profitability, and exploring opportunities for exponential growth. Additionally, Steven discusses the importance of work-life balance, reflecting on his previous startup experience and emphasizing the necessity of balance for longevity and effectiveness in entrepreneurship. Victoria and Steven further explore how companies, including Forecastr and thoughtbot, incorporate these philosophies into their operations and culture. Forecastr (https://www.forecastr.co/) Follow Forecastr on LinkedIn (https://www.linkedin.com/company/forecastr/), X (https://twitter.com/forecastr), Instagram (https://www.instagram.com/forecastrco/), Facebook (https://www.facebook.com/ForecastrHQ), YouTube (https://www.youtube.com/forecastr), or TikTok (https://www.tiktok.com/@forecastrco). Follow Steven Plappert on LinkedIn (https://www.linkedin.com/in/steven-plappert-59477b3b/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido. And with me today is Steven Plappert, CEO of Forecastr, an online software that helps founders who hate building financial models in Excel actually understand their numbers, predict runway, and get funded. Steven, thank you for joining us. STEVEN: Hey, yeah, Victoria, thanks for having me. I'm stoked to be here. What's up, guys? VICTORIA: Just to get us warmed up here a little bit, can you tell me what's going on in your world? STEVEN: Well, you know, what is going on in my world? I had a great year last year, very healthy. I have a loving fiancé, and I'm getting married this year, which is going to be super fun. And, obviously, running a business, which takes up more than its fair share of my life. But yeah, it's early Jan, so I've been kind of reflecting on my life, and I got a lot to be grateful for, Victoria, I really do. VICTORIA: That's wonderful. You know, I used to work with a VP of strategic growth who likened forming partnerships with companies as getting into a marriage and building that relationship and that level of trust and communication that you have, which I think is really interesting. STEVEN: Oh, for sure. Emily always, Emily is my fiancé, she always says that, you know, Forecastr is essentially my mistress, if you will, you know what I mean? Because, like, that's [laughs] where the rest of my time goes, isn't it? Between hanging out with her and working on the company, you know, so...
VICTORIA: So, how long have you been in a relationship with your business around Forecastr? [laughs] STEVEN: Yeah, right? Yeah [laughs]. Four years with this one. So, you know, we started it actually January 1st of 2020, going into the pandemic, although we didn't know it at the time. And so, we just celebrated our four-year anniversary a few weeks ago. VICTORIA: Well, that's really exciting. So, I'm curious about when you started Forecastr, what was the essential problem that you were trying to solve that you had identified in the market? STEVEN: I'd say the main problem we were trying to solve is that, like, specifically founders, you know, startup founders, really struggle to get, like, a clear picture of their financial health or, like, just the financial aspect of their business. And then they also struggle to communicate that to investors because most founders aren't finance people. You know, like, most people that start a company they don't do it because they're excellent in even, like, business or finance or anything like that. They usually do it because, like, they've identified some problem; they've lived it; they've breathed it, you know what I mean? They're some kind of subject matter expert. They may be good at sales, or marketing, or product. But a lot of times, finance is, like, a weak part for them, you know, it's not something that they're strong in. And so, they really have a hard time, like, understanding the viability of the business and communicating the financial outcome of the company to investors and stuff like that. And my co-founder Logan and I live that because all we did all day was built financial models in Excel for startup founders working for a CFO shop called Venture First. So, that's what we really saw. We really saw that just, you know, it's really hard for folks to get this clear picture. And we thought a big part of that, at least, was just the fact that, you know, there's no great software for it. It was just like, people are using Excel, which, you know, for people that are great in finance, you know, works but for most people, doesn't. And so, yeah, I think that that's what kind of inspired Logan and I to fly the coop there at Venture First and start a company. VICTORIA: No, that's really interesting. So, you found this problem. You knew that this was an issue for founders, and you built this hypothesis and started it. I think you said, like, right before 2020, right before the pandemic. So, were there any decisions you made that once you got more information or once you got started, you decided to pivot? And, like, what were those pivot points for you early on? STEVEN: There wasn't a lot of pivoting early on, I will say. And a part of that is because, like, this isn't my first company. I started a company right out of college back in 2013 called FantasyHub. In that company, we pivoted a lot and, largely because we didn't really put a lot of forethought into that company when we launched it, you know, we didn't do any customer discovery. We just launched the company. And then we skinned our knees a bunch of times [laughs] as we scaled that company up and had to change gears a lot of times. In Forecastr, you know, we had actually been kind of building towards starting the company for 18 months. So, Logan and I actually had the idea originally in middle of 2018. And we decided at that time, look, like, we're not going to go launch this company right away because we got full-time jobs, and we might as well de-risk it. 
So, we spent about 12 to 18 months just doing a lot of customer discovery, kind of in stealth mode while at Venture First. After about six months, we brought it up to Venture First and said, "Hey, here's this idea for a company we have. We want to go do it." You know, to Venture First's credit, you know, rather than viewing that territorially and saying, "Hey, you know, there's a great new product line for our company," they really inspired us to go forward with it. They said, "Hey, this is great. We want to support you guys." They put some money in. We did some more discovery. We built a prototype. So, long-winded way of saying that by the time we actually got to the starting line in 2020, you know, we had 18 months' worth of really clear thought put into this thing. And we had been building in this space for years, you know, building financial models and Excel for founders. So, I think we had a great understanding of the customer. We had a great understanding of the market and the needs. We'd done our diligence in terms of distribution and figuring out how we wanted to generate, you know, a good, healthy funnel for the business. And so, it was really just kind of a matter of execution at that point. And, you know, here we are four years in, and there really hasn't been anything that we've done that's really pivoted the business that much across those four years, except for one moment, which was actually six months ago. So, in July of 2023, we did finally have our first kind of pivot moment where one of the interesting things about Forecastr versus some other solutions in the market is that we're not just a product, just a SaaS platform. There's a real strong human layer to our solution. We've always felt like a SaaS plus human model was the right model for financial modeling for startups because a lot of these startup founders don't have finance expertise on staff or inherently. And about six months ago, it wasn't as much of a pivot as it was a double down. You know, we really doubled down on that human element, you know, and now that human element isn't just through, like, a white glove onboarding and some email support. But we actually do give our customers an analyst in addition to the software that's with them for the lifetime of their subscription and is with them every step of the way. And so, that's the only time that we really made, like, a significant change into what we were doing. And it was just, I think, off the back of three years of saying, "Hey, like [chuckles], people really love the human element, you know, let's lean into that." VICTORIA: I love that you saw that you couldn't solve this problem with just technology and that you planned for and grew the people element as well. And I'm curious: what other decisions did you have to make as you were growing the business, how to scale the tech side or the people side? STEVEN: So many decisions, right? And that's why I tell people all the time, I'm like, you know, I've been a founder for 11 years now. And, in my opinion, by far, the hardest part about being a founder is that all day, every day, you have to make a bunch of decisions. And you hardly ever have enough data to, like, know, you're making the right decision. So, you got to make a bunch of judgment calls, and ultimately, these are judgment calls that could make or break your company. And it's really taxing. It's taxing on the mind. It's stressful, you know. It is not easy. So, you know, I think it's one of the really hard things about being an entrepreneur. 
I would say one of the most consistent decisions that we've had to make at the highest level is decisions around kind of capital preservation, fiscal responsibility, and investing in the growth. So, categorically, it's like, on the one hand, you have a desire to build the company kind of sustainably, to get to profitability, to have a healthy working model, you know, where you have some real staying power, you know. And that line of thinking leads you to, you know, be conscientious about investments that you're making that, you know, increase the burn. On the other hand, you have a desire to grow the company very quickly. You know, you have certain benchmarks you need to meet, you know, in order to be attractive to venture capitalists. And so, you have decisions that you want to make, you know, to invest in that growth. And so, I think that's a very consistent theme that's played out across the four years is Logan and I trying to walk that tightrope between growing 2 to 3X year over year and being really mindful of the company's burn, you know, both for equity preservation and just to build the company in a more sustainable way. And I think as financial professionals and founders, the finance person in Logan and I a lot of times wants to be more conservative. The founder in Logan and I, a lot of times, wants to be more aggressive. And so, we kind of just, like, let those two forces kind of play themselves out. And I think it creates, like, a nice, healthy tension. VICTORIA: That is really interesting, yeah. And sometimes you have to make a guess [chuckles] and go with it and then see the results of what happens. So, you're a financial forecasting company. What kind of, like, key results or objectives are you working towards this year with Forecastr? STEVEN: Yeah, great question. So, we're really mindful of this kind of stuff. I'd say, you know, it's something that we really consider at a deep level is, like, you have to ultimately set objectives, which are very aligning and clarifying, you know, at an executive level, and then those should kind of filter all the way down through the organization. Because so much, I think, of building a company is you have to kind of punch above your weight. You have to grow faster than [chuckles] the resources that you're putting into it might expect or whatever. I mean, you have limited resources, limited time, but you got to go really quickly. I think alignment is a big part of that, and that starts with setting clear objectives. So, we actually have three very clear objectives, really four. The first one is living up to our cultural values. You know, we're a culture-first organization. We believe that, like, culture, you know, kind of eats strategy for breakfast, that age-old kind of cliché, but it's true. It's just like, I think, you know, if you build a really good culture, people are just...they're happier. They're more productive. You get more done. So, that is our number one strategic objective. Number two is to become profitable. Like I mentioned, we want to become profitable. We want to build a sustainable company. So, by setting that objective, it kind of forces us to be conscientious about spend and only invest in areas that we think is, like, a one plus one equals three. Our third strategic objective is reach 5 million in annual recurring revenue by the end of the year. We're at 2.4 right now. We want to at least double year over year. That's kind of, like, the minimum threshold to keep playing the venture game. 
And then number four is unlock exponential growth opportunities. So, we definitely adopt the philosophy of, like, hey, we've got a model. It's working. We've got 700 customers, you know, we've got two and a half million in annual recurring revenue. So, like, 80% of our focus should be on becoming profitable and hitting $5 million in annual recurring revenue. Like, that's, like, the bread and butter there, just keep doing what's working. But 20% of our attention should be paid to, well, what could we be doing to, like, triple down on that, you know, to really start to create an exponential growth curve? And, for us, that stuff and, like, kind of the data in investor space, like, there's a lot of interesting things that we could do, of course, as long as it's consensual, anonymized, et cetera, safe and secure, you know, with the kind of data that we have on private companies, you know, anonymously benchmarking companies against their peers, things like that. And I think there's a really big opportunity that we have to serve investors as well, you know, and to create a better investor experience when it comes to financial reporting, also something that we think can unlock exponential growth. So, those are the four objectives that we have going into this year. VICTORIA: Well, I really appreciate that you had culture at number one, and it reminds me, you know, you said it's old adage, but it's true, and you can verify that in reports like the State of DevOps Report. The number one indicator of a high-security environment is the level of trust and culture that you have within your company, not necessarily the technology or tools that you're using. So, being a financial company, I think you're in a good position [chuckles] to have, like, you know, protect all those assets and protect your data. And yeah, I'm curious to hear more about what you said about just unlocking, like, exponential growth. It's hard to keep both the let's keep the lights on and keep running with what we have, and make room for these bigger strategic initiatives that are really going to help us grow as a company and be more sustainable over time. So, how do you make room for both of those things in a limited team? STEVEN: Yeah, it's a great question. And it's not easy, I would say. I mean, I think the way we make room for it probably on the frontend is just, like, being intentional about creating that space. I mean, ultimately, putting unlocking exponential growth opportunities on the strategic company roadmap, which is the document that kind of memorializes the four objectives that I just went through, that creates space inherently. It's one of four objectives on the board. And that's not just, like, a resource that sits, you know, in a folder somewhere. We use the OKR system, you know, which is a system for setting quarterly objectives and things like that. And these strategic objectives they make it on our OKR board, which filters down into our work. So, I think a big piece of creating the space is just as an executive and as a leader, you know, being intentional about [chuckles] putting it on the board and creating that space. The thing that you have to do, though, to be mindful is you have to make sure that you don't get carried away with it. I mean, like you said, at the end of the day, succeeding in a business requires a proper balancing of short-term and long-term priorities. You know, if you're focused too much on the short term, you know, you can kind of hamstring yourself in the long run. 
Yeah, maybe you build, like, a decent business, but you don't quite, you know, reach your highest potential because you're not investing in some of those things that take a while to develop and come to play in the long run. But if you're too focused on the long run, which is what these exponential opportunities really are, you know, it's very easy to lose your way [laughs] in the short term, and it's very easy to die along the way. You know, I do think of startups as much of a game of survival as anything. I always say survive until you thrive. And so, that's where the 80/20 comes in, you know, where we just kind of say, "Hey, look, like, 80% of our time and energy needs to be devoted to kind of short-term and less risky priorities, such as doubling down on what's already working. 20% of our time, thereabouts, can be devoted to some of these more long-term strategic objectives, like unlocking exponential growth. And I think it just takes a certain mindfulness and a certain intentionality to, like, every week when you're organizing your calendar, and you're, like, talking with your team and stuff like that, you're just always trying to make sure, hey, am I roughly fitting into that framework, you know? And it doesn't have to be exact. Some weeks, it may be more or less. But I think that's kind of how we approach it, you know, conceptually. VICTORIA: Oh, what a great perspective. I think that I really like hearing those words about, like, balance and, like, being intentional. MID-ROLL AD: Now that you have funding, it's time to design, build, and ship the most impactful MVP that wows customers now and can scale in the future. thoughtbot Liftoff brings you the most reliable cross-functional team of product experts to mitigate risk and set you up for long-term success. As your trusted, experienced technical partner, we'll help launch your new product and guide you into a future-forward business that takes advantage of today's new technologies and agile best practices. Make the right decisions for tomorrow today. Get in touch at thoughtbot.com/liftoff. VICTORIA: You mentioned earlier that you're getting married, so, like, maybe you can talk about how are you intentional with your own time and balancing your personal life and making room for these, you know, big life changes while dealing with also the stress of being in kind of a survivor mode with the company. STEVEN: Like I mentioned, this is my second company, and Emily, bless her heart, my fiancé, she's been with me my entire entrepreneurial career. We started dating the first month that I started my first company, FantasyHub. And in that company, I ran that company for three years. We took it through Techstars down in Austin. It was a consumer gaming company. Interesting company. It ended up being a failure but, like, super interesting and set me on my path. Yeah, I was a complete and total workaholic. I worked around the clock. It was a fantasy sports company, so weekends were our big time, and I worked seven days a week. I worked, like, a lot of 80-hour-plus weeks. And, you know, looking back on it, it was a lot of fun, but it was also miserable. And I also burned out, you know, about six months before the company failed. And had the company not failed when it did, you know, I don't know what the future would have held for us. I was really out of balance. You know, I had deprioritized physical health. I hadn't worked out in years. I wasn't healthy. I had deprioritized mental health. 
Emily almost left me as a part of that company because I wasn't giving her any attention. And so, you know, when that company failed, and I was left with nothing, you know, and I just was kind of, like, sitting there licking my wounds [laughs], you know, in my childhood bedroom at my parents' house, you know, I was like, you know what? Like, I don't know that that was really worth it, and I don't know that that was the right approach. And I kind of vowed...in that moment, I was like, you know, look, I'm a startup founder. I love building these companies, so I'm, like, definitely going to do it again, but I'm not going to give it my entire life. Like, regardless of your religious beliefs, like, we at least have one life to live. And in my opinion, there's a lot more to life than [chuckles] just cranking out work and building companies. Like, there's a whole world to explore, you know, and there's lots of things that I'm interested in. So, this time around, I'm very thoughtful about creating that balance in my life. I set hard guidelines. There's hard, like, guardrails, I guess I should say, when I start and end work, you know, and I really hold myself accountable to that. Emily holds me accountable to that. And I make sure that, like, I work really hard when I'm at work, but I take the mask off, you know, so to speak, when I'm at home. And I just kind of...I don't deprioritize the rest of my life like I did when I was running FantasyHub. So, I think it's super important. I think it's a marathon building companies. I think you got to do that. I think it's what's in the best interest of the company and you as an individual. So, I think it's something I do a lot better this time around. And I think we're all better off for it, not the least of which is, like, one of our six cultural values is live with balance, and that's why. You know, because, like, we adopt the philosophy that you don't have to work yourself to death to build a great company. You can build a great company working a pretty reasonable workload, you know what I mean? It's not easy. It is kind of a pressure cooker trying to get that much done in that little time, but I think we're living proof that it works. VICTORIA: And if you don't make time to rest, then your ability to make good decisions and build high-quality products really starts to suffer eventually, like, I think, is what you saw at the end there. So, I really appreciate you sharing that and that personal experience. And I'm glad to see the learning from that, and making sure that's a core part of your company values the next time you start a company makes a lot of sense to me. STEVEN: Yeah, totally, you know, yeah. And I've always remembered, although this might be an extreme and a privileged extreme, but, you know, J.P. Morgan, the person, was famous for saying, "I get more done in 9 months than I get in 12," in relationship to the fact that he would take his family over to Europe for, like, three months of the year and, like, summer in Europe and not work. And so, while that's probably an unrealistic, you know, ideal for a startup founder, there's some truth to it, you know what I mean? Like you said, like, you got to rest. And, in fact, if you rest more, you know, yeah, you might be working less hours. You'll actually get more done. You're a lot clearer while you're at work. It's a mindset game. It's a headspace game. And the better you can put yourself in that good mindset, that good headspace, the more effective you are. 
Yeah, there's just a lot of wisdom to that approach. VICTORIA: Right. And, you know, thoughtbot is a global company, so we have employees all over the world. And I think what's interesting about U.S-based companies, I'm interested in how Forecastr might even help you with this, is that when you start a company, you basically form, like, a mini-government for your employees. And you have input over to how much they paid, how much healthcare they get. You have input over their hours and how much leave and everything. And so, trying to balance all those costs and create a good environment for your employees and make sure they have enough time for rest and for personal care. How does Forecastr kind of help you also imagine all of those costs [laughs] and make sense of what you can offer as a company? STEVEN: I would say the main way that we help folks do that, and we really do play in that space, is just by giving you a clear picture of what the future holds for your company from a financial perspective. I mean, it's one of the things that I think is such a superpower when it comes to financial modeling, you know, it can really help you make better decisions along these lines because, like, what does a financial model do? A financial model just simulates your business into the future, specifically anything related to the cash flow of your business, you know, cash in, cash out, revenue expenses, and the like. And so, your people are in there, and what they're paid is in there and, you know, your revenue and your expenses, your cash flow, your runway, all that's in the model. One of the things I feel like we do really help people do is just get a clearer picture of like, hey, what do the next 3, 6, 12, you know, 24 months look like for my company? What is my runway? When am I going to run out of money? What do I need to do about that? Can I afford to give everybody a raise, or can I afford to max out my benefits plan or whatever that is? It's like, you can make those assessments more easily. You know, if you have a financial model that actually makes sense to you that you can look at and say, oh, okay, cool. Yeah, I can offer that, like, Rolls Royce benefits plan and still have 18 months' worth of runway, or maybe I can't [laughs]. And I have to say, "Sorry, guys, you know, like, we're cash-constrained, and this is all we can do for now. But maybe when we raise that next round and when we hit these growth milestones, you know, we can expand that." So yeah, I think it's all, for us, about just, like, helping founders make better decisions, whether they be your decisions around employees and benefits, et cetera, or growth, or fundraising, you know, through the power of, like, financial health and hygiene. VICTORIA: Great. Thank you. I appreciate you letting me bring it all the way back [laughs]. Yeah, let me see. Let me go through my list of questions and see what else we have here. Do you have any questions for me or thoughtbot? STEVEN: Yeah. I mean, so, like, how do you guys think about this kind of stuff? Like, you know, you said thoughtbot's a global company at this point, but the name would imply, you know, a very thoughtful one. So, I'd be curious in y'alls kind of approach to just, like, culture and balance and some of these things that we're talking about kind of, like, straddling that line, you know, between, like, working really hard, which you have to do to build a great company but, you know, being mindful of everything else that life has to offer. VICTORIA: Yeah. 
Well, I think thoughtbot, more than any other company I've ever worked for, really emphasizes the value of just, like, people really want you to have a work-life balance and to be able to take time off. And, you know, I think that for a company that does consulting and we're delivering at a certain quality, that means that we're delivering at the quality where if someone needs to take a week off for a vacation, there's enough documentation; there's enough backup support for that service to not be impacted. So, that gives us confidence to be able to take the time off [chuckles], and it also just ends up being a better product for our clients. Like, our team needs to be well-rested. They need to have time to invest in themselves and learning the latest technology, the latest upgrades, contributing to open source, and writing about the problems they're seeing, and contributing back to the community. So, we actually make time every Friday to spend on those types of projects. It's kind of like you were saying before, like, you get as much done in four days as many companies get in five because that time is very highly focused. And then you're getting the benefit of, you know, continually investing in new skills and making sure the people you're working with are at the level that you're paying for [laughs]. STEVEN: Yeah, right. No, that's super cool. That's super cool. VICTORIA: Yeah, and, actually, so we're all remote. We're a fully remote company, and we do offer some in-person events twice a year, so that's been a lot of fun for me. And also, getting to, like, go to conferences like RubyConf and RailsConf and meeting the community has been fantastic. Yeah, you have a lot of value of self-management. So, you have the ability to really adjust your schedule and communicate and work with what meets your needs. It's been really great. STEVEN: Yeah, I love that, too. And we're also a remote company, and I think getting together in person, like you just talked about it, is so important. We can only afford to do it once a year right now as an earlier-stage company. But as amazing as, you know, Zoom and things like this are, it's like, there's not really a perfect replacement for that in-person experience, you know. VICTORIA: I agree, and I also agree that, like, once a year is probably enough [laughter]. That's a great amount of time. Like, it really does help because there are so many ways to build relationships remotely, but sometimes, at least just meeting in person once is enough to be like, oh yeah, like, you build a stronger connection, and I think that's great. Okay. Let me see. What other questions do we have? Final question: is there anything else that you would like to promote? STEVEN: I guess it's my job to say we are a really awesome financial modeling platform and team in general. So, if you are a startup founder or you know a startup founder out there that just could use some help with their financial model, you know, it is definitely something that we'd love to do. And we do a ton of education and a ton of help. We've got a ton of resources that are even freely available as well. So, our role in the market is just to get out there and help folks build great financial models, whether that be on Forecastr or otherwise, and that's kind of the approach that we take to it. 
And our philosophy is like, if we can get out there and do that, you know, if we can be kind of the go-to resource for folks to build great models regardless, you know, of what that means for them, a rising tide will float all boats, and our boat the most of which, hopefully. So [laughs], if you need a model, I'm your guy. VICTORIA: Thank you so much for sharing that. And I have a fun question for you at the end. What is your favorite hike that you've been on in the last three years or ever, however long you want to go back? [laughs] STEVEN: Well, I would say, you know, I did have the great pleasure this year of returning to the Appalachian Trail to hike the Roan Highlands with a friend of mine who was doing a thru-hike. So, a friend of mine did a southbound thru-hike on the A.T. this year, went from Maine to Georgia. Good friend of mine. And I had not been on the Appalachian Trail since I did a thru-hike in 2017. So, I had not returned to the trail or to that whole community. It's just a very special community. It really is a group of, like, really awesome, eclectic people. And so, yeah, this last year, I got to go down to the Roan Highlands in Tennessee. It's a beautiful, beautiful area of Tennessee and in the Southeast, rolling hills and that kind of thing. And hike, for him, for, like, three or four days and just be a part of his journey. Had a ton of fun, met some awesome people, you know, great nature, and totally destroyed my body because I was not prepared to return to the grueling nature of the Appalachian Trail. So yeah, I'd have to say that one, Victoria. I'd have to say that was my favorite in the last couple of years for sure. VICTORIA: Yeah, it's beautiful there. I've hiked certain parts of it. So, I've heard that obviously the Appalachian Trail, which is the eastern side of the United States, was the earlier trail that was developed because of the dislocation of people over time and that they would create the trail by getting to a peak and then looking to another peak and being like, "Okay, that's where I'm going to go." So, when you say it's grueling, I was, like, a lot of up and down hills. And then what I've heard is that the Pacific Trail on the western part of the United States, they did more of figuring out how to get from place to place with minimizing the elevation change, and so it's a much more, like, sustainable hike. Have you ever heard that? STEVEN: Oh yeah, that is 100% true. In terms of, like, the absolute change in elevation, not, like, the highest elevation and the lowest, just, like, the change up and down, there's twice as much going up and down on the A.T. as there is on the Pacific Crest Trail. And the Pacific Crest Trail is graded for park animals, so it never gets steeper than, like, a 15% grade. So, it's real groovy, you know, on the PCT where you can just get into a groove, and you can just hike and hike and hike and hike for hours, you know, versus the A.T. where you're going straight up, straight down, straight up, straight down, a lot of big movements, very exhausting. I've hiked a good chunk of the PCT and then, obviously, the whole A.T., so I can attest, yes, that is absolutely true. VICTORIA: I feel like there's an analogy behind that and what Forecastr does for you. Like, you'll be able [laughs] to, like, smooth out your hills a little bit more [laughs] with your finances, yeah. STEVEN: [laughs] Oh, I love that. Absolutely. 
Well, and I've honestly, like, I've often likened, you know, building a company and hiking the Appalachian Trail because it is one of those things where one of the most clarifying things about hiking a long trail is you just have this one monster goal that's, you know, that's months and months ahead of you. But you just got to get up every day, and you just got to grind it out. And every day is grindy, and it's hard, you know, but every day you just get one step closer to this goal. And it's one of the cool things about a trail is that you kind of steep yourself in that one goal, you know, one-track mind. And, you know, like we were saying earlier, there's so much more to life. So, you can't and probably shouldn't do that with your startup. You should continue to invest in other aspects of your life. But maybe while you're within the four walls of your office or when you open up that laptop and get to work on your computer, you know, if you take that kind of similar approach where you got this big goal that's, you know, months or years away but every day you just got to grind it out; you just got to work hard; you got to do what you can to get 1 step closer. And, you know, one day you'll wake up and you'll be like, oh shit, like, I'm [laughs] pretty close, you know what I mean? Yeah, I think there's definitely some similarities to the two experiences. VICTORIA: I appreciate that, yeah. And my team is actually it's more like starting up a business within thoughtbot. So, I'm putting together, like, my three-year plan. It's very exciting. And I think, like, those are the types of things you want to have. It's the high-level goal. Where are we going? [chuckles] Are we on our track to get there? But then day to day, it's like, okay, like, let's get these little actions done that we need to do this week [laughs] to build towards that ultimate goal. Well, thank you so much for joining us today, Steven. I really enjoyed our conversation. Is there anything final you want to say? STEVEN: I just want to thank you, Victoria. I think it's a wonderful podcast that you guys put on, and I really appreciate the opportunity to be here and to chat with you. You're lovely to talk to. I enjoyed the conversation as well, and I hope everyone out there did, also. So, let's make it a great 2024. VICTORIA: Thank you so much. Yeah, this is actually my second podcast recording of the year, so very exciting for me. I appreciate it. Thanks so much for joining again. So, you can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, you can email us at hosts@giantrobots.fm. And you can find me on X, formerly known as Twitter, @victori_ousg. And this podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions.

Ruby on Rails Podcast
Episode 505: RailsConf CFP with Andy Croll

Ruby on Rails Podcast

Play Episode Listen Later Jan 31, 2024 23:39


RailsConf is coming up fast and I can't wait to see you all there! The CFP is open and the program committee is accepting proposals for talks for the conference. Andy Croll joined the show to talk with us about the CFP process and how you can present at RailsConf.

**Show Notes**

RailsConf Detroit - https://railsconf.org/
RailsConf CFP - https://sessionize.com/railsconf2024/
Speakerline.io - https://speakerline.io/

The Bike Shed
411: Celebrating and Recapping 2023!

The Bike Shed

Play Episode Listen Later Dec 19, 2023 38:40


Stephanie is hosting a holiday cookie swap. Joël talks about participating in thoughtbot's end-of-the-year hackathon, Ralphapalooza. We had a great year on the show! The hosts wrap up the year and discuss their favorite episodes, the articles, books, and blog posts they've read and loved, and other highlights of 2023 (projects, conferences, etc). Olive Oil Sugar Cookies With Pistachios & Lemon Glaze (https://food52.com/recipes/82228-olive-oil-sugar-cookies-recipe-with-pistachios-lemon) thoughtbot's Blog (https://thoughtbot.com/blog) Episode 398: Developing Heuristics For Writing Software (https://www.bikeshed.fm/398) Episode 374: Discrete Math (https://www.bikeshed.fm/374) Episode 405: Sandi Metz's Rules (https://www.bikeshed.fm/405) Episode 391: Learn with APPL (https://www.bikeshed.fm/391) Engineering Management for the Rest of Us (https://www.engmanagement.dev/) Confident Ruby (https://pragprog.com/titles/agcr/confident-ruby/) Working with Maybe from Elm Europe (https://www.youtube.com/watch?v=43eM4kNbb6c) Sustainable Rails Book (https://sustainable-rails.com/) Episode 368: Sustainable Web Development (https://www.bikeshed.fm/368) Domain Modeling Made Functional (https://pragprog.com/titles/swdddf/domain-modeling-made-functional/) Simplifying Tests by Extracting Side Effects (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects) The Math Every Programmer Needs (https://www.youtube.com/watch?v=wzYYT40T8G8) Mermaid.js sequence diagrams (https://mermaid.js.org/syntax/sequenc) Sense of Belonging and Software Teams (https://www.drcathicks.com/post/sense-of-belonging-and-software-teams) Preemptive Pluralization is (Probably) Not Evil (https://www.swyx.io/preemptive-pluralization) Digging through the ashes (https://everythingchanges.us/blog/digging-through-the-ashes/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way. JOËL: So, Stephanie, what's new in your world? STEPHANIE: I am so excited to talk about this. I'm, like, literally smiling [chuckles] because I'm so pumped. Sometimes, you know, we get on to record, and I'm like, oh, I got to think of something that's new, like, my life is so boring. I have nothing to share. But today, I am excited to tell you about [chuckles] the holiday cookie swap that I'm hosting this Sunday [laughs] that I haven't been able to stop thinking about or just thinking about all the cookies that I'm going to get to eat. It's going to be my first time throwing this kind of shindig, and I'm so pleased with myself because it's such a great idea. You know, it's like, you get to share cookies, and you get to have all different types of cookies, and then people get to take them home. And I get to see all my friends. And I'm really [chuckles] looking forward to it. JOËL: I don't think I've ever been to a cookie swap event. How does that work? Everybody shows up with cookies, and then you leave with what you want? STEPHANIE: That's kind of the plan. I think it's not really a...there's no rules [laughs]. You can make it whatever you want it to be. But I'm asking everyone to bring, like, two dozen cookies. And, you know, I'm hoping for a lot of fun variety. Myself I'm planning on making these pistachio olive oil cookies with a lemon glaze and also, maybe, like, a chewy ginger cookie. 
I haven't decided if I'm going to go so extra to make two types, but we'll see. And yeah, we'll, you know, probably have some drinks and be playing Christmas music, and yeah, we'll just hang out. And I'm hoping that everyone can kind of, like, take home a little goodie bag of cookies as well because I don't think we'll be going through all of them. JOËL: Hearing you talk about this gave me an absolutely terrible idea. STEPHANIE: Terrible or terribly awesome? [laughs] JOËL: So, imagine you have the equivalent of, let's say, a LAN party. You all show up with your laptops. STEPHANIE: [laughs] JOËL: You're on a network, and then you swap browser cookies randomly. STEPHANIE: [laughs] Oh no. That would be really funny. That's a developer's take on a cookie party [laughs] if I've ever heard one. JOËL: Slightly terrifying. Now I'm just browsing, and all of a sudden, I guess I'm logged into your Facebook or something. Maybe you only swap the tracking cookies. So, I'm not actually logged into your Facebook, but I just get to see the different ad networks it would typically show you, and you would see my ads. That's maybe kind of fun or maybe terrifying, depending on what kind of ads you normally see. STEPHANIE: That's really funny. I'm thinking about how it would just be probably very misleading and confusing for those [laughs] analytics spenders, but that's totally fine, too. Might I suggest also having real cookies to munch on as well while you are enjoying [laughs] this browser cookie-swapping party? JOËL: I 100% agree. STEPHANIE: [laughs] JOËL: I'm curious: where do you stand on raisins in oatmeal cookies? STEPHANIE: Ooh. JOËL: This is a divisive question. STEPHANIE: They're fine. I'll let other people eat them. And occasionally, I will also eat an oatmeal cookie with raisins, but I much prefer if the raisins are chocolate chips [chuckles]. JOËL: That is the correct answer. STEPHANIE: [laughs] Thank you. You know, I understand that people like them. They're not for me [laughs]. JOËL: It's okay. Fans can send us hate mail about why we're wrong about oatmeal cookies. STEPHANIE: Yeah, honestly, that's something that I'm okay with being wrong about on the internet [laughs]. So, Joël, what's new in your world? JOËL: So, as of this recording, we've just recently done thoughtbot's end-of-the-year hackathon, what we call Ralphapalooza. And this is sort of a time where you kind of get to do pretty much any sort of company or programming-related activity that you want as long as...you have to pitch it and get at least two other colleagues to join you on the project, and then you've got two days to work on it. And then you can share back to the team what you've done. I was on a project where we were trying to write a lot of blog posts for the thoughtbot blog. And so, we're just kind of getting together and pitching ideas, reviewing each other's articles, writing things at a pretty intense rate for a couple of days, trying to flood the blog with articles for the next few weeks. So, if you're following the blog and as the time this episode gets released, you're like, "Wow, there's been a lot of articles from the thoughtbot blog recently," that's why. STEPHANIE: Yes, that's awesome. I love how much energy that the blog post-writing party garnered. 
Like, I was just kind of observing from afar, but it sounds like, you know, people who maybe had started posts, like, throughout the year had dedicated time and a good reason to revisit them, even if they had been, you know, kind of just, like, sitting in a draft for a while. And I think what also seemed really nice was people were just around to support, to review, and were able to make that a priority. And it was really cool to see all the blog posts that are queued up for December as a result. JOËL: People wrote some great stuff. So, I'm excited to see all of those come out. I think we've got pretty much a blog post every day coming out through almost the end of December. So, it's exciting to see that much content created. STEPHANIE: Yeah. If our listeners want more thoughtbot content, check out our blog. JOËL: So, as mentioned, we're recording this at the end of the year. And I thought it might be fun to do a bit of a retrospective on what this year has been like for you and I, Stephanie, both in terms of different work that we've done, the learnings we've had, but maybe also look back a little bit on 2023 for The Bike Shed and what that looked like. STEPHANIE: Yes. I really enjoyed thinking about my year and kind of just reveling and having been doing this podcast for over a year now. And yeah, I'm excited to look back a little bit on both things we have mentioned on the show before and things maybe we haven't. To start, I'm wondering if you want to talk a little bit about some of our favorite episodes. JOËL: Favorite episodes, yes. So, I've got a couple that are among my favorites. We did a lot of good episodes this year. I really liked them. But I really appreciated the episode we did on heuristics, that's Episode 398, where we got to talk a little bit about what goes into a good heuristic, how we tend to come up with them. A lot of those, like, guidelines and best practices that you hear people talk about in the software world and how to make your own but then also how to deal with the ones you hear from others in the software community. So, I think that was an episode that the idea, on the surface, seemed really basic, and then we went pretty deep with it. And that was really fun. I think a second one that I really enjoyed was also the one that I did with Sara Jackson as a guest, talking about discrete math and its relevance to the day-to-day work that we do. That's Episode 374. We just had a lot of fun with that. I think that's a topic that more developers, more web developers, would benefit from just getting a little bit more discrete math in their lives. And also, there's a clip in there where Sara reinterprets a classic marketing jingle with some discrete math terms in there instead. It was a lot of fun. So, we'd recommend people checking that one out. STEPHANIE: Nice. Yes. I also loved those episodes. The heuristics one was really great. I'm glad you mentioned it because one of my favorite episodes is kind of along a similar vein. It's one of the more recent ones that we did. It's Episode 405, where we did a bit of a retro on Sandi Metz' Rules For Developers. And those essentially are heuristics, right? And we got to kind of be like, hey, these are someone else's heuristics. How do we feel about them? Have we embodied them ourselves? Do we follow them? What parts do we take or leave? And I just remember having a really enjoyable conversation with you about that. You and I have kind of treated this podcast a little bit like our own two-person book club [laughs]. 
So, it felt a little bit like that, right? Where we were kind of responding to, you know, something that we both have read up on, or tried, or whatever. So, that was a good one. Another one of my favorite episodes was Episode 391: Learn with APPL [laughs], in which we basically developed our own learning framework, or actually, credit goes to former Bike Shed host, Steph Viccari, who came up with this fun, little acronym to talk about different things that we all kind of need in our work lives to be fulfilled. Our APPL stands for Adventure, Passion, Profit, and Low risk. And that one was really fun just because it was, like, the opposite of what I just described where we're not discussing someone else's work but discovered our own thing out of, you know, these conversations that we have on the show, conversations we have with our co-workers. And yeah, I'm trying to make it a thing, so I'm plugging it again [laughs]. JOËL: I did really like that episode. One, I think, you know, this APPL framework is a little bit playful, which makes it fun. But also, I think digging into it really gives some insight on the different aspects that are relevant when planning out further growth or where you want to invest your sort of professional development time. And so, breaking down those four elements led to some really insightful conversation around where do I want to invest time learning in the next year? STEPHANIE: Yeah, absolutely. JOËL: By the way, we're mentioning a bunch of our favorite things, some past episodes, and we'll be talking about a lot of other types of resources. We will be linking all of these in the show notes. So, for any of our listeners who are like, "Oh, I wonder what is that thing they mentioned," there's going to be a giant list that you can check out. STEPHANIE: Yeah. I love whenever we are able to put out an episode with a long list of things [laughs]. JOËL: It's one of the fun things that we get to do is like, oh yeah, we referenced all these things. And there is this sort of, like, further reading, more threads to pull on for people who might be interested. So, you'd mentioned, Stephanie, that, you know, sometimes we kind of treat this as our own little mini, like, two-person book club. I know that you're a voracious reader, and you've mentioned so many books over the course of the year. Do you have maybe one or two books that have been kind of your favorites or that have stood out to you over 2023? STEPHANIE: I do. I went back through my reading list in preparation for this episode and wanted to call out the couple of books that I finished. And I think I have, you know, I mentioned I was reading them along the way. But now I get to kind of see how having read them influenced my work life this past year, which is pretty cool. So, one of them is Engineering Management for the Rest of Us by Sarah Drasner. And that's actually one that really stuck with me, even though I'm not a manager; I don't have any plans to become a manager. But one thing that she talks about early on is this idea of having a shared value system. And you can have that at the company level, right? You have your kind of corporate values. You can have that at the team level with this smaller group of people that you get to know better and kind of form relationships with. And then also, part of that is, like, knowing your individual values. And having alignment in all three of those tiers is really important in being a functioning and fulfilled team, I think. 
And that is something that I don't think was really spelled out very explicitly for me before, but it was helpful in framing, like, past work experiences, where maybe I, like, didn't have that alignment and now identify why. And it has helped me this year as I think about my client work, too, and kind of where I sit from that perspective and helps me realize like, oh, like, this is why I'm feeling this way, and this is why it's not quite working. And, like, what do I do about it now? So, I really enjoyed that. JOËL: Would you recommend this book to others who are maybe not considering a management path? STEPHANIE: Yeah. JOËL: So, even if you're staying in the IC track, at least for now, you think that's a really powerful book for other people. STEPHANIE: Yeah, I would say so. You know, maybe not, like, all of it, but there's definitely parts that, you know, she's writing for the rest of us, like, all of us maybe not necessarily natural born leaders who knew that that's kind of what we wanted. And so, I can see how people, you know, who are uncertain or maybe even, like, really clearly, like, "I don't think that's for me," being able to get something out of, like, either those lessons in leadership or just to feel a bit, like, validated [laughs] about the type of work that they aren't interested in. Another book that I want to plug real quick is Confident Ruby by Avdi Grimm. That one was one I referenced a lot this year, working with newer developers especially. And it actually provided a good heuristic [laughs] for me to talk about areas that we could improve code during code review. I think that wasn't really vocabulary that I'd used, you know, saying, like, "Hey, how confident is this code? How confident is this method and what it will receive and what it's returning?" And I remember, like, several conversations that I ended up having on my teams about, like, return types as a result and them having learned, like, a new way to view their code, and I thought that was really cool. JOËL: I mean, learning to deal with uncertainty and nil in Ruby or maybe even, like, error states is just such a core part of writing software. I feel like this is something that I almost wish everyone was sort of assigned maybe, like, a year into their programming career because, you know, I think the first year there's just so many things you've got to learn, right? Like basic programming and, like, all these things. But, like, you're looking maybe I can start going a little bit deeper into some topic. I think that some topic, like, pretty high up, would be building a mental model for how to deal with uncertainty because it's such a source of bugs. And Avdi Grimm's book, Confident Ruby, is...I would put that, yeah, definitely on a recommended reading list for everybody. STEPHANIE: Yeah, I agree. And I think that's why I found myself, you know, then recommending it to other people on my team and kind of having something I can point to. And that was really helpful in the kind of mentorship that I wanted to offer. JOËL: I did a deep dive into uncertainty and edge cases in programs several years back when I was getting into Elm. And I was giving a talk at Elm Europe about how Elm handles uncertainty, which is a little bit different than how Ruby does it. But a lot of the underlying concepts are very similar in terms of quarantining uncertainty and pushing it to the edges and things like that. Trying to write code that is more confident that is definitely a term that I used. 
And so Confident Ruby ended up being a little bit of an inspiration for my own journey there, and then, eventually, the talk that I gave that summarized my learnings there. STEPHANIE: Nice. Do you have any reading recommendations or books that stood out to you this year? JOËL: So, I've been reading two technical books kind of in tandem this year. I have not finished either of them, but I have been enjoying them. One is Sustainable Rails by David Bryant Copeland. We had an episode at the beginning of this year where we talked a little bit about our initial impressions from, I think, the first chapter of the book. But I really love that vocabulary of writing Ruby and Rails code, in particular, in a way that is sustainable for a team. And that premise, I think, just gives a really powerful mindset to approach structuring Rails apps. And the other book that I've been reading is Domain Modeling Made Functional, so kind of looking at some domain-driven design ideas. But most of the literature is typically written to an object-oriented audience, so taking a look at it from more of a functional programming perspective has been really interesting. And then I've been, weirdly enough, taking some of those ideas and translating them back into the object-oriented world to apply to code I'm writing in Ruby. I think that has been a very useful exercise. STEPHANIE: That's awesome. And it's weird and cool how all those things end up converging, right? And exploring different paradigms really just lets you develop more insight into wherever you're working. JOËL: Sometimes the sort of conversion step that you have to do, that translation, can be a good tool for kind of solidifying learnings or better understanding. So, I'm doing this sort of deep learning thing where I'm taking notes as I go along. And those notes are typically around, what other concepts can I connect to ideas in the book? So, I'll be reading and say, okay, on page 150, he mentioned this concept. This reminds me of this idea from TDD. I could see this applying in a different way in an object-oriented world. And interestingly, if you apply this, it sort of converges on maybe single responsibility or whatever other OO principle. And that's a really interesting connection. I always love it when you do see sort of two or three different angles converging together on the same idea. STEPHANIE: Yeah, absolutely. JOËL: I wrote a blog post, I think, two years ago around how some theory from functional programming, sort of OO best practices, and then TDD all kind of converge on sort of the same approach to designing software. So, you can sort of go from either direction, and you kind of end up in the same place or sort of end up rediscovering principles from the other two. We'll link that in the show notes. But that's something that I found was really exciting. It didn't directly come from this book because, again, I wrote this a couple of years ago. But it is always fun when you're exploring two or three different paradigms, and you find a convergence. It really deepens your understanding of what's happening. STEPHANIE: Yeah, absolutely. I like what you said about how this book is different because it is making that connection between things that maybe seem less related on the surface. Like you're saying, there's other literature written about how domain modeling and object-oriented programming make a little bit more sense together.
But it is that, like, bringing in of different schools of thought that can lead to a lot of really interesting discovery about those foundational concepts. JOËL: I feel like dabbling in other paradigms and in other languages has made me a better Ruby developer and a better OO programmer, especially a lot of the work I've done in Elm. This book that I'm reading is written in F#. And all these things I can kind of bring back, and I think, have made me a better Ruby developer. Have you had any experiences like that? STEPHANIE: Yeah. I think I've talked a little bit about it on the show before, but I can't exactly recall. There were times when my exploration of static typing ended up giving me that different mindset: the next time I was coding in Ruby after being in TypeScript for a while, I was, like, thinking in types a lot more, and I think maybe swung a little bit towards, like, not wanting to metaprogram as much [laughs]. But I think that it was a useful, like you said, exercise sometimes, too, just, like, doing that conversion or translating in your head to see more options available to you, and then deciding where to go from there. So, we've talked a bit about technical books that we've read. And now I kind of want to get into some in-person highlights for the year because you and I are both on the conference circuit and had some fun trips this year. JOËL: Yeah. So, I spoke at RailsConf this spring. I gave a talk on discrete math and how it is relevant in day-to-day work for developers, actually inspired by that Bike Shed episode that I mentioned earlier. So, that was kind of fun, turning a Bike Shed episode into a conference talk. And then just recently, I was at RubyConf in San Diego, and I gave a talk there around time. We often talk about time as a single quantity, but there are some subtle distinctions, so the difference between a moment in time versus a duration and some of the math that happens around that. And I gave a few sort of visual mental models to help people keep track of that. As of this recording, the talk is not out yet, so we're not going to be able to link to it. But if you're listening to this later in 2024, you can probably just Google RubyConf "Which Time Is It?" That's the name of the talk. And you'll be able to find it. STEPHANIE: Awesome. So, as someone who is giving talks and attending conferences every year, I'm wondering, was this year particularly different in any way? Was there something that you've, like, experienced or felt differently community-wise in 2023? JOËL: Conferences still feel a little bit smaller than they were pre-COVID. I think they are still bouncing back. But there's definitely an energy that's there that's nice to have on the conference scene. I don't know, have you experienced something similar? STEPHANIE: I think I know what you're talking about where, you know, there was that time when we weren't really meeting in person. And so, now we're still kind of riding that wave of, like, getting together again and being able to celebrate and have fun in that way. I, this year, got to speak at Blue Ridge Ruby in June. And that was a first-time regional conference. And so, that was, I think, something I had noticed, too: the emergence of regional conferences as being more viable options after not having conferences for a few years. And as a regional conference, it was even smaller than the bigger national Ruby Central conferences. I really enjoyed the intimacy of that, where it was just a single track.
So, everyone was watching talks together and then was on breaks together, so you could mingle. There was no FOMO of like, oh, like, I can't make this talk because I want to watch this other one. And that was kind of nice because I could, like, ask anyone, "What did you think of, like, X talk or, like, the one that we just kind of came out of?" and have that shared experience. That was really great. And I got to go tubing for the first time [laughs] in Asheville. That's a memory, but I am still thinking about that as we get into winter. I'm like, oh yeah, the glorious days of summer [laughs] when I was getting to float down a lazy river. JOËL: Nice. I wasn't sure if this was floating down a lazy river on an inner tube or if this was someone taking you out on a lake with a speedboat, and you're getting pulled. STEPHANIE: [laughs] That's true. As a person who likes to relax [laughs], I definitely prefer that kind of tubing over a speedboat [laughs]. JOËL: What was the topic of your talk? STEPHANIE: So, I got to give my talk about nonviolent communication in pair programming for a second time. And that was also my first time giving a talk for a second time [laughs]. That was cool, too, because I got to revisit something and go deeper and kind of integrate even more experiences I had. I just kind of realized that even if you produce content once, like, there's always ways to deepen it or shape it a little better, kind of, you know, just continually improving it as you learn more and as you get more experience and change. JOËL: Yeah. I've never given a talk twice, and now you've got me wondering if that's something I should do. Because making a bespoke talk for every conference is a lot of work, and it might be nice to be able to use it more than once. Especially I think for some of the regional conferences, there might be some value there for people who might not be able to go to a big national conference but would still like to see your talk live. Having a mix of maybe original content and then content that is sort of being reshared is probably a great combo for a regional conference. STEPHANIE: Yeah, definitely. That's actually a really good idea, yeah, to just be able to have more people see that content and access it. I like that a lot. And I think it could be really cool for you because we were just talking about all the ways that our mental models evolve the more stuff that we read and consume. And I think there's a lot of value there. One other conference that I went to this year that I just want to highlight because it was really cool that I got to do this: I went to RubyKaigi in Japan [laughs] back in the spring. And I had never gone to an international conference before, and now I'm itching to do more of that. So, it would be remiss not to mention it [laughs]. I'm definitely inspired to maybe check out some of the conferences outside of the U.S. in 2024. I think I had always been a little intimidated. I was like, oh, like, it's so far [laughs]. Do I really have, like, that good of a reason to make a trip out there? But being able to meet Rubyists from different countries and seeing how Ruby is being used in other parts of the world, I think, made me realize that like, oh yeah, like, beyond my little bubble, there's so many cool things happening and people out there who, again, like, have that shared love of Ruby. And connecting with them was, yeah, just so new and something that I would want to do more of.
So, another thing that we haven't yet gotten into is our actual work-work or our client work [laughs] that we do at thoughtbot for this year. Joël, I'm wondering, was there anything especially fun or anything that really stood out to you in terms of client work that you had to do this year? JOËL: So, two things come to mind that were novel for me. One is I did a Rails integration against Snowflake, the data warehouse, using an ODBC connection. We're not going through an API; we're going through this DB connection. And I never had to do that before. I also got to work with the new-ish Rails multi-database support, which actually worked quite nicely. That was, I think, a great learning experience. Definitely ran into some weird edge cases, and some days, I was really frustrated. Some days, I was actually, like, digging into the source code of the C bindings of the ODBC gem. Those were not the best days. But definitely, I think, that kind of integration and then Snowflake as a technology was really interesting to explore. The other one that's been really interesting, I think, has been going much deeper into the single sign-on world. I've been doing an integration against a kind of enterprise SAML server that wants to initiate sign-in requests from their portal. And this is a bit of an alphabet soup, but the term here is IdP-initiated SSO. And so, I've been working with...it's a combination of this third-party kind of corporate SAML system, our application, which is a Rails app, and then Auth0 kind of sitting in the middle and getting all of them to talk to each other. There's a ridiculous number of redirects because we're talking SAML on one side and OIDC on the other and getting everything to line up correctly. But that's been a really fun, new set of things to learn. STEPHANIE: Yeah, that does sound complicated [laughs] just based on what you shared with me, but very cool. And I was excited to hear that you had had a good experience with the Rails multi-database part because that was another thing that I remember being...it had piqued my interest when it first came out. I hope I get to, you know, utilize that feature on a project soon because that sounds really fun. JOËL: One thing I've had to do for this SSO project is lean a lot on sequence diagrams, which are those diagrams that sort of show you, like, being redirected from different places, and, like, okay, server one talks to server two, which talks to the browser. And so, when I've got so many different actors and sort of controllers being passed around everywhere, it's been hard to keep track of it in my head. And so, I've been doing a lot of these diagrams, both for myself to help understand it during development, and then also as documentation to share back with the team. And I found that Mermaid.js supports sequence diagrams as a diagram type. Long-term listeners of the show will know that I am a sucker for a good diagram. I love using Mermaid for a lot of things because it's so widely supported. You can embed it in a lot of places, including GitHub comments and pull requests. You can use it in various note systems like Notion or Obsidian. And you can also just generate your own on mermaid.live. And so, that's been really helpful to communicate with the rest of the team, like, "Hey, we've got this whole process where we've got 14 redirects across four different servers. Here's what it looks like. And here, like, we're getting a bug on, you know, redirect number 8 of 14. I wonder why," and then you can start a conversation around debugging that.
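For anyone curious what the Rails multi-database support mentioned above can look like in practice, here is a minimal, hypothetical sketch; the :snowflake connection name and the model names are assumptions for illustration, and the config/database.yml and ODBC adapter details are left out entirely.

# app/models/snowflake_record.rb
# Abstract base class so warehouse-backed models use their own connection
# instead of ApplicationRecord's primary database.
class SnowflakeRecord < ApplicationRecord
  self.abstract_class = true

  # :snowflake is assumed to match an entry in config/database.yml.
  connects_to database: { writing: :snowflake, reading: :snowflake }
end

# app/models/warehouse_event.rb (hypothetical model reading from the warehouse)
class WarehouseEvent < SnowflakeRecord
  self.table_name = "events"
end

Everything else in the app keeps inheriting from ApplicationRecord and the primary database, which is part of why the multi-database support tends to stay out of the way once it's set up.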
STEPHANIE: Cool. I was just about to ask what tool you're using to generate your sequence diagrams. I didn't know that Mermaid supported them. So, that's really neat. JOËL: So, last year, when we kind of looked back over 2022, one thing that was really interesting that we did is we talked about what are articles that you find yourself linking to a lot that are just kind of things that maybe were on your mind or that were a big part of conversations that happened over the year? So, maybe for you, Stephanie, in 2023, what are one or two articles that you find yourself sort of constantly linking to other people? STEPHANIE: Yes. I'm excited you asked about this. One of them is an article by a person named Cat Hicks, who has a PhD in experimental psychology. She's a data scientist and social scientist. And lately, she's been doing a lot of research into the sense of belonging on software teams. And I think that's a theme that I am personally really interested in, and I think has kind of been something more people are talking about in the last few years. And she is kind of taking that maybe more squishy idea and getting numbers for it and getting statistics, and I think that's really cool. She points out belonging as, like, a different experience from just, like, happiness and fulfillment, and that it really has an impact on how well a team is functioning. I got to share this with a few people who were, you know, just in that same boat of, like, trying to figure out, what are the behaviors kind of on my team that make me feel supported or not supported? And there were a lot of interesting discussions that came out of sharing this article and kind of talking about, especially in software, where we can be a little bit dogmatic. And we've kind of actually joked about it on the podcast [chuckles] before about, like, we TDD or don't TDD, or, you know, we use X tool, and that's just like what we have to do here. She writes a little bit about how that can end up, you know, not encouraging people to offer, like, differing opinions and being able to feel like they have a say in kind of, like, the team's direction. And yeah, I just really enjoyed a different way of thinking about it. Joël, what about you? What are some articles you got bookmarked? [chuckles] JOËL: This year, I started using a bookmark manager, Raindrop.io. That's been nice because, for this episode, I could just look back on, what are some of my bookmarks this year? And be like, oh yeah, this is the thing that I have been using a lot. So, an article that I've been linking is an article called Preemptive Pluralization is (Probably) Not Evil. And it kind of talks a little bit about how going from code that works over a collection of two items to a collection of, you know, 20 items is very easy. But sometimes, going from one to two can be really challenging. And when are the times where you might want to preemptively make something more than one item? So, maybe using a has_many association rather than a has_one, or making an attribute a collection rather than a single item. Controversial is not the word for it, but I think it challenges a little bit of the way people typically like to write code. But across this year, I've run into multiple projects where they have been transitioning from one to many. That's been an interesting article to surface as part of those conversations.
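As a rough, hypothetical sketch of what that preemptive pluralization can look like in a Rails schema: model the association as has_many from day one, even if the product only exposes one record today, because collapsing a plural association down to "just take the first one" is much easier than migrating singular data into a plural shape later. The User and Address models below are made up purely for illustration.

# Hypothetical models, named only for illustration.
class User < ApplicationRecord
  has_many :addresses
end

class Address < ApplicationRecord
  belongs_to :user
end

# Callers that only care about one record can still treat it as singular:
#   user.addresses.first

Whether this trade-off is worth it for a given attribute is exactly the team conversation described next.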
Whether your team wants to do this preemptively or whether they want to put it off and say in classic YAGNI (You Aren't Gonna Need It) form, "We'll make it single for now, and then we'll go plural," that's a conversation for your team. But I think this article is a great way to maybe frame the conversation. STEPHANIE: Cool. Yeah, I really like that as almost, like, a counterpoint to YAGNI [laughs], which I don't think I've ever heard anyone say out loud [laughs] before. But as soon as you said preemptive pluralization is not evil, I thought about all the times that I've had to, like, write code or text in which a thing, a variable, could be either one or many [laughs] things. And I was like, ooh, maybe this will solve that problem for me [laughs]. JOËL: Speaking of pluralization, I'm sure you've been linking to more than just one article this year. Do you have another one that you find coming up in conversations, where you're always kind of like, "Hey, dropping this link," where it's almost like your thing? STEPHANIE: Yes. And that is basically everything written by Mandy Brown [laughs], who is a work coach that I actually started working with this year. And one of the articles that really inspired me or really has been a topic of conversation among my friends and co-workers is she has a blog post called Digging Through the Ashes. And it's kind of a meditation on, like, post-burnout or, like, what's next, and how we have used this word as kind of a catch-all to describe, you know, this collective sense of being just really tired or demoralized or just, like, in need of a break. And what she offers in that post is kind of, like, some suggestions about, like, how can we be more specific here and really, you know, identify what it is that you're needing so that you can change how you engage with work? Because burnout can mean just that you are bored. It can mean that you are overworked. It can mean a lot of things for different people, right? And so, I definitely don't think I'm alone [laughs] in kind of having to realize that, like, oh, these are the ways that my work is or isn't changing and, like, where do I want to go next so that I might feel more sustainable? I know that's, like, a keyword that we talked about earlier, too. And that, on one hand, is both personal and also technical, right? It, like, informs the kinds of decisions that we make around our codebase and what we are optimizing for. And yeah, it is both technical and cultural. And it's been a big theme for me this year [laughs]. JOËL: Yeah. Would you say it's safe to say that sustainability would be, if you want to, like, put a single word on your theme for the year? Would that be a fair word to put there? STEPHANIE: Yeah, I think so. Definitely discovering what that means for me and helping other people discover what that means for them, too. JOËL: I feel like we kicked off the year 2023 by having that discussion of Sustainable Rails and how different technical practices can make the work there feel sustainable. So, I think that seems to have really carried through as a theme through the year for you. So, that's really cool to have seen that. And I'm sure listeners throughout the year have heard you mention these different books and articles. Maybe you've also been able to pick up a little bit on that. So, I'm glad that we do this show because you get a little bit of, like, all the bits and pieces in the day-to-day, and then we aggregate it over a year, and you can look back.
You can be like, "Oh yeah, I definitely see that theme in your work." STEPHANIE: Yeah, I'm glad you pointed that out. It is actually really interesting to see how something that we had talked about early, early on just had that thread throughout the year. And speaking of sustainability, we are taking a little break from the show to enjoy the holidays. We'll be off for a few weeks, and we will be back with a new Bike Shed in January. JOËL: Cheers to a new year. STEPHANIE: Yeah, cheers to a new year. Wrapping up 2023. And we will see you all in 2024. JOËL: On that note, shall we wrap up the whole year? STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm. JOËL: This show has been produced and edited by Mandy Moore. STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show. JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email. JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/ referral. Or you can email us at referrals@thoughtbot.com with any questions.